
CHIN. PHYS. LETT. Vol. 29, No. 9 (2012) 090501
Non-identical Neural Network Synchronization Study Based on an Adaptive Learning Rule of Synapses*

YAN Chuan-Kui(严传魁)^{1,2}, WANG Ru-Bin(王如彬)^{1**}

^{1} Institute for Cognitive Neurodynamics, School of Information Science and Engineering, Department of Mathematics, School of Science, East China University of Science and Technology, Shanghai 200237
^{2} Department of Mathematics, School of Science, Hangzhou Normal University, Hangzhou 310036
(Received 28 November 2011)
An adaptive learning rule of synapses is proposed for general asymmetric non-identical neural networks, and its feasibility is proved by the LaSalle principle. Numerical simulation results show that the synaptic connection weights converge to appropriate strengths and that an identical network comes to synchronization. Furthermore, with this learning approach a non-identical neural population can still reach synchronization, which means that the learning rule is robust to parameter mismatch. The firing rhythm of the neural population depends entirely on its topological properties, which advances our understanding of neuron population activities.
PACS: 05.45.−a, 05.45.Tp
DOI: 10.1088/0256-307X/29/9/090501
With the diverse applications of synchronization in engineering, network synchronization studies have advanced rapidly.[1-5] Synchronization of a system can be achieved by controlling some of its state variables. Moreover, biophysical experiments have found that synchronous discharges of neurons also exist in the brain,[6,7] and this firing pattern may be relevant to certain functions of the brain. Synchronization studies of nervous systems began with two coupled neurons and then moved on to networks. The topological structures considered so far are mainly regular networks, such as chains, grids and rings.[3-5] Furthermore, small-world networks have also been used to simulate the nervous system.[8-10] However, these small-world networks differ from real nervous systems because they retain too much regularity: the synaptic connections of a real nervous system are more random.
Another question is how synchronization is reached. For a neural network, we discuss here the transition process of the dynamic system to the synchronization orbit. In studies of engineering dynamic systems, synchronization is attained by external control of some system state variables,[11-13] but external control is not a necessity for synchronization of the nervous system: owing to the plasticity of the synapses between neurons, synchronization can be achieved by synaptic adaptive learning. We propose an adaptive learning rule of synapses to simulate spike-timing-dependent plasticity, and demonstrate the feasibility of the algorithm by theoretical proof and simulation.
Consider a neural network consisting of $n$ neurons with topology connection matrix $A = (a_{ij}) \in R^{n \times n}$, where $a_{ij} = 1$ when there is a synapse from the $j$th neuron to the $i$th neuron, $a_{ij} = 0$ when there is no connection, and $a_{ij} = 0$ for $i = j$. Unlike in a general dynamic system, synaptic connection in a neural network is asymmetrical, so $a_{ij}$ is not necessarily equal to $a_{ji}$. The matrix $C(t) = (c_{ij}(t)) \in R^{n \times n}$ denotes the strength of the synaptic connections.
Synchronization of a neural network means membrane potential synchronization. We assume that the first state variable of the dynamic system is the membrane potential. The single neuron model is then $\dot X^i = (f_1(X^i), f_2(X^i), f_3(X^i), \ldots, f_m(X^i))^T$, where $X^i = (x_{i1}, x_{i2}, \ldots, x_{im})^T$ is the state variable and $x_{i1}$ is the membrane potential. Furthermore, the model of the network is
$$\dot X^i = \Big( f_1(X^i) + \sum_{j=1}^{n} a_{ij} c_{ij}(t)(x_{j1} - x_{i1}),\ f_2(X^i),\ f_3(X^i),\ \ldots,\ f_m(X^i) \Big)^T, \eqno(1)$$
where $X = (X^1, X^2, \ldots, X^n) \in R^{m \times n}$ is the state of the neural system.
The adaptive learning rule of synapses is designed, for neuron $i$ and neuron $j$, as
$$\dot c_{ij}(t) = \eta P_{ij}(t)(x_{j1} - x_{i1})^2,$$
where the synaptic learning probability is
$$P_{ij}(t) = \exp\Big( -\frac{|x_{j1} - x_{i1}|}{T(t) + \varepsilon} \Big), \qquad T(t) = \frac{T_0}{1 + \alpha t}.$$
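For concreteness, the rule can be written as a discrete-time update. The Python sketch below uses the values $\eta = 0.07$, $T_0 = 10000$ and $\alpha = 0.7$ given later in the text; the Euler step and the value of $\varepsilon$ (not listed in the paper) are our assumptions, and the function names are illustrative only.

```python
import numpy as np

def learning_probability(v_pre, v_post, t, T0=10000.0, alpha=0.7, eps=0.02):
    """P_ij(t) = exp(-|x_j1 - x_i1| / (T(t) + eps)), T(t) = T0 / (1 + alpha*t).

    The annealing schedule T(t) shrinks toward zero, so late in learning the
    probability is large only when the two membrane potentials are close.
    eps is an assumed value; the paper does not specify it.
    """
    T = T0 / (1.0 + alpha * t)
    return np.exp(-np.abs(v_pre - v_post) / (T + eps))

def weight_update(c_ij, v_pre, v_post, t, dt, eta=0.07):
    """One assumed Euler step of c_ij' = eta * P_ij(t) * (x_j1 - x_i1)^2."""
    p = learning_probability(v_pre, v_post, t)
    return c_ij + dt * eta * p * (v_pre - v_post) ** 2
```

Because the increment is proportional to the squared potential difference, the weights grow only while the pre- and postsynaptic potentials disagree and freeze once they coincide, which is consistent with the invariant set obtained in the proof below.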
The LaSalle principle is now used to prove the feasibility of synchronizing an asymmetric irregular neural network by this adaptive algorithm.

Theorem 1 (LaSalle):[14] Consider a dynamic system $\dot X = f(X)$, where $f: R^n \to R^n$ is a $C^1$ continuous map. Suppose there exists a function $V(X): R^n \to R^+$ with $\dot V(X) \le 0$ for each $X \in R^n$, and define $E := \{X \in R^n : \dot V(X) = 0\}$. If $B$ is the largest invariant set in $E$, then every bounded solution converges to $B$ as $t \to +\infty$.
*Supported by the National Natural Science Foundation of China under Grant No 10872068, the Fundamental Research Funds for the Central Universities, the Youth Cultivating Foundation of Hangzhou Normal University under Grant No 2010QN02, and the Key Program of the National Natural Science Foundation of China under Grant No 11232005 (on neurodynamics research and experimental analysis of perceptual cognition and decision making).
**Corresponding author. Email: [email protected]
© 2012 Chinese Physical Society and IOP Publishing Ltd

Hypothesis 1: $f_1(X^i)$ satisfies the Lipschitz condition with respect to the first state variable (the membrane potential),
$$\|f_1(X^i) - f_1(X^j)\| \le L \|x_{i1} - x_{j1}\|. \eqno(2)$$
For a general neural network with $n$ neurons, we study the error dynamic system
$$e_{ij} = x_{i1} - x_{j1}, \quad \dot e_{ij} = \dot x_{i1} - \dot x_{j1} = f_1(X^i) - f_1(X^j) + \sum_{k=1}^{n} a_{ik} c_{ik}(t) e_{ki} - \sum_{k=1}^{n} a_{jk} c_{jk}(t) e_{kj}, \quad i, j = 1, 2, \ldots, n. \eqno(3)$$

Theorem 2: Let $B = \{(e_{ij}, c_{ij}) \in R^{2n^2} : e_{ij} = 0,\ c_{ij} = \hat c_{ij},\ i, j = 1, 2, \ldots, n\}$. For the error dynamic system, the orbit will converge to $B$ from any initial state; that is, $e_{ij} = x_{i1} - x_{j1} \to 0$ and $c_{ij} \to \hat c_{ij}$ as $t \to +\infty$.

Proof: For the error system, we define the Lyapunov function
$$V(t) = \sum_{i,j=1}^{n} V_{ij}(t) = \sum_{i,j=1}^{n} \Big\{ \frac{1}{2} e_{ij}^2 + \frac{1}{2\eta} \Big( \sum_{j=1}^{n} c_{ij} - \frac{M}{\delta} \Big)^2 + \frac{1}{2\eta} \Big( \sum_{i=1}^{n} c_{ji} - \frac{M}{\delta} \Big)^2 \Big\} \ge 0,$$
where $\delta = e^{-1/\varepsilon}$ and $c_0 = \max_{i,j} \{ \lim_{t \to +\infty} c_{ij}(t) \}$; the constant $M$ is subject to $M \ge 2 c_0 n^2 + L$.

Then, with $\dot c_{ij}(t) = \eta P_{ij}(t)(x_{j1} - x_{i1})^2 = \eta P_{ij}(t) e_{ij}^2$,
$$\frac{dV_{ij}(t)}{dt} = e_{ij} \dot e_{ij} + \frac{1}{\eta} \Big( \sum_{j=1}^{n} c_{ij} - \frac{M}{\delta} \Big) \sum_{j=1}^{n} \dot c_{ij} + \frac{1}{\eta} \Big( \sum_{i=1}^{n} c_{ji} - \frac{M}{\delta} \Big) \sum_{i=1}^{n} \dot c_{ji}$$
$$= e_{ij}\big(f_1(X^i) - f_1(X^j)\big) + e_{ij} \sum_{k=1}^{n} a_{ik} c_{ik} e_{ki} - e_{ij} \sum_{k=1}^{n} a_{jk} c_{jk} e_{kj} + \frac{1}{\eta} \sum_{j=1}^{n} c_{ij} \sum_{j=1}^{n} \eta P_{ij}(t) e_{ij}^2 - \frac{1}{\eta} \frac{M}{\delta} \sum_{j=1}^{n} \eta P_{ij}(t) e_{ij}^2 + \frac{1}{\eta} \sum_{i=1}^{n} c_{ji} \sum_{i=1}^{n} \eta P_{ji}(t) e_{ij}^2 - \frac{1}{\eta} \frac{M}{\delta} \sum_{i=1}^{n} \eta P_{ji}(t) e_{ij}^2.$$

For the coupling terms we can obtain
$$\Big| e_{ij} \sum_{k=1}^{n} a_{ik} c_{ik} e_{ki} \Big| \le |e_{ij}| \big( |c_{i1} e_{1i}| + |c_{i2} e_{2i}| + \cdots + |c_{in} e_{ni}| \big) \le c_{i1} \frac{e_{ij}^2 + e_{1i}^2}{2} + c_{i2} \frac{e_{ij}^2 + e_{2i}^2}{2} + \cdots + c_{in} \frac{e_{ij}^2 + e_{ni}^2}{2}$$
$$= \frac{1}{2} e_{ij}^2 \sum_{k=1}^{n} c_{ik} + \frac{1}{2} \sum_{k=1}^{n} c_{ik} e_{ki}^2 \le \frac{1}{2} e_{ij}^2 \sum_{k=1}^{n} c_{ik} + \frac{1}{2} \sum_{k=1}^{n} c_{ik} \sum_{k=1}^{n} e_{ki}^2,$$
and similarly for the second coupling term. Therefore, using Hypothesis 1, $e_{ij}(f_1(X^i) - f_1(X^j)) \le L e_{ij}^2$, and noting $\delta \le P_{ij}(t) \le 1$,
$$\frac{dV_{ij}(t)}{dt} \le \Big( L + 2 \sum_{j=1}^{n} c_{ij} - M \Big) \sum_{j=1}^{n} e_{ij}^2 + \Big( 2 \sum_{i=1}^{n} c_{ji} - M \Big) \sum_{i=1}^{n} e_{ij}^2 \le 0, \eqno(4)$$
since $\sum_{j=1}^{n} c_{ij} \le n c_0$ and $M \ge 2 c_0 n^2 + L$. We can obtain
$$\frac{dV(t)}{dt} = \sum_{i,j=1}^{n} \frac{dV_{ij}(t)}{dt} \le 0.$$

Then $\dot V(t) = 0$ if and only if $e_{ij} = 0$, $i, j = 1, 2, \ldots, n$. That is to say, $B$ is the largest invariant set in $E$. By Theorem 1, the orbit will converge to $B$ from any initial state, that is, $e_{ij} = x_{i1} - x_{j1} \to 0$ and $c_{ij} \to \hat c_{ij}$ as $t \to +\infty$.
From the proof procedure, we find that this adaptive synchronization approach can be applied to large-scale irregular networks, in particular asymmetric ones. Owing to the directionality and asymmetry of the synaptic connections in a neural network, a modified NW small-world network is used to simulate the nervous system: we connect the nodes randomly with unidirectional links instead of the original symmetric coupling.
In this network, the Morris-Lecar (ML) model is used for the single neuron. The dynamics model of the neural network reads
$$\dot V^i = f_1(V^i, W^i, I^i) + \sum_{j=1}^{n} a_{ij} c_{ij}(t)(V^j - V^i),$$
$$\dot W^i = \lambda(V^i)\big(W_\infty(V^i) - W^i\big),$$
$$\dot I^i = \mu(0.2 + V^i),$$
$$\dot c_{ij}(t) = \eta P_{ij}(t)(V^j - V^i)^2, \qquad i = 1, 2, 3, \ldots, n, \eqno(5)$$
with
$$m_\infty(V) = 0.5\Big(1 + \tanh\frac{V - V_a}{V_b}\Big), \quad W_\infty(V) = 0.5\Big(1 + \tanh\frac{V - V_c}{V_d}\Big), \quad \lambda(V) = \frac{1}{3}\cosh\frac{V - V_c}{2V_d},$$
$$f_1(V^i, W^i, I^i) = g_{\rm Ca} m_\infty (V_{\rm Ca} - V^i) + g_K W^i (V_K - V^i) + g_L (V_L - V^i) - I^i,$$
where $V^i$ indicates the membrane potential, $W^i$ the activation probability of the potassium ion channels, $A = (a_{ij})_{n \times n}$ the connection matrix, and $C(t) = (c_{ij}(t))_{n \times n}$ the strength matrix of the synaptic connections.

The other parameters are chosen as follows: $g_{\rm Ca} = 1.2\ \mu{\rm S/cm^2}$, $g_K = 2.0\ \mu{\rm S/cm^2}$, $g_L = 0.5\ \mu{\rm S/cm^2}$, $\mu = 0.005$, $V_K = -1.1$ mV, $V_L = -0.5$ mV, $V_a = -0.01$ mV, $V_b = 0.15$ mV, $V_c = 0.1$ mV, $V_d = 0.05$ mV, $\eta = 0.07$, $T_0 = 10000$, $\alpha = 0.7$.
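Combining Eq. (5) with the learning rule, a bare-bones simulation of the adaptive network might look as follows. The conductances and gating parameters are the values listed above; the integration scheme, step size, initial conditions and $\varepsilon$ are our assumptions, and simulate() is an illustrative name rather than the authors' code.

```python
import numpy as np

GCA, GK, GL = 1.2, 2.0, 0.5                 # conductances (uS/cm^2)
VK, VL = -1.1, -0.5                         # reversal potentials (mV)
VA, VB, VC, VD = -0.01, 0.15, 0.1, 0.05     # gating parameters (mV)
MU, ETA, T0, ALPHA = 0.005, 0.07, 10000.0, 0.7
EPS = 0.02                                  # assumed; not listed in the paper

def simulate(a, v_ca, t_end=4000.0, dt=0.01, rng=None):
    """Assumed Euler integration of the adaptive ML network, Eq. (5)."""
    rng = np.random.default_rng(rng)
    n = a.shape[0]
    v_ca = np.broadcast_to(np.asarray(v_ca, dtype=float), (n,))
    v = rng.uniform(-0.4, 0.4, n)           # assumed initial conditions
    w = rng.uniform(0.0, 0.2, n)
    i_ext = np.full(n, -0.075)
    c = np.zeros((n, n))                    # synaptic strengths start at zero
    for step in range(int(t_end / dt)):
        t = step * dt
        m_inf = 0.5 * (1.0 + np.tanh((v - VA) / VB))
        w_inf = 0.5 * (1.0 + np.tanh((v - VC) / VD))
        lam = np.cosh((v - VC) / (2.0 * VD)) / 3.0
        f1 = GCA * m_inf * (v_ca - v) + GK * w * (VK - v) + GL * (VL - v) - i_ext
        ac = a * c
        coupling = ac @ v - ac.sum(axis=1) * v     # sum_j a_ij c_ij (V_j - V_i)
        diff = v[None, :] - v[:, None]             # diff[i, j] = V_j - V_i
        p = np.exp(-np.abs(diff) / (T0 / (1.0 + ALPHA * t) + EPS))
        dv = f1 + coupling
        dw = lam * (w_inf - w)
        di = MU * (0.2 + v)
        c += dt * ETA * a * p * diff ** 2          # adaptive learning rule
        v += dt * dv
        w += dt * dw
        i_ext += dt * di
    return v, c
```

Calling, e.g., simulate(modified_nw_network(), 0.855) corresponds to the identical-network setting of Fig. 1, with all neurons sharing $V_{\rm Ca} = 0.855$.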
[Fig. 1 here: panels (a)-(d); axes $V_1, V_9$ (mV), $e_{19}$ (mV) and $C_{19}$ (mS/cm$^2$) versus $t$ (ms), and $V_1$ (mV) versus $I_1$ (nA).]
Fig. 1. (a) The discharge of the first and the ninth neurons; (b) the error system; (c) synaptic connection weight modification under adaptive learning; (d) neural network synchronization to the period 4 orbit; for $V_{\rm Ca} = 0.855$.
In fact, as long as $\partial f_1(V, W, I)/\partial V$ is bounded, Hypothesis 1 is satisfied, because
$$|f_1(V^i, W, I) - f_1(V^j, W, I)| = \Big| \frac{\partial f_1(V, W, I)}{\partial V}\Big|_{V = V_\varepsilon} (V^i - V^j) \Big| \le \Big| \frac{\partial f_1(V, W, I)}{\partial V}\Big|_{V = V_\varepsilon} \Big| \, |V^i - V^j| \le L |V^i - V^j|. \eqno(6)$$
Since $|\partial f_1(V, W, I)/\partial V| = |{-g_{\rm Ca} m_\infty - g_K W - g_L}| \le L$ (with $m_\infty, W \in [0, 1]$, one may take $L = g_{\rm Ca} + g_K + g_L$), Hypothesis 1 is met in this model. Without loss of generality, the number of neurons is 20 and the synaptic connection probability is 0.2.

In the neural network, neurons can come to synchronization, via the synaptic adaptive learning rule we propose, in order to transmit information. When $V_{\rm Ca} = 0.855$, with the learning of synapses, all neurons in the network start with asynchronous bursting and finally reach synchronization to period 4 firing (Figs. 1(a) and 1(d)). The error of the membrane potentials between any two neurons tends to zero (Fig. 1(b)).

The synaptic connection strength gradually grows and tends to a certain value (Fig. 1(c)). Each synaptic connection tends to a different value, decided by the topological structure of the neural network and the initial states of the neurons. The simulation results indicate that the strength rises with time in a ladder-like fashion, between plateaus with no obvious increase. Compared with the discharge of the neurons, the rising sections correspond to asynchronous firing of the presynaptic and postsynaptic neurons, while the plateau stages correspond to presynaptic and postsynaptic neurons in subthreshold activity or synchronous firing. This result agrees with the experimental observations.[15]
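The pairwise errors shown in Fig. 1(b) can be monitored with a simple measure; a minimal helper, with our own naming:

```python
import numpy as np

def sync_error(v):
    """Largest pairwise membrane-potential error max_ij |e_ij| = |V_i - V_j|.

    Tends to zero exactly when all membrane potentials coincide, i.e. when
    the trajectory has reached the invariant set B of Theorem 2.
    """
    return float(np.max(v) - np.min(v))   # equals the largest pairwise gap
```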
[Fig. 2 here: panels (a1)-(a3) show membrane-potential traces labelled original orbit, transfer orbit and synchronization orbit; panel (b) shows the ratio of neurons versus $V_{\rm Ca}$.]
Fig. 2. Synchronous orbit transition of the non-identical neural network, $\mu_{\rm Ca} = 0.875$, $\sigma_{\rm Ca} = 0.03$: (a) neurons with period 2, period 1 and chaotic orbits synchronize to the period 2 orbit; (b) the distribution of $V_{\rm Ca}$.
[Fig. 3 here: panels (a1)-(a3) show membrane-potential traces labelled original orbit and synchronization orbit; panel (b) shows the period of the synchronous orbit versus $\mu_{\rm Ca}$.]
Fig. 3. Period 2 orbit transitions under different topological properties: (a) the period 2 orbit under different $\mu_{\rm Ca}$ can transfer to a chaotic orbit ($\mu_{\rm Ca} = 0.84$), a period 2 orbit ($\mu_{\rm Ca} = 0.875$), or a period 1 orbit ($\mu_{\rm Ca} = 0.925$); (b) the period of the synchronous orbit versus $\mu_{\rm Ca}$.
The results show that, after learning, neurons of the network that start in different phases finally synchronize to the period 4 orbit with the same phase. However, considering the differences between neurons, not all neurons in a network share the same firing rhythm, such as period 4. Whether the synchronous orbit of the network can transfer to an orbit of another period is therefore the next question.
To simulate the non-identity of neurons, we take all parameters in the network to be random variables subject to normal distributions. In particular, different values of the random parameter $V_{\rm Ca} \sim N(\mu_{\rm Ca}, \sigma_{\rm Ca})$ place the neurons of the network in all kinds of firing rhythms (Fig. 2(b)), which is clearly in line with the firing features of the biological nervous system.
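In this setting each neuron draws its own $V_{\rm Ca}$; a sketch reusing simulate() from above, reading $N(\mu_{\rm Ca}, \sigma_{\rm Ca})$ as mean and standard deviation:

```python
import numpy as np

def sample_vca(n=20, mu_ca=0.875, sigma_ca=0.03, rng=None):
    """Per-neuron calcium reversal potentials V_Ca ~ N(mu_ca, sigma_ca).

    The spread sigma_ca places individual neurons on different sides of the
    single-neuron bifurcations, so the uncoupled population mixes period 1,
    period 2 and chaotic firing, as in Fig. 2.
    """
    rng = np.random.default_rng(rng)
    return rng.normal(mu_ca, sigma_ca, size=n)

# v, c = simulate(modified_nw_network(), sample_vca())   # non-identical run
```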
Before learning, the neurons of the system have their own firing rhythms and orbits, such as period 2 firing, period 1 spiking and chaotic firing (Fig. 2(a)). After learning, the orbits of the different neurons are synchronized to the same period 2 orbit through a complex transition process. The simulation results show that the adaptive learning rule not only applies to an identical nervous system but is also valid for a non-identical one.
Here $\mu_{\rm Ca}$ is the mean of the parameter distribution of the system, representing the topological properties of the network, and the orbit transitions are observed with $\mu_{\rm Ca}$ as the control parameter. The simulation results show that the same period 2 orbit can transfer to different orbits in networks with different topological properties (Fig. 3(a)). The final synchronous orbit is determined by $\mu_{\rm Ca}$ (Fig. 3(b)); that is, the overall topological features of the neuron population determine its firing rhythm. This explains, from the perspective of dynamics, the emergence of synchronization rhythms observed in experiments.
In this study, an adaptive learning rule of synapses in a neural network is proposed, together with a theoretical proof of network synchronization for a general asymmetric network resembling real biological neural networks. This learning rule has considerable physiological significance: it coincides with physiological experimental results on synaptic learning and can be applied to a wide range of neural network topologies.
By this learning approach, the simulation results indicate that a neural network can be synchronized to various orbits. In a non-identical system, neurons with different firing rhythms can be synchronized to the same firing rhythm, while neurons with the same firing rhythm in networks of different topology can be synchronized to other rhythms. The discharge feature of the population is determined by the topological properties of the network.
The numerical simulation results show that the synchronous orbit of a non-identical neural network can converge to many different invariant sets. In a real biological network, there are time delays in the synaptic transmission between neurons, the ion channels are perturbed by all kinds of environmental noise, and no two neurons in the network are the same. In these respects the model needs further improvement; under such factors the dynamic system becomes more complicated and its dynamics harder to predict. This will be our further work.
References
[1] Dhamala M, Jirsa V and Ding M 2004 Phys. Rev. Lett. 92 074104
[2] Wang Q Y, Lu Q S, Chen G R and Guo D H 2006 Phys. Lett. A 356 17
[3] Yoshioka M 2005 Phys. Rev. E 71 061914
[4] Bazhenov M, Huerta R, Robinovich M L and Sejnowski T 1998 Physica D 116 392
[5] Belykh I, Lange E D and Hasler M 2005 Phys. Rev. Lett. 94 188101
[6] Allen I S, Mikhail I R, Henry D I A, Robert E, Attila S, Reynaldo D P, Ramón H and Pablo V 2000 J. Physiol. 94 357
[7] Regina M G 1997 J. Exp. Biol. 200 1421
[8] Zheng Y H and Lu Q S 2008 Physica A 387 3719
[9] Wang Q Y, Duan Z S, Perc M et al 2008 Europhys. Lett. 83 50008
[10] Perc M 2007 Phys. Rev. E 76 066203
[11] Fradkov A L, Andrievsky B and Evans R J 2008 IEEE Trans. Circuits I 55 1685
[12] Li C P, Sun W G and Kurths J 2007 Phys. Rev. E 76 046204
[13] Li R, Duan Z S and Chen G R 2008 J. Phys. A: Math. Theor. 41 385103
[14] Lasalle J P 1960 IRE Trans. Circuit Theor. CT-7 520
[15] Bi G Q and Poo M M 1998 J. Neurosci. 18 10464