
Neural Networks 23 (2010) 189–200
Contents lists available at ScienceDirect
Neural Networks
journal homepage: www.elsevier.com/locate/neunet
Coexistence and local stability of multiple equilibria in neural networks with
piecewise linear nondecreasing activation functions
Wang Lili, Lu Wenlian, Chen Tianping ∗
Shanghai Key Laboratory for Contemporary Applied Mathematics, School of Mathematical Sciences, Fudan University, Shanghai 200433, PR China
Key Laboratory of Nonlinear Science of Chinese Ministry of Education, School of Mathematical Sciences, Fudan University, Shanghai 200433, PR China
Article info
Article history:
Received 22 January 2009
Received in revised form 7 November 2009
Accepted 11 November 2009
Keywords:
Coexistence
Multiple equilibria
Multistability
Neural networks
Attraction basin
abstract
In this paper, we investigate neural networks with a class of nondecreasing piecewise linear activation functions with 2r corner points. It is shown that, under some conditions, the n-neuron dynamical system has exactly $(2r+1)^n$ equilibria, of which $(r+1)^n$ are locally exponentially stable and the others are unstable. Furthermore, the attraction basins of these stable equilibria are estimated. In the case $n = 2$, the precise attraction basin of each stable equilibrium point can be determined, and their boundaries are composed of the stable manifolds of the unstable equilibrium points. Simulations are also provided to illustrate the effectiveness of our results.
© 2009 Elsevier Ltd. All rights reserved.
1. Introduction
In past decades, neural networks have been extensively
studied due to their applications in image processing, pattern
recognition, associative memories and many other fields. Wilson
and Cowan (1972) studied the dynamics of spatially localized
neural populations, and introduced two functions E (t ) and I (t )
to characterize the states of excitatory neurons and inhibitory
neurons, respectively. In Wilson and Cowan (1973), the authors derived the following differential equations:
 ∂

µ hE (x, t )i = −hE (x, t )i + [1 − re hE (x, t )i]Se [αµ[%e hE (x, t )i



 ∂t
⊗βee (x) − %i hI (x, t )i ⊗ βie (x) ± hP (x, t )i]]
∂


µ hI (x, t )i = −hI (x, t )i + [1 − ri hI (x, t )i]Si [αµ[%e hE (x, t )i


 ∂t
⊗βei (x) − %i hI (x, t )i ⊗ βii (x) ± hQ (x, t )i]]
where $\langle E(x,t)\rangle$ and $\langle I(x,t)\rangle$ represent the time coarse-grained excitatory and inhibitory activities, respectively; $S_e[\cdot]$, $S_i[\cdot]$ are the expected proportions of excitatory and inhibitory neurons receiving at least threshold excitation per unit time; $\beta_{jj'}(x)$ stands for the probability that cells of class $j'$ are connected with cells of class $j$ a distance $x$ away; $\otimes$ denotes spatial convolution; and $P(x,t)$, $Q(x,t)$ are the afferent stimuli to the excitatory and inhibitory neurons,
∗
Corresponding author. Tel.: +86 21 55665013; fax: +86 21 65646073.
E-mail addresses: [email protected] (L. Wang), [email protected]
(W. Lu), [email protected] (T. Chen).
0893-6080/$ – see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.neunet.2009.11.010
respectively. The results obtained are closely related to biological systems and succeeded in providing qualitative descriptions of several neural processes. Grossberg (1973) introduced another class of recurrent on-center off-surround networks, which were shown to be capable of contrast-enhancing significant input information; sustaining this information in short-term memory; producing multistable equilibrium points that normalize, or adapt, the field's total activity; suppressing noise; and preventing saturation of the population response even to input patterns whose intensities are high (Ellias & Grossberg, 1975). In such an on-center off-surround anatomy, a given population excites itself (and possibly nearby populations) and inhibits populations that are farther away (and possibly itself and nearby populations as well). In Cohen and Grossberg (1983), the Cohen–Grossberg neural networks were proposed, which can be described by the following differential equations:
$$\frac{du_i}{dt} = a_i(u_i)\Big[b_i(u_i) - \sum_{j=1}^{n} c_{ij} h_j(u_j)\Big], \qquad i = 1, \dots, n.$$

In particular, letting $a_i(\cdot) \equiv 1$ and $b_i(x) = -d_i x$, the Cohen–Grossberg neural networks reduce to the following Hopfield neural networks:

$$\frac{du_i(t)}{dt} = -d_i u_i(t) + \sum_{j=1}^{n} w_{ij} f_j(u_j(t)) + I_i, \qquad i = 1, \dots, n, \tag{1}$$
where ui (t ) represents the state of the i-th unit at time t; di > 0
denotes the rate with which the i-th unit will reset its potential to
the resting state in isolation when disconnected from the network
and external inputs; wij corresponds to the connection weight of
the j-th unit on the i-th unit; $f_j(\cdot)$ is the activation function; and $I_i$ stands for the external input.

Fig. 1. The saturated activation function.
There have been a large number of works on the dynamics of neural networks in the literature. Note that many existing works focused mainly on the existence of a unique equilibrium and its stability; see Chen (2001), Chen and Amari (2001), and other papers. However, in practice it is desirable that the network have several equilibria, each of which represents an individual pattern. For example, in Cellular Neural Networks (CNNs) with the saturated activation function $f_j(x) = \frac{|x+1| - |x-1|}{2}$, $j = 1, \dots, n$ (see Fig. 1), a pattern or an associative memory is usually stored as a binary vector in $\{-1, 1\}^n$, and the process of pattern recognition or memory retrieval is that the system converges to a certain stable equilibrium with all components located in $(-\infty, -1)$ or $(1, +\infty)$. Also, in some neuromorphic analog circuits,
multistable dynamics even play an essential role, as revealed in
Douglas, Koch, Mahowald, Martin, and Suarez (1995), Hahnloser,
Sarpeshkar, Mahowald, Douglas, and Seung (2000), and Wersing,
Beyn, and Ritter (2001). Therefore, the study of coexistence and
stability of multiple equilibrium points, in particular, the attraction
basin, is of great interest in both theory and applications.
In an earlier paper (Chen & Amari, 2001), the authors pointed out that the 1-neuron neural network model $\frac{du(t)}{dt} = -u(t) + (1+\epsilon)g(u(t))$, where $\epsilon$ is a small positive number and $g(u) = \tanh(u)$, has three equilibrium points, of which two are locally stable and one is unstable. Recently, for n-neuron neural networks, many
results have been reported in the literature, see Ma and Wu (2007);
Remy, Ruet, and Thieffry (2008); Shayer and Campbell (2000);
Zeng and Wang (2006); Zhang and Tan (2004) and Zhang, Yi, and
Yu (2008). In Zeng and Wang (2006), by decomposing the phase space $R^n$ into $3^n$ subsets, the authors investigated the multiperiodicity of delayed cellular neural networks and showed that the n-neuron networks can have $2^n$ stable periodic orbits located in $2^n$ subsets of $R^n$. The multistability of Cohen–Grossberg neural networks with a general class of piecewise activation functions was also discussed in Cao, Feng, and Wang (2008). It was shown in Cao et al. (2008), Cheng, Lin, and Shih (2007), Zeng and Wang (2006), and other papers that under some conditions the n-neuron networks can have $2^n$ locally exponentially stable equilibrium points located in the $2^n$ saturation regions. But it is still unknown what happens in the remaining $3^n - 2^n$ subsets. In Cheng, Lin, and Shih (2006), the authors indicated that there can be $3^n$ equilibrium points for the n-neuron neural networks. However, they emphasized only the $2^n$ equilibrium points which are stable in a class of positively invariant subsets, and mentioned neither the stability nor the dynamical behaviors of solutions in the other $3^n - 2^n$ subsets.

Fig. 2. An example of activation function (2).

To the best of our knowledge, there are few papers addressing the dynamics in the remaining $3^n - 2^n$ subsets of $R^n$, or the attraction basins of all the stable equilibrium points.
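The 1-neuron example of Chen and Amari (2001) mentioned above is easy to check numerically. The following sketch (illustrative only; $\epsilon = 0.5$ is an assumed value) locates the three equilibria of $du/dt = -u + (1+\epsilon)\tanh(u)$ by bisection on the positive axis and symmetry:

```python
import math

def equilibria_one_neuron(eps, lo=1e-6, hi=10.0, tol=1e-12):
    # Equilibria of du/dt = -u + (1 + eps) * tanh(u) solve u = (1 + eps) tanh(u).
    # u = 0 is always a root; for eps > 0 there is a symmetric pair +/- u_star.
    def h(u):
        return -u + (1.0 + eps) * math.tanh(u)
    # Near 0, h > 0 (slope 1 + eps > 1); h(hi) < 0, so bisection finds u_star.
    a, b = lo, hi
    while b - a > tol:
        mid = 0.5 * (a + b)
        if h(mid) > 0:
            a = mid
        else:
            b = mid
    u_star = 0.5 * (a + b)
    return [-u_star, 0.0, u_star]

roots = equilibria_one_neuron(0.5)   # three equilibria, symmetric about 0
```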
In this paper, we investigate the neural networks (1) and deal
with these issues. To be more general, we present a class of
nondecreasing piecewise linear activation functions with 2r corner
points, which can be described by:
$$f_j(x)=\begin{cases}
m_j^1, & -\infty < x < p_j^1,\\[3pt]
\dfrac{m_j^2 - m_j^1}{q_j^1 - p_j^1}\,(x - p_j^1) + m_j^1, & p_j^1 \le x \le q_j^1,\\[3pt]
m_j^2, & q_j^1 < x < p_j^2,\\[3pt]
\dfrac{m_j^3 - m_j^2}{q_j^2 - p_j^2}\,(x - p_j^2) + m_j^2, & p_j^2 \le x \le q_j^2,\\[3pt]
m_j^3, & q_j^2 < x < p_j^3,\\[3pt]
\cdots & \cdots\\[3pt]
\dfrac{m_j^{r+1} - m_j^r}{q_j^r - p_j^r}\,(x - p_j^r) + m_j^r, & p_j^r \le x \le q_j^r,\\[3pt]
m_j^{r+1}, & q_j^r < x < +\infty,
\end{cases} \tag{2}$$

where $r \ge 1$, $\{m_j^k\}_{k=1}^{r+1}$ is an increasing sequence of constants, and $p_j^k, q_j^k$, $k = 1, 2, \dots, r$, are constants with $-\infty < p_j^1 < q_j^1 < p_j^2 < q_j^2 < \cdots < p_j^r < q_j^r < +\infty$, $j = 1, 2, \dots, n$.
Neural networks with activation function (2) can store many more patterns or associative memories than those with the saturated function, which is meaningful in applications (see Fig. 2).
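For reference, an activation of the form (2) can be implemented directly from its plateau levels $m_j^1 < \cdots < m_j^{r+1}$ and corner abscissae $p_j^k, q_j^k$; the numerical values in this sketch are hypothetical:

```python
def make_activation(m, p, q):
    # Build the nondecreasing piecewise linear activation (2):
    # m = [m^1, ..., m^{r+1}]: increasing plateau levels,
    # p, q: corner abscissae with p[0] < q[0] < p[1] < q[1] < ... (2r corners).
    r = len(p)
    assert len(q) == r and len(m) == r + 1

    def f(x):
        for k in range(r):
            if x < p[k]:
                return m[k]                       # plateau before the k-th ramp
            if x <= q[k]:
                # Linear ramp from (p[k], m[k]) to (q[k], m[k+1]).
                t = (x - p[k]) / (q[k] - p[k])
                return m[k] + t * (m[k + 1] - m[k])
        return m[r]                               # top plateau for x > q[r-1]
    return f

# Example with r = 2 (four corner points, three plateau levels).
f = make_activation(m=[-1.0, 0.0, 1.0], p=[-2.0, 1.0], q=[-1.0, 2.0])
```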
In the following, we precisely determine all equilibria of system (1) and investigate the stability and attraction basin of each equilibrium. Discussions and simulations are also provided to illustrate and verify the theoretical results.
We begin with the multistability for r = 1.
2. Case I: r = 1
In this case, the activation function $f_j$ reduces to

$$f_j(x)=\begin{cases}
m_j, & -\infty < x < p_j,\\[3pt]
\dfrac{M_j - m_j}{q_j - p_j}\,(x - p_j) + m_j, & p_j \le x \le q_j,\\[3pt]
M_j, & q_j < x < +\infty,
\end{cases} \tag{3}$$
where $m_j, M_j, p_j, q_j$ are constants with $m_j < M_j$, $p_j < q_j$, $j = 1, 2, \dots, n$. We first investigate 2-neuron neural networks; n-neuron neural networks can be dealt with similarly.

2.1. Neural networks with 2 neurons

In this case, the network (1) reduces to

$$\begin{cases}
\dfrac{du_1(t)}{dt} = -d_1 u_1(t) + w_{11} f_1(u_1(t)) + w_{12} f_2(u_2(t)) + I_1,\\[4pt]
\dfrac{du_2(t)}{dt} = -d_2 u_2(t) + w_{21} f_1(u_1(t)) + w_{22} f_2(u_2(t)) + I_2.
\end{cases} \tag{4}$$

Theorem 1. Suppose that

$$\begin{cases}
-d_i p_i + w_{ii} m_i + \max\{w_{ij} m_j, w_{ij} M_j\} + I_i < 0,\\
-d_i q_i + w_{ii} M_i + \min\{w_{ij} m_j, w_{ij} M_j\} + I_i > 0,
\end{cases} \tag{5}$$

for $i, j = 1, 2$, $i \ne j$. Then system (4) has 9 equilibrium points, of which 4 are locally exponentially stable and the others are unstable.

Proof. Obviously, $R^2$ can be divided into 9 subsets (see Fig. 3):

$$\begin{aligned}
&S_1 = (-\infty, p_1) \times (q_2, \infty), \quad && S_2 = (q_1, \infty) \times (q_2, \infty),\\
&S_3 = (-\infty, p_1) \times (-\infty, p_2), \quad && S_4 = (q_1, \infty) \times (-\infty, p_2),\\
&\Xi_1 = [p_1, q_1] \times (q_2, \infty), \quad && \Xi_2 = (-\infty, p_1) \times [p_2, q_2],\\
&\Xi_3 = (q_1, \infty) \times [p_2, q_2], \quad && \Xi_4 = [p_1, q_1] \times (-\infty, p_2),\\
&\Lambda = [p_1, q_1] \times [p_2, q_2].
\end{aligned}$$

Fig. 3. The phase plane diagram of subsets of $R^2$ with r = 1.

We discuss each subset separately.

(i) Subset $S_1 = (-\infty, p_1) \times (q_2, \infty)$

In this subset, any equilibrium is a root of the following equations:

$$\begin{cases}
-d_1 u_1 + w_{11} m_1 + w_{12} M_2 + I_1 = 0,\\
-d_2 u_2 + w_{21} m_1 + w_{22} M_2 + I_2 = 0.
\end{cases} \tag{6}$$

By (5), we know that

$$-d_1 p_1 + w_{11} m_1 + w_{12} M_2 + I_1 < 0, \qquad \lim_{x\to-\infty}(-d_1 x + w_{11} m_1 + w_{12} M_2 + I_1) = +\infty, \tag{7}$$

$$-d_2 q_2 + w_{21} m_1 + w_{22} M_2 + I_2 > 0, \qquad \lim_{x\to+\infty}(-d_2 x + w_{21} m_1 + w_{22} M_2 + I_2) = -\infty. \tag{8}$$

Therefore, (6) has a unique solution $u^* = (u_1^*, u_2^*) \in S_1$, which is also the unique equilibrium point of (4) in $S_1$.

Let $u(t)$ be a solution of dynamical system (4) with initial state $(u_1(0), u_2(0)) \in S_1$. We claim that $u(t)$ stays in $S_1$ for all $t > 0$. In fact, from conditions (5), we can find a small constant $\epsilon > 0$ such that

$$-d_1(p_1 - \epsilon) + w_{11} m_1 + w_{12} M_2 + I_1 < 0, \qquad -d_2(q_2 + \epsilon) + w_{21} m_1 + w_{22} M_2 + I_2 > 0.$$

If for some $t_1 > 0$, $(u_1(t_1), u_2(t_1)) \in S_1$ is such that $u_1(t_1) \ge p_1 - \epsilon$ (or $u_2(t_1) \le q_2 + \epsilon$), then $\frac{du_1(t)}{dt}\big|_{t=t_1} < 0$ (or $\frac{du_2(t)}{dt}\big|_{t=t_1} > 0$), which implies that the solution $u(t) \in S_1$ for all $t > 0$.

Define $x_i(t) = u_i(t) - u_i^*$, $i = 1, 2$. Then we have

$$\begin{cases}
\dfrac{dx_1(t)}{dt} = -d_1 x_1(t),\\[4pt]
\dfrac{dx_2(t)}{dt} = -d_2 x_2(t).
\end{cases} \tag{9}$$

Therefore, $u^*$ is locally stable. Similarly, we conclude that in each of the subsets $S_2$, $S_3$, $S_4$ there is a unique equilibrium point, which is locally exponentially stable.

(ii) Subset $\Lambda = [p_1, q_1] \times [p_2, q_2]$

Every equilibrium in subset $\Lambda$ is a root of the following equations:

$$\begin{cases}
-d_1 u_1 + w_{11} l_1 u_1 + w_{12} l_2 u_2 + \hat{I}_1 = 0,\\
-d_2 u_2 + w_{21} l_1 u_1 + w_{22} l_2 u_2 + \hat{I}_2 = 0,
\end{cases} \tag{10}$$

where

$$l_j = \frac{M_j - m_j}{q_j - p_j}, \qquad c_j = \frac{m_j q_j - M_j p_j}{q_j - p_j}, \qquad i, j = 1, 2, \tag{11}$$

$$\hat{I}_1 = w_{11} c_1 + w_{12} c_2 + I_1, \qquad \hat{I}_2 = w_{21} c_1 + w_{22} c_2 + I_2. \tag{12}$$

In this case, conditions (5) become

$$\begin{cases}
(-d_i + w_{ii} l_i) p_i + l_j \max\{w_{ij} p_j, w_{ij} q_j\} + \hat{I}_i < 0,\\
(-d_i + w_{ii} l_i) q_i + l_j \min\{w_{ij} p_j, w_{ij} q_j\} + \hat{I}_i > 0,
\end{cases} \tag{13}$$

for $i, j = 1, 2$, $i \ne j$.

If $w_{21} = 0$, by solving Eq. (10) we get $u_2^{**} = -\frac{\hat{I}_2}{-d_2 + w_{22} l_2} \in (p_2, q_2)$ and $u_1^{**} = \frac{w_{12} l_2 \hat{I}_2 - (-d_2 + w_{22} l_2)\hat{I}_1}{(-d_1 + w_{11} l_1)(-d_2 + w_{22} l_2)} \in (p_1, q_1)$. Hence, $(u_1^{**}, u_2^{**})$ is the unique root of Eq. (10) in $\Lambda$.

Otherwise, assume $w_{21} > 0$. Consider the following two straight lines deduced from Eq. (10):

$$y_1(u_2) = -\frac{w_{12} l_2}{-d_1 + w_{11} l_1} u_2 - \frac{\hat{I}_1}{-d_1 + w_{11} l_1}, \tag{14}$$

$$y_2(u_2) = -\frac{-d_2 + w_{22} l_2}{w_{21} l_1} u_2 - \frac{\hat{I}_2}{w_{21} l_1}. \tag{15}$$

By (13), we have

$$(-d_1 + w_{11} l_1) q_1 + w_{12} l_2 p_2 + \hat{I}_1 > 0, \qquad (-d_2 + w_{22} l_2) p_2 + w_{21} l_1 q_1 + \hat{I}_2 < 0. \tag{16}$$

Eliminating $q_1$, we have

$$-\frac{w_{12} l_2}{-d_1 + w_{11} l_1}\, p_2 - \frac{\hat{I}_1}{-d_1 + w_{11} l_1} < -\frac{-d_2 + w_{22} l_2}{w_{21} l_1}\, p_2 - \frac{\hat{I}_2}{w_{21} l_1},$$

which means $y_1(p_2) < y_2(p_2)$.
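Since $f_j$ is affine on each of the 9 subsets, the equilibria of a concrete instance of (4) can be enumerated by solving one linear system per subset and keeping only the solutions that actually lie in their own subset. The sketch below (not from the paper) uses hypothetical parameters that satisfy conditions (5):

```python
import itertools
import numpy as np

def equilibria_2neuron(W, d, I, m, M, p, q):
    # Enumerate all equilibria of system (4) region by region: in each of the
    # 9 subsets, f_j is affine (f_j(u) = l_j u + c_j on the ramp, constant on
    # the saturation pieces), so the equilibrium equations are linear.
    l = (M - m) / (q - p)
    c = (m * q - M * p) / (q - p)
    found = []
    # Region code per coordinate: 0 -> (-inf, p), 1 -> [p, q], 2 -> (q, inf).
    for code in itertools.product((0, 1, 2), repeat=2):
        A = -np.diag(d).astype(float)
        b = I.astype(float).copy()
        for j, cj in enumerate(code):
            if cj == 1:
                A[:, j] += W[:, j] * l[j]   # ramp: w_ij * l_j enters the matrix
                b += W[:, j] * c[j]
            else:
                b += W[:, j] * (m[j] if cj == 0 else M[j])  # saturated piece
        u = np.linalg.solve(A, -b)
        ok = all((u[j] < p[j]) if cj == 0 else
                 (p[j] <= u[j] <= q[j]) if cj == 1 else
                 (u[j] > q[j])
                 for j, cj in enumerate(code))
        if ok:
            found.append(u)
    return found

W = np.array([[3.0, 0.2], [0.2, 3.0]])
d = np.ones(2); I = np.zeros(2)
m = np.array([-1.0, -1.0]); M = np.array([1.0, 1.0])
p = np.array([-1.0, -1.0]); q = np.array([1.0, 1.0])
eqs = equilibria_2neuron(W, d, I, m, M, p, q)   # expect 9 equilibria
```

With these values, conditions (5) read $1 - 3 + 0.2 = -1.8 < 0$ and $-1 + 3 - 0.2 = 1.8 > 0$, so Theorem 1 predicts exactly 9 equilibria, which the enumeration confirms.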
By similar arguments, we have $y_1(q_2) > y_2(q_2)$, too. Therefore, there must be a unique $u_2^{**} \in (p_2, q_2)$ such that $y_1(u_2^{**}) = y_2(u_2^{**})$. Denote $u_1^{**} = y_1(u_2^{**})$. Then $(u_1^{**}, u_2^{**})$ is the unique solution of (10), which is the unique equilibrium of (4) in $\Lambda$.

Let $u(t)$ be a solution of dynamical system (4) with initial state $(u_1(0), u_2(0))$ near $u^{**}$, and $x_i(t) = u_i(t) - u_i^{**}$, $i = 1, 2$. Then we have

$$\begin{cases}
\dfrac{dx_1(t)}{dt} = (-d_1 + w_{11} l_1) x_1(t) + w_{12} l_2 x_2(t),\\[4pt]
\dfrac{dx_2(t)}{dt} = (-d_2 + w_{22} l_2) x_2(t) + w_{21} l_1 x_1(t).
\end{cases} \tag{17}$$

Denote

$$A = \begin{pmatrix} -d_1 + w_{11} l_1 & w_{12} l_2 \\ w_{21} l_1 & -d_2 + w_{22} l_2 \end{pmatrix}. \tag{18}$$

Because the trace $\mathrm{trace}(A) = (-d_1 + w_{11} l_1) + (-d_2 + w_{22} l_2) > 0$, we have $\mathrm{Re}(\lambda) > 0$ for at least one eigenvalue $\lambda$ of $A$. Thus, $u^{**}$ is an unstable equilibrium.

(iii) Subset $\Xi_1 = [p_1, q_1] \times (q_2, \infty)$

In this case, any equilibrium is a root of the following equations:

$$\begin{cases}
-d_1 u_1 + w_{11} l_1 u_1 + w_{12} l_2 q_2 + \hat{I}_1 = 0,\\
-d_2 u_2 + w_{21} l_1 u_1 + w_{22} l_2 q_2 + \hat{I}_2 = 0,
\end{cases} \tag{19}$$

where $\hat{I}_1, \hat{I}_2$ are defined in (12).

Using conditions (5) and a similar method, we can prove that $(u_1^{***}, u_2^{***}) \in \Xi_1$ is the unique solution of (19) and that it is unstable. Similar discussions apply to the subsets $\Xi_2$, $\Xi_3$, $\Xi_4$.

In summary, there are $3^2$ equilibria of system (4) under conditions (5), and $2^2$ of them are locally exponentially stable while the others are unstable. □
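The instability criterion used above is easy to check numerically: whenever $w_{ii} l_i > d_i$, the matrix $A$ of (18) has positive trace, so at least one eigenvalue has positive real part. A small sketch with hypothetical parameter values (the same saturated example, where $l_1 = l_2 = 1$):

```python
import numpy as np

# Linearization (17) about the interior equilibrium in Lambda uses matrix A
# of (18). Hypothetical values: d_i = 1, w_ii = 3, w_ij = 0.2, l_i = 1.
d1 = d2 = 1.0
w11 = w22 = 3.0
w12 = w21 = 0.2
l1 = l2 = 1.0
A = np.array([[-d1 + w11 * l1, w12 * l2],
              [w21 * l1, -d2 + w22 * l2]])
trace_A = np.trace(A)          # positive => some eigenvalue has Re > 0
eigs = np.linalg.eigvals(A)    # here both eigenvalues are positive
```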
2.2. Neural networks with n neurons

Let

$$(-\infty, p_k) = (-\infty, p_k)^1 \times [p_k, q_k]^0 \times (q_k, +\infty)^0,$$
$$[p_k, q_k] = (-\infty, p_k)^0 \times [p_k, q_k]^1 \times (q_k, +\infty)^0,$$
$$(q_k, +\infty) = (-\infty, p_k)^0 \times [p_k, q_k]^0 \times (q_k, +\infty)^1.$$

Then the phase space $R^n$ is divided into $3^n$ subsets (see Fig. 4):

$$\Big\{\prod_{k=1}^{n} (-\infty, p_k)^{\delta_1^k} \times [p_k, q_k]^{\delta_2^k} \times (q_k, +\infty)^{\delta_3^k} : (\delta_1^k, \delta_2^k, \delta_3^k) = (1,0,0) \text{ or } (0,1,0) \text{ or } (0,0,1),\ k = 1, \dots, n\Big\}.$$

Furthermore, denote

$$\Phi_1 = \bigcup \Big\{\prod_{k=1}^{n} (-\infty, p_k)^{\eta_k} \times [p_k, q_k]^0 \times (q_k, +\infty)^{1-\eta_k} : \eta_k \in \{0, 1\},\ k = 1, \dots, n\Big\},$$
$$\Phi_2 = \prod_{k=1}^{n} [p_k, q_k], \qquad \Phi_3 = R^n - \Phi_1 - \Phi_2.$$

Fig. 4. The phase space diagram of subsets of $R^3$ with r = 1.

Now, we will prove the following theorem:

Theorem 2. Suppose that

$$\begin{cases}
-d_i p_i + w_{ii} m_i + \sum\limits_{j=1, j\ne i}^{n} \max\{w_{ij} m_j, w_{ij} M_j\} + I_i < 0,\\[4pt]
-d_i q_i + w_{ii} M_i + \sum\limits_{j=1, j\ne i}^{n} \min\{w_{ij} m_j, w_{ij} M_j\} + I_i > 0,
\end{cases} \tag{20}$$

for $i, j = 1, 2, \dots, n$. Then the dynamical system (1) has $3^n$ equilibrium points in all, $2^n$ of which are locally exponentially stable while the others are unstable.

Proof. Similar to the proof of Theorem 1, we discuss the dynamics on each subset, respectively.

(i) Subset $\Phi_1$

Pick

$$\tilde{\Omega}_1 = \prod_{k\in N_1} (-\infty, p_k) \times \prod_{k\in N_2} (q_k, +\infty) \subset \Phi_1,$$

where $N_1, N_2$ are subsets of $\{1, 2, \dots, n\}$ with $N_1 \cup N_2 = \{1, 2, \dots, n\}$ and $N_1 \cap N_2 = \emptyset$. Denote

$$F_i(\xi) = -d_i \xi + \sum_{j\in N_1} w_{ij} m_j + \sum_{j\in N_2} w_{ij} M_j + I_i, \qquad i = 1, \dots, n.$$

Then, from conditions (20), it is easy to see that

$$F_i(-\infty) = +\infty,\quad F_i(p_i) < 0,\quad \text{for } i \in N_1; \qquad F_i(q_i) > 0,\quad F_i(+\infty) = -\infty,\quad \text{for } i \in N_2.$$

Therefore, the equation set $F_i(\xi) = 0$, $i = 1, \dots, n$, has a unique root, which is the very equilibrium of system (1) in $\tilde{\Omega}_1$. Denote it by $v^* = (v_1^*, v_2^*, \dots, v_n^*)$.

Using the same method as in the proof of case (i) of Theorem 1, we can prove that $\tilde{\Omega}_1$ is invariant with respect to solutions with initial state in $\tilde{\Omega}_1$. That is, if $u(0) \in \tilde{\Omega}_1$, then the solution $u(t)$ $(t \ge 0)$ of the dynamical system (1) stays in $\tilde{\Omega}_1$. Letting $x(t) = u(t) - v^*$, we have

$$\frac{dx_i(t)}{dt} = -d_i x_i(t), \qquad i = 1, \dots, n, \tag{21}$$

which implies that $v^*$ in $\tilde{\Omega}_1$ is exponentially stable.

Because $\tilde{\Omega}_1 \subset \Phi_1$ is chosen arbitrarily, we conclude that in each of the subsets of $\Phi_1$ there is a unique locally exponentially stable equilibrium. Therefore, the system (1) has $2^n$ locally stable equilibrium points.

(ii) Subset $\Phi_2$

Denote $\tilde{\Omega}_2 = \Phi_2 = \prod_{k=1}^{n} [p_k, q_k]$. Consider the following equations:

$$-d_i u_i + w_{ii} l_i u_i + \sum_{j\ne i} w_{ij} l_j u_j + \sum_{j=1}^{n} w_{ij} c_j + I_i = 0, \tag{22}$$

where

$$l_j = \frac{M_j - m_j}{q_j - p_j}, \qquad c_j = \frac{m_j q_j - M_j p_j}{q_j - p_j}, \qquad i, j = 1, \dots, n. \tag{23}$$
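Theorem 2 can be illustrated for small n by the same region-by-region enumeration as in the 2-neuron case, classifying each equilibrium via the eigenvalues of the region-wise constant Jacobian. The sketch below (not from the paper) uses hypothetical $n = 3$ parameters satisfying conditions (20) and finds $3^3 = 27$ equilibria, $2^3 = 8$ of them stable:

```python
import itertools
import numpy as np

def count_equilibria(W, d, I, m, M, p, q):
    # Region-by-region enumeration of the equilibria of system (1) with the
    # saturated-type activation (3); local stability is read off from the
    # eigenvalues of the (region-wise constant) Jacobian matrix A.
    n = len(d)
    l = (M - m) / (q - p)
    c = (m * q - M * p) / (q - p)
    total = stable = 0
    # Region code per coordinate: 0 -> (-inf, p), 1 -> [p, q], 2 -> (q, inf).
    for code in itertools.product((0, 1, 2), repeat=n):
        A = -np.diag(d).astype(float)
        b = I.astype(float).copy()
        for j, cj in enumerate(code):
            if cj == 1:
                A[:, j] += W[:, j] * l[j]
                b += W[:, j] * c[j]
            else:
                b += W[:, j] * (m[j] if cj == 0 else M[j])
        u = np.linalg.solve(A, -b)
        inside = all((u[j] < p[j], p[j] <= u[j] <= q[j], u[j] > q[j])[cj]
                     for j, cj in enumerate(code))
        if inside:
            total += 1
            if np.real(np.linalg.eigvals(A)).max() < 0:
                stable += 1
    return total, stable

n = 3
W = 3.0 * np.eye(n) + 0.1 * (np.ones((n, n)) - np.eye(n))
d = np.ones(n); I = np.zeros(n)
m = -np.ones(n); M = np.ones(n)
p = -np.ones(n); q = np.ones(n)
total, stable = count_equilibria(W, d, I, m, M, p, q)  # expect (27, 8)
```

Here conditions (20) read $1 - 3 + 0.2 = -1.8 < 0$ and $-1 + 3 - 0.2 = 1.8 > 0$ for every $i$, so the counts match Theorem 2.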
For any $u = (u_1, u_2, \dots, u_n) \in \tilde{\Omega}_2$, fix $u_2, \dots, u_n$; we can find a unique $\bar{u}_1 \in (p_1, q_1)$ such that

$$-d_1 \bar{u}_1 + w_{11} l_1 \bar{u}_1 + \sum_{j\ne 1} w_{1j} l_j u_j + \sum_{j=1}^{n} w_{1j} c_j + I_1 = 0,$$

since, compared with (20),

$$\begin{cases}
-d_1 p_1 + w_{11} l_1 p_1 + \sum\limits_{j\ne 1} w_{1j} l_j u_j + \sum\limits_{j=1}^{n} w_{1j} c_j + I_1 < 0,\\[4pt]
-d_1 q_1 + w_{11} l_1 q_1 + \sum\limits_{j\ne 1} w_{1j} l_j u_j + \sum\limits_{j=1}^{n} w_{1j} c_j + I_1 > 0.
\end{cases}$$

Similarly, fixing $u_1, \dots, u_{i-1}, u_{i+1}, \dots, u_n$ $(i = 2, \dots, n)$, we can find a unique $\bar{u}_i \in (p_i, q_i)$ such that

$$-d_i \bar{u}_i + w_{ii} l_i \bar{u}_i + \sum_{j\ne i} w_{ij} l_j u_j + \sum_{j=1}^{n} w_{ij} c_j + I_i = 0,$$

since, compared with (20),

$$\begin{cases}
-d_i p_i + w_{ii} l_i p_i + \sum\limits_{j\ne i} w_{ij} l_j u_j + \sum\limits_{j=1}^{n} w_{ij} c_j + I_i < 0,\\[4pt]
-d_i q_i + w_{ii} l_i q_i + \sum\limits_{j\ne i} w_{ij} l_j u_j + \sum\limits_{j=1}^{n} w_{ij} c_j + I_i > 0.
\end{cases}$$

Define a map

$$T : \tilde{\Omega}_2 \to \tilde{\Omega}_2, \qquad (u_1, u_2, \dots, u_n) \mapsto (\bar{u}_1, \bar{u}_2, \dots, \bar{u}_n).$$

It is clearly a continuous map. By Brouwer's fixed point theorem, there exists at least one $v^{**} \in \tilde{\Omega}_2$ such that $T v^{**} = v^{**}$, which is also an equilibrium point of (1) in $\tilde{\Omega}_2$.

Now, we claim that the coefficient matrix of (22),

$$\tilde{A} = \begin{pmatrix}
-d_1 + w_{11} l_1 & w_{12} l_2 & \cdots & w_{1n} l_n\\
w_{21} l_1 & -d_2 + w_{22} l_2 & \cdots & w_{2n} l_n\\
\cdots & \cdots & \cdots & \cdots\\
w_{n1} l_1 & w_{n2} l_2 & \cdots & -d_n + w_{nn} l_n
\end{pmatrix}, \tag{24}$$

is invertible.

Otherwise, assume that $\tilde{A}$ is not invertible. Denote by $\alpha_i$ its i-th row vector. Then there must exist nonzero constants $k_1, \dots, k_m$ $(m \le n-1)$ and indices $i_1, \dots, i_m, i_{m+1} \in \{1, 2, \dots, n\}$ such that

$$\alpha_{i_{m+1}} = k_1 \alpha_{i_1} + \cdots + k_m \alpha_{i_m}. \tag{25}$$

Without loss of generality, let $k_1, \dots, k_s > 0$, while $k_{s+1}, \dots, k_m < 0$. Denote by $\Theta$ the index set except $\{i_1, \dots, i_m, i_{m+1}\}$.

From conditions (20), it follows that

$$\begin{cases}
(-d_k + w_{kk} l_k) p_k + \sum\limits_{j=1, i_j\ne k}^{s} w_{k i_j} l_{i_j} p_{i_j} + \sum\limits_{j=s+1}^{m+1} w_{k i_j} l_{i_j} q_{i_j} + J_k < 0, & k = i_1, \dots, i_s,\\[6pt]
(-d_k + w_{kk} l_k) q_k + \sum\limits_{j=1}^{s} w_{k i_j} l_{i_j} p_{i_j} + \sum\limits_{j=s+1, i_j\ne k}^{m+1} w_{k i_j} l_{i_j} q_{i_j} + J_k > 0, & k = i_{s+1}, \dots, i_m,
\end{cases} \tag{26}$$

where

$$J_k = \sum_{h\in\Theta} w_{kh} l_h p_h + \sum_{h=1}^{n} w_{kh} c_h + I_k, \qquad k = i_1, \dots, i_m.$$

Multiplying each inequality by the corresponding $k_j$ and adding them together, we get

$$\sum_{j=1}^{s}\Big[-k_j d_{i_j} + \sum_{h=1}^{m} k_h w_{i_h i_j} l_{i_j}\Big] p_{i_j} + \sum_{j=s+1}^{m}\Big[-k_j d_{i_j} + \sum_{h=1}^{m} k_h w_{i_h i_j} l_{i_j}\Big] q_{i_j} + \sum_{j=1}^{m} k_j w_{i_j i_{m+1}} l_{i_{m+1}} q_{i_{m+1}} + \sum_{j=1}^{m} J_{i_j} k_j < 0. \tag{27}$$

By (25), this is equivalent to

$$(-d_{i_{m+1}} + w_{i_{m+1} i_{m+1}} l_{i_{m+1}})\, q_{i_{m+1}} + \sum_{j=1}^{s} w_{i_{m+1} i_j} l_{i_j} p_{i_j} + \sum_{j=s+1}^{m} w_{i_{m+1} i_j} l_{i_j} q_{i_j} + \sum_{j=1}^{m} J_{i_j} k_j < 0. \tag{28}$$

On the other hand, from conditions (20), it also follows that

$$\begin{cases}
(-d_k + w_{kk} l_k) q_k + \sum\limits_{j=1, i_j\ne k}^{s} w_{k i_j} l_{i_j} q_{i_j} + \sum\limits_{j=s+1}^{m+1} w_{k i_j} l_{i_j} p_{i_j} + J_k > 0, & k = i_1, \dots, i_s,\\[6pt]
(-d_k + w_{kk} l_k) p_k + \sum\limits_{j=1}^{s} w_{k i_j} l_{i_j} q_{i_j} + \sum\limits_{j=s+1, i_j\ne k}^{m+1} w_{k i_j} l_{i_j} p_{i_j} + J_k < 0, & k = i_{s+1}, \dots, i_m.
\end{cases} \tag{29}$$

Multiplying each inequality by the corresponding $k_j$ and adding them together, by (25) we get

$$(-d_{i_{m+1}} + w_{i_{m+1} i_{m+1}} l_{i_{m+1}})\, p_{i_{m+1}} + \sum_{j=1}^{s} w_{i_{m+1} i_j} l_{i_j} q_{i_j} + \sum_{j=s+1}^{m} w_{i_{m+1} i_j} l_{i_j} p_{i_j} + \sum_{j=1}^{m} J_{i_j} k_j > 0. \tag{30}$$

Compared with (20), (30) yields

$$\sum_{j=1}^{m} J_{i_j} k_j > \sum_{h\in\Theta} w_{i_{m+1} h} l_h p_h + \sum_{h=1}^{n} w_{i_{m+1} h} c_h + I_{i_{m+1}}, \tag{31}$$

while comparing (28) with (20) in the same way gives the reverse strict inequality, which is a contradiction to (28). Therefore, $\tilde{A}$ is invertible. It means that the equilibrium $v^{**}$ of (1) in $\tilde{\Omega}_2$ is unique.

Furthermore, let $u(t)$ be a solution of system (1) with initial state near $v^{**}$. Denote $x_i(t) = u_i(t) - v_i^{**}$, $i = 1, \dots, n$. We have

$$\frac{dx_i(t)}{dt} = (-d_i + w_{ii} l_i) x_i(t) + \sum_{j\ne i} w_{ij} l_j x_j(t), \qquad i = 1, \dots, n. \tag{32}$$

Because the trace

$$\mathrm{trace}(\tilde{A}) = \sum_{i=1}^{n} (-d_i + w_{ii} l_i) > 0,$$
we conclude that there must be an eigenvalue $\lambda$ of $\tilde{A}$ such that $\mathrm{Re}(\lambda) > 0$. Thus, the system (32) is unstable, and so is the equilibrium $v^{**}$.
(iii) Subset $\Phi_3$

Pick

$$\tilde{\Omega}_3 = \prod_{k\in N_1} (-\infty, p_k) \times \prod_{k\in N_2} (q_k, +\infty) \times \prod_{k\in N_3} [p_k, q_k] \subset \Phi_3,$$

where $N_1 \cup N_2 \cup N_3 = \{1, 2, \dots, n\}$ and $N_i \cap N_j = \emptyset$ for $i \ne j$. Consider the following equations:

$$\begin{cases}
-d_i u_i + \sum\limits_{j\in N_1} w_{ij} m_j + \sum\limits_{j\in N_2} w_{ij} M_j + \sum\limits_{j\in N_3} w_{ij} l_j u_j + \sum\limits_{j\in N_3} w_{ij} c_j + I_i = 0, & i \in N_1 \cup N_2,\\[6pt]
(-d_i + w_{ii} l_i) u_i + \sum\limits_{j\in N_1} w_{ij} m_j + \sum\limits_{j\in N_2} w_{ij} M_j + \sum\limits_{j\in N_3, j\ne i} w_{ij} l_j u_j + \sum\limits_{j\in N_3} w_{ij} c_j + I_i = 0, & i \in N_3.
\end{cases} \tag{33}$$

Similar to the proof of case (ii), we define a map

$$\tilde{T} : \prod_{j\in N_3} [p_j, q_j] \to \prod_{j\in N_3} [p_j, q_j], \qquad (u_{i_1}, u_{i_2}, \dots, u_{i_s}) \mapsto (\bar{u}_{i_1}, \bar{u}_{i_2}, \dots, \bar{u}_{i_s}), \tag{34}$$

in which $N_3 = \{i_1, \dots, i_s\}$, and $\bar{u}_{i_k}$, $k = 1, \dots, s$, are defined in the same way as $\bar{u}_i$ in case (ii).

Since the map $\tilde{T}$ is continuous, it follows from Brouwer's fixed point theorem that there exists $(v_{i_1}^{***}, v_{i_2}^{***}, \dots, v_{i_s}^{***}) \in \prod_{j\in N_3} [p_j, q_j]$ such that $\tilde{T}(v_{i_1}^{***}, \dots, v_{i_s}^{***}) = (v_{i_1}^{***}, \dots, v_{i_s}^{***})$.

By arguments similar to those used in the proof of case (ii), the coefficient matrix of (34) is invertible, which implies that $(v_{i_1}^{***}, \dots, v_{i_s}^{***})$ is unique. Then, replacing $u_j$, $j \in N_3$, by $v_j^{***}$ and solving (33), we obtain $v_i^{***}$ for $i \in N_1 \cup N_2$. Denote $v^{***} = (v_1^{***}, \dots, v_n^{***})$. It is just the equilibrium in $\tilde{\Omega}_3$. By the same proof as in case (ii), we conclude that it is unstable.

Because $\tilde{\Omega}_3 \subset \Phi_3$ is chosen arbitrarily, we conclude that in each subset of $\Phi_3$ there is an equilibrium point, which is unstable.

In summary, system (1) has in total $3^n$ equilibrium points, $2^n$ of which are locally exponentially stable while the others are unstable. □

3. Case II: r ≥ 1

Inspired by the discussions above, in this section we discuss the dynamical system (1) with activation $f_j$ described by (2).

Theorem 3. Suppose that

$$\begin{cases}
-d_i p_i^k + w_{ii} m_i^k + \sum\limits_{j\ne i} \max\{w_{ij} m_j^1, w_{ij} m_j^{r+1}\} + I_i < 0,\\[4pt]
-d_i q_i^k + w_{ii} m_i^{k+1} + \sum\limits_{j\ne i} \min\{w_{ij} m_j^1, w_{ij} m_j^{r+1}\} + I_i > 0,
\end{cases} \tag{35}$$

for $i, j = 1, 2, \dots, n$, $k = 1, 2, \dots, r$. Then the dynamical system (1) has $(2r+1)^n$ equilibrium points. Among them, $(r+1)^n$ are locally stable and the others are unstable.

Proof. Obviously, $R$ can be divided into $(2r+1)$ subsets, so that $R^n$ can be divided into $(2r+1)^n$ subsets. For example, when $r = 2$, $R^2$ can be divided into 25 subsets (see Fig. 5). In this case, we can prove that there are 25 equilibrium points. Among them, the 9 equilibrium points located in the subsets $S_i$, $i = 1, \dots, 9$, are locally stable; the others are unstable. The detailed proof is omitted.

Fig. 5. The phase plane diagram of subsets of $R^2$ with r = 2.

4. Attraction basins of equilibria

In this section, we investigate the attraction basins of equilibria for the system (1) with activation functions (2).

We begin with the case $n = 2$ and $r = 1$. Under conditions (5), Theorem 1 tells us that there are 4 locally stable equilibrium points $u^{\Xi}$-wise: $u^{S_1}, u^{S_2}, u^{S_3}, u^{S_4}$ in the subsets $S_1, S_2, S_3, S_4$, respectively, and 5 unstable equilibrium points $u^{\Xi_1}, u^{\Xi_2}, u^{\Xi_3}, u^{\Xi_4}, u^{\Lambda}$ in $\Xi_1, \Xi_2, \Xi_3, \Xi_4, \Lambda$, respectively.

Take a further look at the dynamics in the subsets $\Xi_1, \Xi_2, \Xi_3, \Xi_4$. For example, in $\Xi_1 = [p_1, q_1] \times (q_2, \infty)$, system (1) takes the following form:

$$\begin{cases}
\dfrac{du_1(t)}{dt} = (-d_1 + w_{11} l_1) u_1(t) + w_{12} M_2 + w_{11} c_1 + I_1,\\[4pt]
\dfrac{du_2(t)}{dt} = -d_2 u_2(t) + w_{21} l_1 u_1(t) + w_{22} M_2 + w_{21} c_1 + I_2,
\end{cases} \tag{36}$$

where $l_1, c_1$ are defined as in (11). Denoting $x(t) = u(t) - u^{\Xi_1}$, we have

$$\begin{cases}
\dfrac{dx_1(t)}{dt} = (-d_1 + w_{11} l_1) x_1(t),\\[4pt]
\dfrac{dx_2(t)}{dt} = -d_2 x_2(t) + w_{21} l_1 x_1(t).
\end{cases} \tag{37}$$

It is easy to see that

(i) along the direction $u = (u_1^{\Xi_1}, u_2)$, the equilibrium $u^{\Xi_1}$ is attractive;
(ii) when $u_1 < u_1^{\Xi_1}$, $\frac{dx_1(t)}{dt} < 0$, so that $u_1(t)$ decreases until $u_1(t_0) \le p_1$ for some $t_0 > 0$, while $u_2(t)$ remains in $(q_2, +\infty)$ for all $t \ge 0$; hence, $(u_1(t), u_2(t))$ will be attracted to $S_1$;
(iii) similarly, when $u_1 > u_1^{\Xi_1}$, $(u_1(t), u_2(t))$ will be attracted to $S_2$.

Therefore,

1. $\{(u_1, u_2) : u_1 = u_1^{\Xi_1}\} \cap \Xi_1$ is in the attraction basin of $u^{\Xi_1}$;
2. $\{(u_1, u_2) : u_1 < u_1^{\Xi_1}\} \cap \Xi_1$ is in the attraction basin of $u^{S_1}$;
3. $\{(u_1, u_2) : u_1 > u_1^{\Xi_1}\} \cap \Xi_1$ is in the attraction basin of $u^{S_2}$.

For $u^{\Xi_2}, u^{\Xi_3}, u^{\Xi_4}$, we can obtain similar conclusions.

In $\Lambda$, consider the dynamics of the following system:

$$\frac{du^T(t)}{dt} = A u^T(t) + \alpha^T, \tag{38}$$

where $A$ is defined as in (18) and $\alpha = (\hat{I}_1, \hat{I}_2)$.
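The basin splitting described in items (i)-(iii) can be observed directly by simulation: starting in $\Xi_1$ on either side of the line $u_1 = u_1^{\Xi_1}$ selects a different stable equilibrium. A sketch with hypothetical parameters (the same saturated example as before, where $u^{\Xi_1} = (-0.1, 2.98)$):

```python
import numpy as np

def saturated(x):
    # Saturated activation of Fig. 1.
    return (np.abs(x + 1.0) - np.abs(x - 1.0)) / 2.0

def flow(u0, W, d, I, dt=0.005, steps=20000):
    # Forward-Euler flow of system (4); long horizon so trajectories settle.
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        u += dt * (-d * u + W @ saturated(u) + I)
    return u

W = np.array([[3.0, 0.2], [0.2, 3.0]])
d = np.ones(2); I = np.zeros(2)
# Two initial states in Xi_1 = [-1, 1] x (1, inf), on either side of u1 = -0.1.
left  = flow([-0.5, 2.0], W, d, I)   # attracted to u^{S1} = (-2.8, 2.8)
right = flow([ 0.5, 2.0], W, d, I)   # attracted to u^{S2} = ( 3.2, 3.2)
```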
Suppose that $\det(A) > 0$. Then all eigenvalues of $A$ have positive real parts, which implies that $u^{\Lambda}$ is unstable.

Define $y^T(t) = u^T(t) + A^{-1}\alpha^T$; then

$$\frac{dy^T(t)}{dt} = A y^T(t). \tag{39}$$

If $\hat{U}(t)$ is its fundamental matrix, then $y^T(t) = \hat{U}(t) y^T(0)$. Similar to time reversal, define $\tilde{y}^T(t) = \hat{U}^{-1}(t)\tilde{y}^T(0)$, $\tilde{u}^T(t) = \tilde{y}^T(t) - A^{-1}\alpha^T$. Then we have

$$\frac{d\tilde{y}^T(t)}{dt} = -A\tilde{y}^T(t), \tag{40}$$

and

$$\frac{d\tilde{u}^T(t)}{dt} = -A\tilde{u}^T(t) - \alpha^T, \tag{41}$$

which is an asymptotically stable system, all of whose trajectories converge to $u^{\Lambda}$.

Let

$$\Gamma_1 : \ \tilde{u}(t) = \big[(u_1^{\Xi_1}, q_2) + \alpha (A^T)^{-1}\big] e^{-A^T t} - \alpha (A^T)^{-1} \tag{42}$$

be the trajectory of system (41) with initial state $(u_1^{\Xi_1}, q_2)$, which is also a trajectory of system (38) with initial state near $u^{\Lambda}$ passing through $(u_1^{\Xi_1}, q_2)$ by time reversal. Then the attraction basin of $u^{\Xi_1}$ lies in $(\Xi_1 \cap \{u : u_1 = u_1^{\Xi_1}\}) \cup \Gamma_1$.

Similarly, define

$$\Gamma_2 : \ \tilde{u}(t) = \big[(p_1, u_2^{\Xi_2}) + \alpha (A^T)^{-1}\big] e^{-A^T t} - \alpha (A^T)^{-1}, \tag{43}$$

$$\Gamma_3 : \ \tilde{u}(t) = \big[(q_1, u_2^{\Xi_3}) + \alpha (A^T)^{-1}\big] e^{-A^T t} - \alpha (A^T)^{-1}, \tag{44}$$

$$\Gamma_4 : \ \tilde{u}(t) = \big[(u_1^{\Xi_4}, p_2) + \alpha (A^T)^{-1}\big] e^{-A^T t} - \alpha (A^T)^{-1}. \tag{45}$$

We can prove that

• the attraction basin of $u^{\Xi_2}$ lies in $(\Xi_2 \cap \{u : u_2 = u_2^{\Xi_2}\}) \cup \Gamma_2$;
• the attraction basin of $u^{\Xi_3}$ lies in $(\Xi_3 \cap \{u : u_2 = u_2^{\Xi_3}\}) \cup \Gamma_3$;
• the attraction basin of $u^{\Xi_4}$ lies in $(\Xi_4 \cap \{u : u_1 = u_1^{\Xi_4}\}) \cup \Gamma_4$.

Then, let

$\Delta_1$ be the region of $\Lambda$ bounded by $\Gamma_1$ and $\Gamma_2$,
$\Delta_2$ be the region of $\Lambda$ bounded by $\Gamma_1$ and $\Gamma_3$,
$\Delta_3$ be the region of $\Lambda$ bounded by $\Gamma_2$ and $\Gamma_4$,
$\Delta_4$ be the region of $\Lambda$ bounded by $\Gamma_3$ and $\Gamma_4$.

We claim that the attraction basin of $u^{S_1}$ is

$$AB(u^{S_1}) = \mathrm{int}\Big(S_1 \cup \big(\{u : u_1 < u_1^{\Xi_1}\} \cap \Xi_1\big) \cup \big(\{u : u_2 > u_2^{\Xi_2}\} \cap \Xi_2\big) \cup \Delta_1\Big),$$

where $\mathrm{int}(\cdot)$ denotes the interior of the subset.

Based on the previous discussions, what we need to prove is that any trajectory with initial state $u(0) \in \Delta_1$ converges to $u^{S_1}$. Suppose $u(t)$ is an arbitrary solution with initial state in $\Delta_1$. By uniqueness, $u(t)$ does not intersect the boundary of $AB(u^{S_1})$, which implies that $u(t)$ stays in $AB(u^{S_1})$ for all $t \ge 0$. Then, what we need to prove next is that $u(t)$ cannot remain bounded in $\Delta_1$.

If not, $u(t)$ is bounded in $\Delta_1$; denote by $\omega(u)$ the $\omega$-limit set of $u(t)$. Then $\omega(u) \ne \emptyset$, and obviously $u^{\Lambda} \notin \omega(u)$. By the Poincaré–Bendixson theorem, $u(t)$ is a closed orbit or $\omega(u)$ is a closed orbit. Because every trajectory of system (41) converges to $u^{\Lambda}$ exponentially, there is no closed orbit.

Hence, there must exist $t_0 > 0$ such that $u(t_0) \notin \Delta_1$. By conditions (5), the trajectory then stays in $AB(u^{S_1}) \setminus \Delta_1$, which implies that $u(t)$ converges to $u^{S_1}$. Similar conclusions can be derived for $u^{S_2}, u^{S_3}, u^{S_4}$, respectively; their attraction basins can be described as the interiors of the subsets

$$S_2 \cup \big(\{u : u_1 > u_1^{\Xi_1}\} \cap \Xi_1\big) \cup \big(\{u : u_2 > u_2^{\Xi_3}\} \cap \Xi_3\big) \cup \Delta_2,$$
$$S_3 \cup \big(\{u : u_2 < u_2^{\Xi_2}\} \cap \Xi_2\big) \cup \big(\{u : u_1 < u_1^{\Xi_4}\} \cap \Xi_4\big) \cup \Delta_3,$$
$$S_4 \cup \big(\{u : u_2 < u_2^{\Xi_3}\} \cap \Xi_3\big) \cup \big(\{u : u_1 > u_1^{\Xi_4}\} \cap \Xi_4\big) \cup \Delta_4.$$

Summing up, we have the following theorem.

Theorem 4. Suppose that conditions (5) are satisfied. If $\det(A) > 0$, then the whole phase plane $R^2$ can be divided into 4 parts, the interiors of which are the very attraction basins of the equilibria $u^{S_1}, u^{S_2}, u^{S_3}, u^{S_4}$, respectively, and the boundaries are the attraction basins of $u^{\Xi_1}, u^{\Xi_2}, u^{\Xi_3}, u^{\Xi_4}$, respectively.

When $r \ge 1$, $n = 2$, denote $l_{\theta}^k = \frac{m_{\theta}^{k+1} - m_{\theta}^k}{q_{\theta}^k - p_{\theta}^k}$, $\theta = 1, 2$, $k, j = 1, \dots, r$, and

$$A^{(k,j)} = \begin{pmatrix} -d_1 + w_{11} l_1^k & w_{12} l_2^j \\ w_{21} l_1^k & -d_2 + w_{22} l_2^j \end{pmatrix}.$$

Then we have

Theorem 5. Suppose that (35) (with $n = 2$) is satisfied. If $\det(A^{(k,j)}) > 0$ for all $k, j = 1, \dots, r$, then the whole phase plane $R^2$ can be divided into $(r+1)^2$ parts, the interiors of which are the very attraction basins of the equilibria $u^{S_1}, \dots, u^{S_{(r+1)^2}}$, respectively, and the boundaries are the attraction basins of $u^{\Xi_1}, \dots, u^{\Xi_{2r(r+1)}}$, respectively.

In the following, we discuss the n-neuron neural networks with $r = 1$. Theorem 2 tells us that under conditions (20), there are $3^n$ equilibrium points: $2^n$ of them are stable in the subsets $S_1, \dots, S_{2^n}$, and $3^n - 2^n$ of them are unstable and located in $\Xi_1, \dots, \Xi_{n2^{n-1}}, \Lambda_1, \dots, \Lambda_{3^n - 2^n - n2^{n-1}}$, respectively (see Fig. 4).
Next, we are to investigate their corresponding attraction basins.
Take a look at the dynamics in $\Xi_1 = [p_1, q_1] \times \prod_{i=2}^{n} (q_i, +\infty)$, for example, in which system (1) reduces to the following equations:

$$\begin{cases}
\dfrac{du_1(t)}{dt} = (-d_1 + w_{11} l_1) u_1(t) + \sum\limits_{j=2}^{n} w_{1j} M_j + w_{11} c_1 + I_1,\\[6pt]
\dfrac{du_i(t)}{dt} = -d_i u_i(t) + w_{i1} l_1 u_1(t) + \sum\limits_{j=2}^{n} w_{ij} M_j + w_{i1} c_1 + I_i, \qquad i = 2, \dots, n,
\end{cases}$$

where $l_i, c_i$, $i = 1, \dots, n$, are defined as in (23). Denote $x(t) = u(t) - u^{\Xi_1}$, and we have

$$\begin{cases}
\dfrac{dx_1(t)}{dt} = (-d_1 + w_{11} l_1) x_1(t),\\[4pt]
\dfrac{dx_i(t)}{dt} = -d_i x_i(t) + w_{i1} l_1 x_1(t), \qquad i \ge 2.
\end{cases} \tag{46}$$

Then, similar to the analysis of the case $n = 2$, it is easy to see that $u^{\Xi_1}$ is attractive only along the $(n-1)$-dimensional surface $x_1 = 0$, i.e., $u_1 = u_1^{\Xi_1}$. Correspondingly, $\Xi_1$ splits into 2 parts: $\{u : u_1 < u_1^{\Xi_1}\} \cap \Xi_1$ is in the attraction basin of $u^{S_1}$, and $\{u : u_1 > u_1^{\Xi_1}\} \cap \Xi_1$ is in the attraction basin of $u^{S_2}$, where $S_1 = (-\infty, p_1) \times \prod_{i=2}^{n} (q_i, +\infty)$ and $S_2 = \prod_{i=1}^{n} (q_i, +\infty)$. Similar conclusions can be derived for $\Xi_2, \dots, \Xi_{n2^{n-1}}$.
Next, consider the dynamics in the subset Λ1 = [p1, q1] × [p2, q2] × ∏_{i=3}^{n} (qi, +∞), in which system (1) reduces to the following equations:

  du1(t)/dt = (−d1 + w11 l1) u1(t) + w12 l2 u2(t) + Σ_{j=3}^{n} w1j Mj + w11 c1 + w12 c2 + I1,

  du2(t)/dt = w21 l1 u1(t) + (−d2 + w22 l2) u2(t) + Σ_{j=3}^{n} w2j Mj + w21 c1 + w22 c2 + I2,

  dui(t)/dt = −di ui(t) + wi1 l1 u1(t) + wi2 l2 u2(t) + Σ_{j=3}^{n} wij Mj + wi1 c1 + wi2 c2 + Ii,   i ≥ 3.

Denote x(t) = u(t) − u^{Λ1}; then we have

  dx1(t)/dt = (−d1 + w11 l1) x1(t) + w12 l2 x2(t),
  dx2(t)/dt = w21 l1 x1(t) + (−d2 + w22 l2) x2(t),      (47)
  dxi(t)/dt = −di xi(t) + wi1 l1 x1(t) + wi2 l2 x2(t),   i ≥ 3,

which implies that u^{Λ1} is attractive along the surface

  (−d1 + w11 l1) x1 + w12 l2 x2 = 0,
  w21 l1 x1 + (−d2 + w22 l2) x2 = 0.      (48)

Suppose all 2-ordered principal minors of Ã are positive. Then the above surface reduces to x1 = 0, x2 = 0, i.e., u1 = u1^{Λ1}, u2 = u2^{Λ1}, which is an (n − 2)-dimensional surface in the subset Λ1. Unlike an (n − 1)-dimensional surface, it cannot split Λ1 directly, and this case needs further investigation. The same applies to the other subsets Λ2, . . . , Λ_{3^n − 2^n − n2^{n−1}}.

5. Discussions

In Sections 2 and 3, we investigated the multistability of neural networks (1) with the activation function given in (2). The proposed method is also applicable when different neurons have different activation functions.

In fact, let r1, . . . , rn be constants such that the activation function fj of the j-th neuron has 2rj corner points. Then we have

Corollary 1. Suppose that

  −di p_i^k + wii fi(p_i^k) + Σ_{j≠i} max{wij m_j^1, wij m_j^{rj+1}} + Ii < 0,
  −di q_i^k + wii fi(q_i^k) + Σ_{j≠i} min{wij m_j^1, wij m_j^{rj+1}} + Ii > 0,      (49)

for k = 1, . . . , ri, i, j = 1, 2, . . . , n. Then the dynamical system (1) has (2r1 + 1)(2r2 + 1) · · · (2rn + 1) equilibrium points in all. Among them, (r1 + 1)(r2 + 1) · · · (rn + 1) equilibrium points are locally stable and the others are unstable.

Similarly, the method can also be used for neural networks (1) with multilevel sigmoid activation functions of the form

  gj(x) = −2r tanh(λj) + tanh(λj (x + 2r)),   −∞ < x ≤ −2r − 1,
  gj(x) = 2k tanh(λj) + tanh(λj (x − 2k)),    2k − 1 ≤ x ≤ 2k + 1,
  gj(x) = 2r tanh(λj) + tanh(λj (x − 2r)),    2r + 1 < x < ∞,      (50)

where λj > 0, j = 1, . . . , n, k = −r, . . . , −1, 0, 1, . . . , r. And we have

Corollary 2. Suppose that there are constants p_j^k, q_j^k ∈ (2k − 1, 2k + 1), k = −r, . . . , −1, 0, 1, . . . , r, such that

  −∞ < p_j^{−r} < q_j^{−r} < p_j^{−r+1} < q_j^{−r+1} < · · · < p_j^{r} < q_j^{r} < +∞,

and

  −di p_i^k + wii gi(p_i^k) + Σ_{j≠i} (2r tanh(λj) + 1)|wij| + Ii < 0,
  −di q_i^k + wii gi(q_i^k) − Σ_{j≠i} (2r tanh(λj) + 1)|wij| + Ii > 0,      (51)

for i, j = 1, 2, . . . , n, k = −r, . . . , −1, 0, 1, . . . , r. Then the dynamical system (1) has (4r + 3)^n equilibrium points, and among them (2r + 2)^n are locally stable.

It should be noted that under conditions (5), (20) and (35), the pattern storage of neural network (1) with activation function (2) reaches its maximum, and in this case the conditions wii li > di, i = 1, . . . , n, are necessary.

In fact, for any fixed index i, there exists at least one equilibrium point u* in the subset ∏_{j∈N1} (−∞, pj) × (−∞, pi) × ∏_{j∈N2} (qj, ∞), and at least one equilibrium point v* in the subset ∏_{j∈N1} (−∞, pj) × (qi, ∞) × ∏_{j∈N2} (qj, ∞), where the index sets N1, N2 satisfy N1 ∪ N2 ∪ {i} = {1, . . . , n} and N1 ∩ N2 = ∅. The i-th component of u* can be written as

  u*_i = (wii mi + Σ_{j∈N1} wij mj + Σ_{j∈N2} wij Mj + Ii) / di < pi,      (52)

while

  v*_i = (wii Mi + Σ_{j∈N1} wij mj + Σ_{j∈N2} wij Mj + Ii) / di > qi.      (53)

Then we have

  v*_i − u*_i = (wii Mi − wii mi) / di = wii li (qi − pi) / di > qi − pi,      (54)

which implies that wii li > di.

It is natural to ask what happens when wii li ≤ di, in particular when wii < 0, i.e., when the i-th neuron is self-inhibited. In this case, we find that system (1) may still have more than one equilibrium point, but the number must be less than 3^n (or, in general, (2r + 1)^n). We will not discuss this in detail here, but only illustrate it by an example. Consider the following 2-neuron neural network:

  du1(t)/dt = −2u1(t) + 5f1(u1(t)) − f2(u2(t)) + 1,
  du2(t)/dt = −4u2(t) − f1(u1(t)) + f2(u2(t)) − 1,      (55)

where the activation functions are fi(x) = (|x + 1| − |x − 1|)/2, i = 1, 2. It has 3 equilibrium points in all, (−2, 0), (−0.4, −0.2) and (10/3, −2/3), and the dynamics with more than 175 initial states are illustrated in Figs. 6 and 7.
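Note that in (55) we have w22 l2 = 1 < d2 = 4, so the maximal-storage condition fails for the second neuron, and indeed only 3 < 3^2 equilibria appear. The three listed points can be checked by direct substitution; a minimal sketch (our own verification, not code from the paper):

```python
def f(x):
    # activation f_i(x) = (|x + 1| - |x - 1|) / 2
    return (abs(x + 1) - abs(x - 1)) / 2

def rhs(u1, u2):
    # right-hand side of the self-inhibition example (55)
    return (-2 * u1 + 5 * f(u1) - f(u2) + 1,
            -4 * u2 - f(u1) + f(u2) - 1)

# the three claimed equilibria of (55)
points = [(-2.0, 0.0), (-0.4, -0.2), (10 / 3, -2 / 3)]
residuals = [max(abs(v) for v in rhs(u1, u2)) for u1, u2 in points]
```

All three residuals vanish up to floating-point rounding, confirming the listed equilibria.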
6. Simulations
In the following, we present three more examples to illustrate
the effectiveness of the theoretical results.
Fig. 6. The dynamics of (55) with different initial states.
Fig. 7. Details of the inset.
Fig. 8. The dynamics of (56) with different initial states.
Example 1. Consider the following neural network with 2 neurons:

  du1(t)/dt = −u1(t) + 4f1(u1(t)) + f2(u2(t)) − 1,
  du2(t)/dt = −2u2(t) + f1(u1(t)) + 5f2(u2(t)) − 1,      (56)

where the activation functions are fi(x) = (|x + 1| − |x − 1|)/2, i = 1, 2.

Fig. 9. Details of the inset.

It is easy to see that conditions (5) are satisfied. Therefore, by Theorem 1, there exist 9 equilibria, and 4 of them are locally stable while the others are unstable. In fact, the equilibrium points are (−6, −3.5), (−13/3, 2/3), (−4, 1.5), (2/3, −8/3), (1/4, 1/4), (0, 2), (2, −2.5), (3, 0), (4, 2.5). And
  Γ1: u1(t) = −(1/2)e^{−2t} + (1/4)e^{−4t} + 1/4,
      u2(t) = (1/2)e^{−2t} + (1/4)e^{−4t} + 1/4;

  Γ2: u1(t) = −(5/6)e^{−2t} − (5/12)e^{−4t} + 1/4,
      u2(t) = (5/6)e^{−2t} − (5/12)e^{−4t} + 1/4;

  Γ3: u1(t) = (1/2)e^{−2t} + (1/4)e^{−4t} + 1/4,
      u2(t) = −(1/2)e^{−2t} + (1/4)e^{−4t} + 1/4;

  Γ4: u1(t) = (5/6)e^{−2t} − (5/12)e^{−4t} + 1/4,
      u2(t) = −(5/6)e^{−2t} − (5/12)e^{−4t} + 1/4.
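Inside Λ the activations reduce to the identity, so (56) is linear there: du1/dt = 3u1 + u2 − 1, du2/dt = u1 + 3u2 − 1, with eigenvalues 4 and 2 at the equilibrium (1/4, 1/4). Differentiating the formulas above shows that all four curves solve this system in reversed time (du/dt = −F(u)) and approach the unstable equilibrium (1/4, 1/4) as t → ∞. A small check of this observation (our own, using the coefficients as reconstructed above):

```python
import numpy as np

# (a, b) pairs for Gamma_1..Gamma_4 as reconstructed above:
# u1 = a e^{-2t} + b e^{-4t} + 1/4,  u2 = -a e^{-2t} + b e^{-4t} + 1/4
curves = [(-1/2, 1/4), (-5/6, -5/12), (1/2, 1/4), (5/6, -5/12)]

def F(u1, u2):
    # (56) restricted to Lambda, where f_i(u) = u
    return 3 * u1 + u2 - 1, u1 + 3 * u2 - 1

max_err = 0.0
for a, b in curves:
    for t in np.linspace(0.0, 3.0, 50):
        e2, e4 = np.exp(-2 * t), np.exp(-4 * t)
        u1, u2 = a * e2 + b * e4 + 0.25, -a * e2 + b * e4 + 0.25
        # exact time derivatives of the curve formulas
        d1, d2 = -2 * a * e2 - 4 * b * e4, 2 * a * e2 - 4 * b * e4
        # the curves solve the time-reversed flow: du/dt = -F(u)
        f1, f2 = F(u1, u2)
        max_err = max(max_err, abs(d1 + f1), abs(d2 + f2))
```

The error is zero up to rounding, so the curves are indeed trajectories of (56) traced backward from the saturated regions into the unstable equilibrium.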
The dynamics of system (56) are illustrated in Figs. 8 and 9, where evolutions of more than 220 initial states have been tracked. They show that the 4 equilibrium points located in S1, S2, S3, S4 are stable, while the 4 equilibrium points located in Ξ1, Ξ2, Ξ3, Ξ4 are stable in

  {u ∈ Ξ1 : u1 = 0} ∪ Γ1,    {u ∈ Ξ2 : u2 = 2/3} ∪ Γ2,
  {u ∈ Ξ3 : u1 = 2/3} ∪ Γ3,  {u ∈ Ξ4 : u2 = 0} ∪ Γ4,

respectively, and the 1 equilibrium point located in Λ is unstable, as confirmed by the theoretical derivations. In the figures, solutions with 184 initial states in the attraction basins of u^{S1}, u^{S2}, u^{S3}, u^{S4} are depicted; the other 40 solutions, with initial states on u1 = 0, u1 = 2/3, u2 = 0, or u2 = 2/3, respectively, are depicted by straight lines. The 4 curves represent Γ1, Γ2, Γ3, Γ4 in Λ, respectively.
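Both the equilibrium list and the stability split of Example 1 can be double-checked numerically; a sketch (our own verification code; the fourth point is taken as (2/3, −8/3), since the printed value (2/3, −16/3) does not satisfy the equilibrium equations of (56)):

```python
import numpy as np

W = np.array([[4.0, 1.0], [1.0, 5.0]])   # weights of (56)
D = np.diag([1.0, 2.0])                  # decay rates d1, d2
I = np.array([-1.0, -1.0])               # external inputs

def f(x):
    return (np.abs(x + 1) - np.abs(x - 1)) / 2

def fprime(x):
    # slope of f: 1 inside (-1, 1), 0 outside
    return np.where(np.abs(x) < 1, 1.0, 0.0)

# the nine equilibria; (2/3, -8/3) replaces the printed (2/3, -16/3),
# which does not make the right-hand side vanish
eq = [(-6, -3.5), (-13/3, 2/3), (-4, 1.5), (2/3, -8/3), (1/4, 1/4),
      (0, 2), (2, -2.5), (3, 0), (4, 2.5)]

residuals, stable = [], 0
for p in eq:
    u = np.array(p, dtype=float)
    residuals.append(np.max(np.abs(-D @ u + W @ f(u) + I)))
    J = -D + W * fprime(u)   # Jacobian: column j of W scaled by f'(u_j)
    if np.all(np.linalg.eigvals(J).real < 0):
        stable += 1
```

The residuals all vanish, and exactly the 4 corner equilibria have Jacobians with negative eigenvalues, matching Theorem 1.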
Fig. 10. The dynamics of (57) with different initial states.
Fig. 11. Details of the inset.
Example 2. Consider the following 2-neuron neural network:

  du1(t)/dt = −2u1(t) + 6f(u1(t)) − (1/2) f(u2(t)) − 1,
  du2(t)/dt = −(3/2) u2(t) − (1/2) f(u1(t)) + 4f(u2(t)) − 1/2,      (57)

where the activation function is

  f(x) = −1,     x ≤ −1,
         x,      −1 ≤ x ≤ 1,
         1,      1 < x ≤ 3,
         x − 2,  3 < x ≤ 5,
         3,      x > 5.      (58)

It is easy to see that conditions (35) are satisfied. By Theorem 3, there are 5^2 = 25 equilibria of (57), and 3^2 = 9 of them are locally stable while the others are unstable.

The dynamics of system (57) are illustrated in Figs. 10 and 11, where evolutions of more than 500 initial states have been tracked. They show that the 9 equilibrium points located in S1, . . . , S9 are stable, and the 12 equilibrium points located in Ξ1, . . . , Ξ12 are stable in the sets

  {u ∈ Ξ1 : u1 = 0.625},   {u ∈ Ξ2 : u1 = 3.625},   {u ∈ Ξ3 : u2 = 3.2},
  {u ∈ Ξ4 : u2 = 3.6},     {u ∈ Ξ5 : u2 = 4},       {u ∈ Ξ6 : u1 = 0.375},
  {u ∈ Ξ7 : u1 = 3.375},   {u ∈ Ξ8 : u2 = 0},       {u ∈ Ξ9 : u2 = 0.4},
  {u ∈ Ξ10 : u2 = 0.8},    {u ∈ Ξ11 : u1 = 0.125},  {u ∈ Ξ12 : u1 = 3.125},

respectively, while the 4 equilibrium points located in Λ1, . . . , Λ4 are unstable. These figures confirm the theoretical results. In them, solutions with initial states in the attraction basins of u^{S1}, . . . , u^{S9} are depicted; solutions with initial states on u1 = 0.625, 3.625, 0.375, 3.375, 0.125, 3.125, or u2 = 3.2, 3.6, 4, 0, 0.4, 0.8, respectively, are depicted by straight lines; and the 16 curves represent Γ1, . . . , Γ16 in Λ1, . . . , Λ4, respectively.

Example 3. Consider the following 3-neuron neural network:

  du1(t)/dt = −u1(t) + 5f(u1(t)) − f(u2(t)) − f(u3(t)) − 1,
  du2(t)/dt = −2u2(t) − f(u1(t)) + 6f(u2(t)) − f(u3(t)) + 1,      (59)
  du3(t)/dt = −u3(t) − f(u1(t)) − f(u2(t)) + 6f(u3(t)) + 2,

with activation f(x) = (|x + 1| − |x − 1|)/2.

It is easy to see that conditions (20) are satisfied. Therefore, by Theorem 2, system (59) has 3^3 = 27 equilibria, and 2^3 = 8 of them are locally stable while the others are unstable. We will not discuss the complete attraction basin of each equilibrium point here, because it is quite complicated; we leave it as an issue for later investigation. Instead, we show the dynamics in part of the attraction basins of u^{S1}, . . . , u^{S8} in Figs. 12–15, where evolutions of more than 350 initial states have been tracked. It can be seen that the 8 equilibrium points located in S1, . . . , S8 are locally stable, and the 12 equilibrium points located in Ξ1, . . . , Ξ12 are stable in
  {u ∈ Ξ1 : u1 = 0.75},    {u ∈ Ξ2 : u2 = −0.25},   {u ∈ Ξ3 : u2 = 0.25},
  {u ∈ Ξ4 : u1 = 0.25},    {u ∈ Ξ5 : u3 = −0.4},    {u ∈ Ξ6 : u3 = 0},
  {u ∈ Ξ7 : u3 = −0.8},    {u ∈ Ξ8 : u3 = −0.4},    {u ∈ Ξ9 : u1 = 0.25},
  {u ∈ Ξ10 : u2 = −0.75},  {u ∈ Ξ11 : u2 = −0.25},  {u ∈ Ξ12 : u1 = −0.25},

respectively, while the 7 equilibrium points located in Λ1, . . . , Λ7 are unstable, which confirms the theoretical results.
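The equilibrium counts for (57) and (59), namely 5^2 = 25 with 3^2 = 9 stable and 3^3 = 27 with 2^3 = 8 stable, can be reproduced by brute force: each combination of linear pieces of the activations fixes a linear system, and a combination contributes an equilibrium exactly when the solution falls back inside the chosen pieces. A sketch (our own enumeration code, with piece boundaries handled by a small tolerance):

```python
import itertools
import numpy as np

def equilibria(W, d, I, pieces_per_neuron):
    # Each activation piece is (slope, intercept, lo, hi): f(x) = slope*x + intercept
    # for x in [lo, hi]. Solve the linear system for each combination of pieces and
    # keep the solution only if every coordinate lies in its chosen piece.
    D = np.diag(d)
    found, n_stable = [], 0
    for combo in itertools.product(*pieces_per_neuron):
        S = np.diag([p[0] for p in combo])
        c = np.array([p[1] for p in combo])
        u = np.linalg.solve(W @ S - D, -(W @ c + I))
        if all(lo - 1e-9 <= ui <= hi + 1e-9
               for ui, (_, _, lo, hi) in zip(u, combo)):
            found.append(u)
            J = -D + W @ S          # Jacobian on this linear piece
            if np.all(np.linalg.eigvals(J).real < 0):
                n_stable += 1
    return found, n_stable

inf = float("inf")
f58 = [(0, -1, -inf, -1), (1, 0, -1, 1), (0, 1, 1, 3),
       (1, -2, 3, 5), (0, 3, 5, inf)]                      # activation (58), r = 2
fsat = [(0, -1, -inf, -1), (1, 0, -1, 1), (0, 1, 1, inf)]  # (|x+1| - |x-1|)/2

# Example 2, system (57)
eq2, st2 = equilibria(np.array([[6.0, -0.5], [-0.5, 4.0]]),
                      [2.0, 1.5], np.array([-1.0, -0.5]), [f58, f58])

# Example 3, system (59)
eq3, st3 = equilibria(np.array([[5.0, -1.0, -1.0],
                                [-1.0, 6.0, -1.0],
                                [-1.0, -1.0, 6.0]]),
                      [1.0, 2.0, 1.0], np.array([-1.0, 1.0, 2.0]),
                      [fsat, fsat, fsat])
```

The stable equilibria are exactly those whose region uses only flat pieces, where the Jacobian reduces to −D; regions containing a slope-one coordinate have positive trace and are unstable.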
Fig. 12. Dynamics in attraction basins of u^{S1}, . . . , u^{S8}.
Fig. 13. Dynamics in subsets S1, S3, S5, S7, Ξ2, Ξ5, Ξ7, Ξ10, Λ3, viewed along the positive direction of the x-axis.
Fig. 14. Dynamics in subsets S3, S4, S7, S8, Ξ4, Ξ7, Ξ8, Ξ12, Λ6, viewed along the positive direction of the y-axis.
Fig. 15. Dynamics in subsets S1, S2, S3, S4, Ξ1, Ξ2, Ξ3, Ξ4, Λ1, viewed along the positive direction of the z-axis.

7. Conclusions

In this paper, we study neural networks with a class of activation functions that are nondecreasing and piecewise linear with 2r (r ≥ 1) corner points. We prove that such neural networks have multiple equilibria, some locally stable and the others unstable. More precisely, such neural networks have (2r + 1)^n equilibria in all, (r + 1)^n of which are locally exponentially stable while the others are unstable. We also give an attraction region for each locally stable equilibrium. For 2-neuron neural networks, the attraction basins of the stable equilibrium points can be precisely depicted.

Acknowledgements

Wang Lili is supported by the Graduate Innovation Foundation of Fudan University under Grant EYH1411028. Lu Wenlian is supported by the National Natural Science Foundation of China under Grant No. 60804044, and sponsored by the Shanghai Pujiang Program under Grant No. 08PJ14019. Chen Tianping is supported by the National Natural Science Foundation of China under Grant Nos. 60774074 and 60974015, and by SGST 09DZ2272900.

References
Cao, J., Feng, G., & Wang, Y. (2008). Multistability and multiperiodicity of delayed
Cohen–Grossberg neural networks with a general class of activation functions.
Physica D: Nonlinear Phenomena, 237(13), 1734–1749.
Chen, T. (2001). Global exponential stability of delayed Hopfield neural networks.
Neural Networks, 14(8), 977–980.
Chen, T., & Amari, S. (2001). New theorems on global convergence of some
dynamical systems. Neural Networks, 14, 251–255.
Cheng, C., Lin, K., & Shih, C. (2006). Multistability in recurrent neural networks. SIAM
Journal on Applied Mathematics, 66(4), 1301–1320.
Cheng, C., Lin, K., & Shih, C. (2007). Multistability and convergence in delayed neural
networks. Physica D: Nonlinear Phenomena, 225(1), 61–74.
Cohen, M. A., & Grossberg, S. (1983). Absolute stability of global pattern formation
and parallel memory storage by competitive neural networks. IEEE Transactions
on Systems, Man and Cybernetics, 13(5), 815–826.
Douglas, R., Koch, C., Mahowald, M., Martin, K., & Suarez, H. (1995). Recurrent
excitation in neocortical circuits. Science, 269, 981–985.
Ellias, S. A., & Grossberg, S. (1975). Pattern formation, contrast control, and
oscillations in the short term memory of shunting on-center off-surround
networks. Biological Cybernetics, 20, 69–98.
Grossberg, S. (1973). Contour enhancement, short term memory, and constancies
in reverberating neural networks. Studies in Applied Mathematics, 213–257.
Hahnloser, R., Sarpeshkar, R., Mahowald, M., Douglas, R., & Seung, H. (2000). Digital
selection and analogue amplification coexist in a cortex inspired silicon circuit.
Nature, 405, 947–951.
Ma, J., & Wu, J. (2007). Multistability in spiking neuron models of delayed recurrent
inhibitory loops. Neural Computation, 19, 2124–2148.
Remy, É., Ruet, P., & Thieffry, D. (2008). Graphic requirements for multistability
and attractive cycles in a Boolean dynamical framework. Advances in Applied
Mathematics, 41, 335–350.
Shayer, L. P., & Campbell, S. A. (2000). Stability, bifurcation, and multistability in
a system of two coupled neurons with multiple time delays. SIAM Journal on
Applied Mathematics, 61(2), 673–700.
Wersing, H., Beyn, W. J., & Ritter, H. (2001). Dynamical stability conditions
for recurrent neural networks with unsaturating piecewise linear transfer
functions. Neural Computation, 13(8), 1811–1825.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in
localized populations of model neurons. Biophysical Journal, 12, 1–24.
Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional
dynamics of cortical and thalamic nervous tissue. Biological Cybernetics,
55–80.
Zeng, Z., & Wang, J. (2006). Multiperiodicity and exponential attractivity evoked
by periodic external inputs in delayed cellular neural networks. Neural
Computation, 18, 848–870.
Zhang, Y., & Tan, K. (2004). Multistability of discrete-time recurrent neural networks
with unsaturating piecewise linear activation functions. IEEE Transactions on
Neural Networks, 15(2), 329–336.
Zhang, L., Yi, Z., & Yu, J. (2008). Multiperiodicity and attractivity of delayed recurrent
neural networks with unsaturating piecewise linear transfer functions. IEEE
Transactions on Neural Networks, 19(1), 158–167.