
CHIN. PHYS. LETT. Vol. 26, No. 4 (2009) 048901
Agreement Dynamics of Memory-Based Naming Game with Forgetting Curve of Ebbinghaus*
SHI Xiao-Ming(石晓明)1 , ZHANG Jie-Fang(张解放)1**
Institute of Nonlinear Physics, Zhejiang Normal University, Jinhua 321004
(Received 31 October 2008)
We propose a memory-based naming game (MBNG) model with two-state variables on fully connected networks, which resembles some previous opinion propagation models. We find that the model is strongly affected by the memory decision parameter, and its dynamical behaviour can be partly analysed by numerical simulation and analytical argument. We also report a modified MBNG model that incorporates the Ebbinghaus forgetting curve into the memory. With one parameter of the MBNG model removed, the modified model converges to success rate 𝑆(𝑡) = 1, and the average sum 𝐸(𝑡) is determined by the network size 𝑁.
PACS: 89.75.Fb, 05.65.+b, 89.65.Ef
In the past few years, research on language conventions and opinion exchange, in which a distributed group of agents negotiate with one another in an open population, has developed rapidly,[1−6] especially in the area focusing on the development of shared communication systems based on multiple agents.[7−9] Early studies have mainly dealt with two-state variable models.[2,3,10,11] For example, in the Sznajd model,[2] every node has a ↑ or ↓ state. Such models can help us to understand the dynamics of real systems.
Recently, attention has turned to the naming game (NG).[12] Based on this framework, Baronchelli et al.[13,14] proposed a minimal version of the NG model, which simplifies the game by restricting the evolution to a single object. The model originated from the so-called talking heads experiment.[15] Because the system reaches a final global consensus starting from multi-word states, we think it is especially suitable for describing information propagation.
The NG model has since been studied on several typical networks.[13,16−22] Those results revealed many interesting properties of the NG model. Motivated by the earlier models that have only two variables, we present a modified naming game model in which the outcome of each exchange is determined by the ratio of the two opinions stored in memory. In this model we find that some results differ from those of ordinary NG models. We also obtain analytical conclusions for the peak value of the average sum at different memory decision thresholds.
In previous NG models, agents fix on a word as soon as they receive a word that is already in their memories, and then delete all other words from their memories. However, in real society one should take into account recollection over a lifetime. It is improper to clear all memories, because they influence the agents' subsequent decisions.
In the following, we describe the evolutionary rules of our memory-based naming game (MBNG). With the memory parameter 𝛾, we modify the game into a two-variable model; this is unusual for the NG, because the original model would become trivial with only two choices. In our model there are only two choices, ↑ and ↓, for each agent, much like the Sznajd model, and memories are never erased during the whole evolution. We introduce a threshold 𝛾: when the rate of one choice in an agent's memory storeroom exceeds the threshold, the agent selects this opinion and holds it externally.
The MBNG model is set on a fully connected network, and each agent has an empty memory at the beginning. The rules for the system evolution are as follows:
(a) A speaker 𝑖 and a hearer 𝑗 are chosen at random at each step. If the speaker holds no opinion, we give him one opinion, for instance ↑, drawn according to the overall initial rate in the whole network, and he transmits it to 𝑗; otherwise, if 𝑖 already holds one or more external opinions, he sends one of his external opinions to 𝑗.
(b) When 𝑗 receives the opinion from 𝑖, he adds it to his own memory. Since there are only two variables in memory, if the rate of either opinion in his memory exceeds the threshold 𝛾, 𝑗 holds this opinion outwardly.
(c) If, after receiving the opinion from 𝑖, 𝑗 holds the same opinion as 𝑖, we define the interaction as a success; otherwise we define it as a failure. In each step, the hearer adds the opinion from the speaker to his memory whether or not they communicate successfully, and never deletes the earlier memories. If 𝑗 receives the opinion from 𝑖 and finds that neither of the two variables reaches the threshold, he adds this opinion from 𝑖 to his own external opinion, which means that he is hesitating, ↕, and the value that he holds is 0.
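To make rules (a)–(c) concrete, the following is a minimal Python sketch of one interaction step on a fully connected network. The class and function names (Agent, mbng_step) and the bookkeeping details are illustrative choices of ours, not taken from the original implementation.

```python
import random

# Opinion values: up, down, and the hesitating state with value 0 (rule (c)).
UP, DOWN, HESITATING = 1, -1, 0

class Agent:
    def __init__(self):
        self.memory = []       # every opinion ever received; never erased
        self.external = None   # None means "no opinion yet"

    def update_external(self, gamma):
        """Rule (b): adopt an opinion externally if its rate in memory exceeds gamma."""
        total = len(self.memory)
        if total == 0:
            return
        up_rate = self.memory.count(UP) / total
        if up_rate > gamma:
            self.external = UP
        elif 1 - up_rate > gamma:
            self.external = DOWN
        else:
            self.external = HESITATING   # neither opinion reaches the threshold

def mbng_step(agents, gamma, beta):
    """One speaker-hearer interaction; returns True for a success, False for a failure."""
    speaker, hearer = random.sample(agents, 2)
    if speaker.external is None:
        # Rule (a): assign an opinion according to the initial rate beta of DOWN.
        speaker.external = DOWN if random.random() < beta else UP
    # A hesitating speaker holds both opinions and transmits one of them.
    word = speaker.external if speaker.external != HESITATING else random.choice([UP, DOWN])
    hearer.memory.append(word)       # the hearer always stores the received opinion
    hearer.update_external(gamma)
    return hearer.external == word   # rule (c): success if the external opinions now agree
```

A full realization simply repeats mbng_step for the desired number of steps, recording the success indicator and the external opinions.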
*Supported by the National Natural Science Foundation of China under Grant No 10672147.
**To whom correspondence should be addressed. Email: jf [email protected]
© 2009 Chinese Physical Society and IOP Publishing Ltd
In our model, each node has an exterior state and a memory storeroom. The memory storeroom records all opinions received from others during the evolution. The exterior state is the result of analysing the two variables in the memory storeroom and choosing the one that exceeds the threshold. If the hearer 𝑗 obtains an opinion from the speaker 𝑖 but finds that the rate of neither opinion reaches the threshold 𝛾, he puts the received opinion into his memory storeroom and his exterior state separately. In this situation we have to add the state ↕, which means that he holds two or more opinions at the surface. Of course, if 𝑗 is then chosen as a speaker, he selects one of the opinions he holds at the surface and transmits it to the hearer. In other words, nodes can only transfer external opinions to others. In the subsequent evolution, he remains ↕ until, at some later time, other spoken opinions allow one variable to reach the threshold.
In this model, 𝐸(𝑡) is the average value of all agents' external opinions, for which we set ↑ as 1 and ↓ as −1. Here 𝛽 is the initial rate of the opinion ↓ in the whole network. Then
$$E(t)=\frac{1}{N}\sum_{i=1}^{N}\chi_i,$$
in which 𝑁 is the size of the network and 𝜒𝑖 is the opinion that agent 𝑖 holds (1 or −1).
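As a small illustration, 𝐸(𝑡) can be measured directly from the external states. The helper below reuses the Agent class and opinion constants from the sketch above; agents that are hesitating or still without an opinion are counted as 0, following the convention of rule (c).

```python
def average_sum(agents):
    """E(t): average of the external opinions, with UP = 1, DOWN = -1 and 0 otherwise."""
    return sum(a.external if a.external in (UP, DOWN) else 0 for a in agents) / len(agents)
```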
Table 1. For diverse 𝛾, all 𝛽 reach the peak at the same time, shown here.

𝛾          0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
Peak time  7600   6280   5600   6180   5140   5420   5720   3220   3140

Fig. 1. Comparison of simulation results (symbols) with theoretical predictions (lines). The network size 𝑁 is 1000. Each simulation data point is obtained by averaging over 1000 realizations. In Eqs. (2)–(5), 𝜆1 = 1, 𝜆2 = −1, 𝜆3 = 0.5, 𝜆4 = 0.15, 𝜇1 = 7.4, 𝜇2 = −12.9, 𝜇3 = 0.97, 𝜇4 = 2.5.
Table 1 shows that different 𝛾 give different peak times; that is, for a given 𝛾, the time 𝑡 at which distinct 𝛽 reach their maximum (when 𝛽 > 0.5) or minimum (when 𝛽 < 0.5) is the same. We find that, for a given 𝛾, the peak value as a function of 𝛽 fits a Boltzmann distribution when 𝛾 is less than 0.5; for larger 𝛾 the dependence becomes linear:
$$Y_{\text{peak value}}=\begin{cases}A_2+(A_1-A_2)\Big[1+\exp\Big(\dfrac{\beta-x_0}{dx}\Big)\Big]^{-1}, & \text{for }\gamma\le 0.5,\\[1ex] 1-2\beta, & \text{for }\gamma>0.5.\end{cases}\qquad(1)$$
In the Boltzmann distribution there are four parameters, all of which fit a similar Gauss distribution in 𝛾. The simulation results and theoretical predictions are compared in Fig. 1:
$$A_1=\lambda_1+\frac{\mu_1}{0.24\sqrt{\pi/2}}\exp\Big[-2\Big(\frac{\gamma-0.86}{0.24}\Big)^2\Big],\qquad(2)$$
$$A_2=\lambda_2+\frac{\mu_2}{0.24\sqrt{\pi/2}}\exp\Big[-2\Big(\frac{\gamma-0.86}{0.24}\Big)^2\Big],\qquad(3)$$
$$x_0=\lambda_3+\frac{\mu_3}{0.24\sqrt{\pi/2}}\exp\Big[-2\Big(\frac{\gamma-0.86}{0.24}\Big)^2\Big],\qquad(4)$$
$$dx=\lambda_4+\frac{\mu_4}{0.24\sqrt{\pi/2}}\exp\Big[-2\Big(\frac{\gamma-0.86}{0.24}\Big)^2\Big].\qquad(5)$$
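For concreteness, the fitted peak value can be evaluated numerically from Eqs. (1)–(5) with the parameter values quoted in the Fig. 1 caption. The short sketch below is our own; the function names are illustrative.

```python
import math

LAMBDA = (1.0, -1.0, 0.5, 0.15)      # lambda_1 ... lambda_4 from the Fig. 1 caption
MU = (7.4, -12.9, 0.97, 2.5)         # mu_1 ... mu_4 from the Fig. 1 caption
XC, W = 0.86, 0.24                   # centre and width of the Gauss fits, Eqs. (2)-(5)

def gauss_fit(lam, mu, gamma):
    """Eqs. (2)-(5): lam + mu / (W*sqrt(pi/2)) * exp(-2*((gamma - XC)/W)**2)."""
    return lam + mu / (W * math.sqrt(math.pi / 2)) * math.exp(-2 * ((gamma - XC) / W) ** 2)

def peak_value(beta, gamma):
    """Eq. (1): Boltzmann form for gamma <= 0.5, linear form 1 - 2*beta for gamma > 0.5."""
    if gamma > 0.5:
        return 1 - 2 * beta
    a1, a2, x0, dx = (gauss_fit(l, m, gamma) for l, m in zip(LAMBDA, MU))
    return a2 + (a1 - a2) / (1 + math.exp((beta - x0) / dx))

print(peak_value(0.2, 0.3))          # example: predicted peak value for beta = 0.2, gamma = 0.3
```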
Fig. 2. 𝑆(𝑡) of the MBNG model for random 𝛽 and 𝛾. The network size 𝑁 is 1000. Each simulation data point is obtained by averaging over 1000 realizations.
Next, we discuss 𝑆(𝑡), which is an important quantity in the NG model. From Ref. [9] we know that in the original NG model the curve shows a sharp change from 0 to 1. To measure the rate of successful communication, we plot 𝑆(𝑡) of the MBNG model in Fig. 2. In our model 𝑆(𝑡) behaves differently from that in previous models: the curves oscillate strongly all the time. As 𝑡 increases, 𝑆(𝑡) passes through a maximum, which can be seen clearly in the inset. This is a notable feature of this model, because unification never emerges.
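The success rate is simply the fraction of successful interactions. A minimal way to estimate it, assuming the mbng_step sketch above, is to average the success indicator over many independent realizations at each step:

```python
def success_rate(n_agents, gamma, beta, n_steps, n_runs=100):
    """Estimate S(t) by averaging the success indicator over n_runs realizations."""
    s = [0.0] * n_steps
    for _ in range(n_runs):
        agents = [Agent() for _ in range(n_agents)]
        for t in range(n_steps):
            s[t] += mbng_step(agents, gamma, beta) / n_runs
    return s
```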
In the NG model, memory is pivotal because it is infinite in theory but finite in practice. Earlier works have discussed the influence of memory length in the original model.[23] However, it is evident
that those results pay more attention to mathematics than to reality. We apply the Ebbinghaus retention curve[24] to the MBNG model and remove the parameter 𝛾:
$$b=\frac{100k}{(\log t)^{c}+k}.\qquad(6)$$
Equation (6) is the formula of the Ebbinghaus retention curve, in which 𝑘 = 1.84 and 𝑐 = 1.25. Here 𝑡 is the time in minutes, counted from one minute before the end of the learning, and 𝑏 is the saving of work evident in relearning, i.e. the equivalent of the amount remembered from the first learning, expressed as a percentage of the time necessary for the first learning.[24] Because 𝑡 in the formula is measured in minutes, we use a weighted time 720𝑡 to expand the time span. Then we obtain our formula for the Ebbinghaus retention curve:
$$b=\frac{100k}{(\log 720t)^{c}+k}.\qquad(7)$$
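A direct transcription of Eq. (7) is given below, assuming (as in Ebbinghaus's original formula) that log denotes the common (base-10) logarithm; the paper does not state the base explicitly.

```python
import math

K, C = 1.84, 1.25   # constants of the Ebbinghaus retention curve, Eq. (6)

def retention(t):
    """b(t) = 100*K / ((log10(720*t))**C + K), valid for t >= 1."""
    return 100 * K / (math.log10(720 * t) ** C + K)
```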
Next, we turn to 𝑆(𝑡), which oscillates strongly throughout the evolution of the MBNG model. By removing the parameter 𝛾, we hope to find convergence of 𝑆(𝑡) in this modified model.
In this situation, we simplify the rules of the MBNG model: in step (b), when 𝑗 receives an opinion from 𝑖, he adds it to his own memory, and if the rate of the opinion ↑ or ↓ in memory is larger than that of the other, 𝑗 holds this opinion externally. In each step of the evolution, the memory is weighted with Eq. (7). For instance, if someone stores ↑ at step 𝑡, the memory for it is $\uparrow\cdot b(t)=1\cdot\frac{100k}{(\log 720t)^{c}+k}$, while at time 𝑡 + 1 the memory for it is $\uparrow\cdot b(t+1)=1\cdot\frac{100k}{(\log 720(t+1))^{c}+k}$. We then obtain the sums of memory for 1 and −1 along the whole process of evolution and compare their absolute values to decide which opinion the agent should hold.
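A sketch of this weighted decision rule is shown below, reusing retention() and the opinion constants from the earlier sketches. Since the paper does not spell out the bookkeeping, we assume here that each stored opinion is weighted by 𝑏 evaluated at the number of steps elapsed since it was heard; this is one plausible reading, not necessarily the authors' exact implementation.

```python
def weighted_external(memory, now):
    """memory: list of (opinion, step_heard) pairs, opinion = UP (+1) or DOWN (-1).
    Return the opinion whose retention-weighted sum is larger (HESITATING on a tie)."""
    up = sum(retention(now - heard + 1) for op, heard in memory if op == UP)
    down = sum(retention(now - heard + 1) for op, heard in memory if op == DOWN)
    if up == down:
        return HESITATING
    return UP if up > down else DOWN
```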
Fig. 4. Evolution of 𝑆(𝑡) with 𝛾 = 0.1. Inset: convergence time 𝑇𝑐 versus 𝛽. In this simulation 𝑁 = 1000.
Each simulation data point is obtained by averaging over
1000 realizations.
First, we observe the average sum 𝐸(𝑡) as a function of 𝛽. In Fig. 3 we obtain a result similar to that of the MBNG model. Moreover, 𝐸(𝑡) has a symmetrical structure, in which the simulations for 𝛽 → 0 and 𝛽 → 1 fit an exponential expression involving the network size 𝑁:
$$E(t)=\begin{cases}-\exp(-t/N)+1, & \text{for }\beta\to 0,\\ \phantom{-}\exp(-t/N)-1, & \text{for }\beta\to 1.\end{cases}\qquad(8)$$
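Equation (8) can be evaluated directly; a small helper for comparison with the simulated curves (the naming is our own) is:

```python
import math

def e_limit(t, n, beta_to_one=False):
    """Eq. (8): E(t) = 1 - exp(-t/n) for beta -> 0, and exp(-t/n) - 1 for beta -> 1."""
    value = 1 - math.exp(-t / n)
    return -value if beta_to_one else value
```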
Fig. 5. Convergence time as a function of 𝑁 and 𝛽. Each simulation data point is obtained by averaging over 1000 realizations.
Fig. 3. Theoretical prediction of Eq. (8) together with the simulation results (symbols) for 𝛽 → 0 with 𝑁 = 500, 1000 and 2000 [(a)–(c)], and for 𝛽 → 1 [(d)–(f)]. Each simulation data point is obtained by averaging over 1000 realizations.
From Fig. 4, we find that without 𝛾 the success rate of this model can reach 1 in the end. However, as 𝛽 increases, 𝑇𝑐 passes through a peak. In order to observe the relationship between 𝑁 and 𝑇𝑐, we present a 3D plot in Fig. 5. As 𝑁 rises, 𝑇𝑐 as a function of 𝛽 grows quickly.
In summary, we have investigated the behaviour of
the memory-based naming game model (MBNG) with
two variables on a fully connected graph. In general, we find that the evolution of this model depends on the initial value. The peak value in this model shows a peculiar dependence on the parameters 𝛽 and 𝛾. We obtain 𝑆(𝑡) in the MBNG model, which cannot reach 0 but always passes through a peak.
We then add the Ebbinghaus retention curve to the MBNG model and delete one parameter. 𝑆(𝑡) in this modified model converges to 1, while 𝑆(𝑡) in the MBNG model oscillates drastically. It is worth emphasizing that the size of the network plays an important role in the average sum 𝐸(𝑡) and the peak value of 𝑇𝑐.
The influence of network structure has been discussed in recent works on naming games. It would be interesting to apply our MBNG model to different networks, e.g. random networks, small-world networks and scale-free networks. Moreover, our model is useful not only for two variables, but also for a finite or infinite number of variables.
References
[1] Axelrod R 1997 J. Conflict Resolut. 41 203
[2] Sznajd-Weron K and Sznajd J 2000 Int. J. Mod. Phys. C
11 1157
[3] Liggett T M 1999 Stochastic Interacting Systems: Contact,
Voter, and Exclusion Processes (New York: Springer)
[4] Krapivsky P L 1992 Phys. Rev. A 45 1067
[5] Krapivsky P L and Redner S 2003 Phys. Rev. Lett. 90
238701
[6] Deffuant G, Neau D, Amblard F and Weisbuch G 2000 Adv.
Complex Syst. 3 87
[7] Nowak M A and Krakauer D C 1999 Proc. Natl. Acad.
Sci. 96 8028
[8] Nowak M A, Plotkin J B and Jansen V A 2000 Nature 404
495
[9] Steels L 1999 Auton. Agents Multi-Agent Syst. 1 301
[10] Sznajd-Weron K and Sznajd J 2003 Physica A 324 437
[11] Sood V and Redner S 2005 Phys. Rev. Lett. 94 178701
[12] Steels L 1995 Artif. Life 2 319
[13] Baronchelli A, Felici M, Caglioti E, Loreto V and Steels L
2006 J. Stat. Mech.: Theory Exp. P06014
[14] Baronchelli A, Loreto V and Steels L arXiv: 0803.0398v
[15] Steels L 1998 Auton. Agents Multi-Agent Syst. 1 169
[16] Baronchelli A, Dall'Asta L, Barrat A and Loreto V 2006
in Artificial Life X: Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems (Bloomington, USA 3–7 June 2006) ed Rocha
L M, Yaeger L S, Bedau M A, Floreano D, Goldstone
R L and Vespignani A (Cambridge, MA: MIT) p 480
arXiv:physics/0511201
[17] Baronchelli A, Dall'Asta L, Barrat A and Loreto V 2006
Phys. Rev. E 73 015102(R)
[18] Dall'Asta L, Baronchelli A, Barrat A and Loreto V 2006
Europhys. Lett. 73 969
[19] Dall'Asta L, Baronchelli A, Barrat A and Loreto V 2006
Phys. Rev. E 74 036105
[20] Lu Q, Korniss G and Szymanski B K 2008 Phys. Rev. E 77
016111
[21] Tang C L, Lin B Y, Wang W X, Hu M B and Wang B H
2007 Phys. Rev. E 75 027101
[22] Lin B Y, Ren J, Yang H J and Wang B H arXiv:physics/0607001
[23] Wang W X, Lin B Y, Tang C L and Chen G R 2007 Eur.
Phys. J. B 60 529
[24] Ebbinghaus H 1885 Memory: A Contribution to Experimental Psychology (New York: Columbia University Press)