
Debunking Rumors in Networks
Luca P. Merlino and Nicole Tabasso∗
February 1, 2017
Preliminary draft: please do not circulate
Abstract
We study the diffusion of a true message and a false message when
agents can debunk the false rumor through costly verification. We find
that the diffusion of the false rumor can foster the diffusion of the true
message by triggering verification. As a result, a decrease in the cost
of verification can imply a decrease of the diffusion of the true message
via a decrease of the diffusion of the false rumor. However, the relative
prevalence of the truth vs. the rumor increases when agents socialize
more.
JEL classifications: D83, D85.
Keywords: Social Networks, Rumors, Verification.
∗ Merlino: Centre d’Economie de la Sorbonne, Université Paris 1 Panthéon-Sorbonne, Paris, France, [email protected]. Tabasso: University of Surrey, School of Economics, Guildford, GU2 7XH, UK, [email protected]. Many thanks for valuable comments, discussions and suggestions go to Francis Bloch and Tomás Rodríguez-Barraquer, as well as seminar participants at Universitat Autònoma de Barcelona, CTN 2016 and SAET 2016.
1 Introduction
It has long been recognized that social networks, both on- and off-line, play a
crucial role in disseminating information. In many instances, such as the link
between HIV and AIDS, the spread of Ebola, the severity of global warming,
and many more, both proven scientific content and unsubstantiated rumors and conspiracy theories circulate simultaneously in a social network. The virality
of false rumors can have significant personal and social costs. Take, as an
example, the spread of HIV. Despite long-standing research into the disease,
Chigwedere et al. (2008) and Nattrass (2008) estimated that hostility against AIDS science and antiretroviral drugs was responsible for around 350,000 deaths in South Africa. Similarly, a survey of 500 African Americans conducted by Bogart and Thorburn (2005) found that stronger belief in conspiracy theories about HIV/AIDS was associated with less consistent use of condoms.
Given the potential costs of believing false rumors, an important question is not simply how a piece of information spreads on a network. Rather, we need to know how a false rumor (henceforth, rumor) and the truth interact, and how their prevalence and interaction depend on the network. The increase in on-line communication, in which content is user-provided and communication costs are extremely low, might particularly foster the virality of rumors and conspiracy theories.
To address these questions, we study the diffusion of two distinct pieces of information (the truth and the rumor) and the impact of agents' choice to verify information on their diffusion. In particular, we use the Susceptible-Infected-Susceptible (SIS) framework that was initially developed in the context of epidemiology. In recent years, it has also been applied in Economics, e.g., in Jackson and Rogers (2007), López-Pintado (2008), Jackson and Yariv (2010), and Galeotti and Rogers (2013a,b). This framework is very flexible, and by focusing on a mean-field approximation of the social network it has the advantage of being highly tractable.
We find that the proportion of people informed of the rumor is decreasing
in the share of agents who verify, but the proportion who are informed of the
truth depends non-monotonically on this share. As a consequence of this, the
verification share which would eradicate the rumor entirely is strictly larger
than the share which maximizes the diffusion of the truth. In fact, for the
diffusion of the truth it is beneficial if the rumor circulates.
Whether or not to verify a piece of information is an individual choice. In our model, agents can calculate the probability that a message they receive is a rumor. They verify if the expected utility from knowing the truth is higher than the cost of verification. We find that, unless verification is free, rumors will never be eradicated completely. This result is in line with the observed longevity of many rumors (e.g., about the iron content of spinach). Furthermore, under endogenous verification, it is not at all clear that increases in on-line communication do indeed foster rumors. We show that improvements in communication technology might favor the diffusion of rumors more than they favor the diffusion of the truth. However, this is counteracted by more people verifying the information they obtain, to the extent that, for any given verification cost, the ratio of rumor to truth prevalence stays constant. If we assume that the increase in on-line communication has also lowered verification costs, which is likely to be the case, this implies that the prevalence of the truth might have gone down, while the ratio of rumor to truth in the network always decreases. Thus, our findings suggest that increases in on-line communication have in fact improved the ratio of truth relative to rumors circulating in a network.
As we are interested in the survival and prevalence of both the truth and a rumor, we introduce three novel aspects into this framework: (i) there are two messages that can be transmitted, the truth (0) and a rumor (1); (ii) agents have the choice to verify the information they receive, i.e., to debunk a rumor (verification is costly, but it reveals the truth with certainty, and it is not certifiable); (iii) agents are biased: a fraction x of the population is
biased towards the truth, while the remaining fraction 1−x is biased towards
the rumor.
The fact that agents are biased has a twofold role in our analysis. First,
agents that do not verify a message believe it only when it is in line with
their bias. Second, when agents verify a message, they will spread the truth.
Hence, while agents are biased, they seek the truth.
To deliver the main idea of our analysis in a framework that is as simple as possible, it is convenient to assume that an exogenous fraction of each of the two biased groups in the population verifies any message they receive. In
this case, we find that the truth exhibits both a larger prevalence than the
rumor (i.e., there are always more people informed of the truth than of the
rumor), and it exhibits a positive prevalence for a wider range of parameters
than the rumor. In fact, whenever the rumor exhibits a positive steady
state, so does the truth. This is a direct consequence of verification. For any
positive verification, some agents who hear the rumor verify the content of
the message and become aware of the truth as a consequence of hearing the
rumor. Thus, as long as the rumor disseminates, so does the truth.
However, there exist networks in which only the truth survives. Quite
intuitively, any parameter that increases the diffusion rate of information
will have a positive impact on the prevalence of both the rumor and the
truth. In this sense, increases in on-line communication and social networks
foster the diffusion of rumors as well as the truth.
Even if the rumor “survives” on the network, increases in verification
will reduce its prevalence. The relationship between verification and the
prevalence of the truth, on the other hand, is more complicated. Indeed, an
increase in verification has two opposing effects on the prevalence of the truth:
(i) If more agents verify, this leads to more agents knowing the truth; (ii) as
more agents verify, the prevalence of the rumor decreases. But due to the
possibility of verification, a large prevalence of the rumor actually increases
the prevalence of the truth as well. Thus, a reduction in the prevalence of
the rumor is harmful to the prevalence of the truth. When this second effect
is strong enough, an increase in verification might harm the prevalence
of the truth. In particular, this is the case when a lot of people verify, so that
the prevalence of the truth has an inverted-U relationship with the fraction
of people who verify.
This prediction is confirmed when we allow agents to decide whether or not to verify a message they receive, depending on the likelihood that the message is correct. We assume that agents who believe a correct message, i.e., the truth, obtain a utility of 1, while agents who believe the rumor receive 0. Of course, agents do not know ex ante which message is true. However, they can find this out by verifying the message they received at a cost of cv. Hence, an agent who verifies receives a utility of 1 − cv. Furthermore, agents know the environment, and so they can calculate the steady-state prevalence of a true and a false message. Finally, before verification, agents believe that the message they are biased towards is correct (otherwise they would not be biased towards it). Consequently, agents will only verify the message they receive if the probability that it is a false message is larger than cv; the proportion of agents who verify is then determined by equating the probability of hearing a rumor with cv.
We find that, as expected, a decrease in the cost of verification increases verification. However, with costly verification the rumor will always survive, since it is never an equilibrium for everyone to verify. Endogenous debunking will entirely eradicate a rumor only if verification costs are exactly zero. With exogenous verification, increases in on-line communication are expected to increase a rumor's prevalence. If verification is determined endogenously, this prediction becomes much less clear-cut, for two distinct reasons. First of all, the rise in on-line communication has almost certainly led to a decrease in the cost of verification. This implies that more agents will verify, which reduces the prevalence of the rumor. Second, increases in communication technology increase the likelihood that a message an agent hears is a rumor; this actually leads to more people verifying. A priori, it is therefore unclear whether on-line communication truly increases rumor prevalence. At the very least, we can show that as long as it reduces cv, it reduces the ratio of rumor to truth prevalence. Finally, due to the non-monotonic effect of α on the prevalence of the truth, it is not necessarily the case that reduced verification costs will increase the prevalence of the truth.
The last step in our analysis is to allow agents to decide how much to socialize. While in the usual SIS framework agents have an exogenous number of meetings with their neighbors in each period, we allow them to decide the number of meetings they will have in each period, given that they have to pay a cost ck for each meeting. We assume this choice is taken at the beginning of the game, i.e., at time 0, taking into account the steady-state prevalence of a true and a false message and the probability with which they will verify a message once they hear one. The equilibria are symmetric, and furthermore, there is only one symmetric interior equilibrium level of socialization that is stable. In such an equilibrium, socialization and verification are positively associated: the more agents interact, due, for example, to a reduction of the cost of socializing, the more they will verify later on. As a result, an increase in socialization improves the relative prevalence of the truth with respect to the rumor.
The paper proceeds as follows. Section 2 discusses the relevant literature.
Section 3 introduces the simplified model with exogenous verification that
shows the main forces behind the main results of our analysis. In Section
4, we present the main model in which agents decide whether to verify the
message they receive. In Section 5, we study how the choice of how much
to socialize interacts with verification. Section 6 concludes. All proofs are in
the appendix.
2 Related Literature
In a recent paper, Bloch, Demange and Kranton (2014) study the question of why rational agents who seek the truth diffuse possibly false information to their social contacts. They find that, when there are biased agents that try to diffuse false information, unbiased agents might serve as a filter against the diffusion of such rumors by blocking messages coming from parts of the network with many biased agents. As a result, the diffusion of the rumor can benefit if biased agents are not concentrated in high numbers in a given part of the network.
Our paper is complementary to theirs. While they focus on the decision of agents on whether to transmit a message knowing the shape of the network and the bias of agents, in our paper we study strategic debunking of false information. To do so, we depart from the standard Bayesian framework, adopting a mean-field approach according to which agents decide whether or not to verify a message depending on the population prevalence of true and false messages.
Furthermore, we use a different notion of biased agents: in our paper, all agents are biased in the sense that there is a state of the world that they a priori think is correct. However, everybody seeks the truth, so that they will transmit a false message only when they incorrectly believe it is correct, behaving as the unbiased agents in Bloch, Demange and Kranton (2014).
In particular, we use the SIS framework, which was initially developed in the context of epidemiology and has in recent years been applied in Economics, e.g., in Jackson and Rogers (2007), López-Pintado (2008), and Jackson and Yariv (2010). This framework is very flexible, and by focusing on a mean-field approximation of the social network it has the advantage of being highly tractable.
The most related papers in this stream of literature are Galeotti and
Rogers (2013a,b), who study strategic immunization against a disease in an
SIS framework. There are two major differences though between our paper
and theirs. First, while we model the diffusion of the rumor in a similar
fashion as they do, we are interested in how the diffusion of this false message
interacts with that of the true message. Secondly, while immunization against
a disease must be done before the contact with the virus, in the context of
the diffusion of messages, verification to debunk false rumors takes place only
once the message is received.
Another closely related paper is Goyal and Vigier (2015), who study the decision of agents to protect against a disease either by investing in immunization or by reducing social interactions. They find that individuals who invest in protection choose to interact more relative to those who do not invest in protection. We find a similar result when studying verification in a setting where, beyond a bad state, i.e., the disease or the rumor, socialization also helps in diffusing the good state, i.e., the truth.
To conclude, none of these papers includes the possibility of verifying information once a message is received. Instead, the strategic decision
of agents to verify a message they receive constitutes a major part of our
analysis.
3 A Simplified Model

3.1 Information Transmission
We consider an infinite population of mass 1, whose members are indexed
by i. Each agent i represents a node on a network. Time is continuous, and
a link between two agents i and j signifies a meeting between the two. The
network is realized every period, such that a link between i and j exists only
at the specific time t. Each agent i has k meetings at t, also denoted the
degree of the agent, which is constant over time. Formally, we model the
mean-field approximation of the system.
Agents are of one of two types t ∈ {0, 1}. There exist two messages
m ∈ {0, 1} that diffuse simultaneously on the network, and types determine
informational biases. We assume that mass x ∈ [0, 1] of the population are
of type t = 0 and mass 1 − x are of type t = 1. Except for their informational
biases, agents of the two types are identical.
Agents that are unaware of either m are in state S (Susceptible), while
agents who have heard and believed a message m are in state I (Infected ).
Agents transition between these states over time either when an agent in
state S becomes informed of—and believes—a message m, or when an agent
in state I forgets m. We denote by ν the per contact transmission rate of
m and by δ the recovery rate, i.e., the rate at which m is forgotten. Neither
transmission nor forgetting are affected by the values of t or m. However,
agents’ types matter for whether or not they believe a message they receive.
In particular, we assume that an agent of type t who receives message m will
believe m either in the case that (i) he verifies m and finds it to be true, or
(ii) he does not verify and m = t. In effect, while agents’ types restrict the messages to which they are susceptible, this restriction is overcome through verification.¹
While this assumption might seem extreme, we introduce it here as a shortcut to study a model that has the same features as the full model with endogenous verification that we introduce in Section 4. Indeed, when agents decide whether or not to verify, they choose not to verify when they think their prior belief is likely to be correct. Hence, they will not spread a message that is not in line with their initial belief, as they believe it to be unfounded.
The true state of the world is Φ = 0, which is unknown to the agents. Given Φ = 0, throughout the paper we refer to m = 0 as the “truth” and to m = 1 as the “rumor”. For now, assume that a fixed proportion αt of group t ∈ {0, 1} verifies a message upon receiving it.² Formally, we define ρ_0^{α0} as the proportion of type 0 agents who believe message 0 after having verified it, and ρ_0^{1−α0} as the proportion of type 0 agents who believe message 0 without having verified it. We define ρ_0^{α1} and ρ_1^{1−α1} equivalently for type 1 agents. Note that, due to susceptibility to and verification of messages, it is the case that ρ_1^{α0} = ρ_1^{1−α0} = ρ_0^{1−α1} = ρ_1^{α1} = 0.

¹ Incidentally, our assumption on beliefs also implies that, as agents either verify m or ignore m ≠ t, all agents in state I believe only one message. No agent holds conflicting beliefs, which appears sensible.
The probability that a randomly chosen contact of an agent believes message m ∈ {0, 1}, θm, at time t is then

θ0,t = x[α0 ρ_{0,t}^{α0} + (1 − α0)ρ_{0,t}^{1−α0}] + α1(1 − x)ρ_{0,t}^{α1},    (1)
θ1,t = (1 − α1)(1 − x)ρ_{1,t}^{1−α1}.    (2)
We assume that the per contact transmission rate, ν, is sufficiently small that an agent in state S becomes aware of message m at rate kνθm,t through meeting k neighbors. The system of information transmission exhibits a steady-state if the following equations are satisfied:

∂ρ_{0,t}^{α0}/∂t = xα0(1 − ρ_{0,t}^{α0})kν[θ0,t + θ1,t] − xα0 ρ_{0,t}^{α0} δ = 0,    (3)
∂ρ_{0,t}^{1−α0}/∂t = x(1 − α0)(1 − ρ_{0,t}^{1−α0})kνθ0,t − x(1 − α0)ρ_{0,t}^{1−α0} δ = 0,    (4)
∂ρ_{0,t}^{α1}/∂t = (1 − x)α1(1 − ρ_{0,t}^{α1})kν[θ0,t + θ1,t] − (1 − x)α1 ρ_{0,t}^{α1} δ = 0,    (5)
∂ρ_{1,t}^{1−α1}/∂t = (1 − x)(1 − α1)(1 − ρ_{1,t}^{1−α1})kνθ1,t − (1 − x)(1 − α1)ρ_{1,t}^{1−α1} δ = 0.    (6)
The overall prevalence of the truth in the population at time t is given by
θ0,t and the prevalence of the rumor by θ1,t .
² We assume throughout the paper that verification is complete, i.e., if an agent chooses to verify a message, he finds out that Φ = 0 for sure. While complete verifiability is not always possible for all rumors, it seems to be the most reasonable approximation for many of the most puzzlingly long-lived rumors. For rumors that do not oppose a verifiable truth, our results will provide lower bounds on the pervasiveness of the rumor.
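Before turning to the steady states, the dynamics (3)-(6) can be checked numerically. The following minimal sketch is not part of the original analysis: it integrates the four laws of motion by forward Euler under illustrative parameter values (all numbers here are assumptions chosen for the example; the group masses xα0, etc., multiply both sides of each equation and cancel, so the per-group shares are tracked directly) and recovers θ0 and θ1 from (1)-(2).

```python
# Minimal numerical sketch of the dynamics (3)-(6); illustrative parameters.
import numpy as np

x, a0, a1 = 0.25, 0.1, 0.3      # mass of type 0; verifier shares alpha_0, alpha_1
k, nu, delta = 10, 0.05, 0.1    # degree, transmission rate, forgetting rate

def prevalences(rho):
    """Equations (1)-(2): theta_0 and theta_1 implied by the group states."""
    rho_v0, rho_n0, rho_v1, rho_n1 = rho
    theta0 = x * (a0 * rho_v0 + (1 - a0) * rho_n0) + (1 - x) * a1 * rho_v1
    theta1 = (1 - x) * (1 - a1) * rho_n1
    return theta0, theta1

# states: rho_0^{alpha_0}, rho_0^{1-alpha_0}, rho_0^{alpha_1}, rho_1^{1-alpha_1}
rho = np.full(4, 0.01)          # small initial seeding of both messages
dt = 0.01
for _ in range(200_000):        # forward-Euler integration
    t0, t1 = prevalences(rho)
    drho = np.array([
        (1 - rho[0]) * k * nu * (t0 + t1) - rho[0] * delta,  # eq. (3)
        (1 - rho[1]) * k * nu * t0 - rho[1] * delta,         # eq. (4)
        (1 - rho[2]) * k * nu * (t0 + t1) - rho[2] * delta,  # eq. (5)
        (1 - rho[3]) * k * nu * t1 - rho[3] * delta,         # eq. (6)
    ])
    rho += dt * drho

print("steady-state (theta_0, theta_1):", prevalences(rho))
```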
3.2 Steady-States
We are interested in the steady state of the system, that is, the situation in
which the prevalence of the truth and of the rumor are constant over time. We remove the subscript t to indicate the steady-state value of variables.
Solving the system of equations (3)-(6), it becomes apparent that the effects of ν, k, and δ can be summarized by a unique parameter, which we define as λ = kν/δ. We obtain the following conditions for a steady-state:

ρ_0^{α0} = ρ_0^{α1} = λ[θ0 + θ1]/(1 + λ[θ0 + θ1]),    (7)
ρ_0^{1−α0} = λθ0/(1 + λθ0),    (8)
ρ_1^{1−α1} = λθ1/(1 + λθ1).    (9)
Given these conditions and the definitions of θ0 and θ1 , the following
remark is obtained trivially.
Remark 1 For any value of λ ≥ 0 there exists a steady-state in which θ0 =
θ1 = 0.
What is of interest are the existence and characteristics of one or more
positive steady-states for either m ∈ {0, 1}. In fact, from equations (9) and
(2), it is easy to find that a positive steady-state for θ1 is given by
θ1 = (1 − α1)(1 − x) − 1/λ,    (10)

which is larger than zero if and only if λ > 1/[(1 − α1)(1 − x)]. In fact, standard
arguments (see, e.g., Jackson (2008)) show that the value of θ1 in equation
(10) is the unique positive steady-state for any value of λ > 0, and that it is
globally stable.
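For completeness, the algebra behind equation (10): substituting the steady-state condition (9) into the definition (2) of θ1 and dividing through by θ1 > 0 (the positive steady state) gives

```latex
\theta_1 = (1-\alpha_1)(1-x)\,\rho_1^{1-\alpha_1}
         = (1-\alpha_1)(1-x)\,\frac{\lambda\theta_1}{1+\lambda\theta_1}
\;\Longrightarrow\;
1+\lambda\theta_1 = (1-\alpha_1)(1-x)\lambda
\;\Longrightarrow\;
\theta_1 = (1-\alpha_1)(1-x) - \frac{1}{\lambda}.
```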
Lemma 1 The rumor does not exhibit a positive steady-state if (1 − α1)(1 − x) ≤ 1/λ, i.e., if either a sufficient share of the population is biased against the rumor, or a sufficient share among those biased in favor of the rumor verify information. If a positive steady-state for rumor prevalence exists, its magnitude is decreasing in α1 and x, and increasing in λ. The prevalence of the rumor is affected neither by the prevalence of the truth, nor by the fraction of verifiers of type 0, α0.
Lemma 1 arises immediately from equation (10). Its most striking result
is that the prevalence of the rumor is entirely independent of the prevalence of
the truth. Thus, if the aim is to decrease rumor prevalence, Lemma 1 shows
that the only way to achieve this is through an increase in α1 specifically.³
Neither an increase in θ0 , nor in α0 , will have any impact on rumor prevalence.
Unsurprisingly, any improvement in communication or meeting technology
(by increasing ν or k, and thus λ) will increase the prevalence of the rumor.
This is probably the most direct channel through which increases in online
communication are expected to increase the spread of rumors.
The steady-state of θ0 is slightly more complex than the one of θ1. Substituting equations (7) and (8) into the expression for θ0, we find that a steady-state for θ0 is a fixed point θ0 = H(θ0) of the following expression:

H(θ0) = x[α0λ(θ0 + θ1)/(1 + λ(θ0 + θ1)) + (1 − α0)λθ0/(1 + λθ0)] + (1 − x)α1λ(θ0 + θ1)/(1 + λ(θ0 + θ1)).    (11)
There are two conditions under which this expression simplifies considerably: (i) if the rumor did not exist and m = 0 was the only message diffusing on the network, and (ii) if neither group was able to verify messages, i.e., α0 = α1 = 0. In either of these cases, it is straightforward to solve equation (11) to find that the steady-state of θ0 is

θ0 = x + α1(1 − x) − 1/λ,    (12)

which is larger than zero if and only if λ > 1/[x + α1(1 − x)]. Equivalently to the steady-state of θ1, it can be shown that if θ0 > 0 exists, this is the unique positive steady-state, and it is globally stable. Also mirroring θ1 is the fact that θ0 is increasing in λ and in the fraction of the population that is susceptible to m = 0. Equations (10) and (12) are sufficient to establish the conditions under which only the truth survives on the network.

³ We assume that the proportion of types in the population is exogenous, and that λ is difficult to influence through policy decisions.
Lemma 2 The truth is the only message with a strictly positive prevalence in the network if and only if

1/[x + α1(1 − x)] < λ ≤ 1/[(1 − α1)(1 − x)].
A necessary condition for eradicating the rumor and simultaneously having a positive prevalence for the truth, by Lemma 2, is that 1/[x + α1(1 − x)] < 1/[(1 − α1)(1 − x)], or equivalently,

(1/2 − α1)/(1 − α1) < x.
This is not necessarily a very demanding condition; e.g., it is always satisfied if x > 0.5. Nevertheless, it is noteworthy that, depending on the distribution of types in the population and the fraction of type 1 agents who verify, the truth may only be able to survive under parameter values for which the rumor also survives. Consequently, in these networks eradication of the rumor simultaneously implies eradication of the truth.
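The thresholds above are easy to verify numerically. The sketch below computes θ1 from the closed form (10) and θ0 by iterating the fixed-point map H of equation (11); the parameter values are illustrative assumptions chosen to lie in the Lemma 2 region, where only the truth should survive.

```python
# theta_1 from the closed form (10); theta_0 as the fixed point of H in (11),
# found by simple iteration. Parameters are illustrative assumptions.
def steady_states(lam, x, a0, a1):
    theta1 = max((1 - a1) * (1 - x) - 1 / lam, 0.0)     # equation (10)
    theta0 = 0.5                                        # interior starting guess
    for _ in range(10_000):                             # iterate theta0 = H(theta0)
        both = lam * (theta0 + theta1) / (1 + lam * (theta0 + theta1))
        theta0 = (x * (a0 * both + (1 - a0) * lam * theta0 / (1 + lam * theta0))
                  + (1 - x) * a1 * both)                # equation (11)
    return theta0, theta1

# Here 1/(x + a1(1-x)) ~ 1.32 < lam = 2 <= 1/((1-a1)(1-x)) ~ 4.17, so the
# rumor dies (theta_1 = 0) while theta_0 = x + a1(1-x) - 1/lam = 0.26 by (12).
print(steady_states(lam=2.0, x=0.6, a0=0.2, a1=0.4))
```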
3.3 Truth and Rumor Interaction
We know from equation (10) that rumor prevalence is independent of the
prevalence of the truth. However, the dependence of equation (11) on θ1
implies that the reverse does not hold. We now analyze the steady-state(s)
of θ0 if λ > 1/[(1 − α1)(1 − x)], i.e., if the rumor does exhibit a positive steady-state, and αt > 0 for at least one group.
Proposition 1 Let λ > 1/[(1 − α1)(1 − x)] and αt > 0 for at least one t. There exists a unique, globally stable, positive steady-state θ0 for any x and all permissible values of αt. There does not exist a steady-state in which θ0 = 0.
The most important result of Proposition 1 is that existence of a rumor
can create truth if at least some verification takes place. In particular, the
truth will exhibit a positive prevalence even if λ ≤ 1/[x + α1(1 − x)]. Indeed, the
moment the rumor exhibits a positive prevalence some agents will verify
m = 1 and become aware of the truth exactly because they heard the rumor.
Thus, it is possible that, far from hurting the diffusion of the truth, the
rumor in fact benefits its diffusion. Together, Lemma 2 and Proposition 1
show that it is possible that only the truth survives in the network, while it
is impossible that only the rumor survives.
The beneficial effect of the rumor that is highlighted in Proposition 1
relates to the existence of a positive steady-state for θ0 , rather than its magnitude. It is immediate from equation (11) that H(θ0 ) is increasing in θ1
everywhere. Thus, a higher rumor prevalence, ceteris paribus, always increases the prevalence of the truth. However, this does not seem to be the
correct effect to consider. More to the point is the question of how the level of verification in the population affects θ0, which we now address.
Proposition 2 The prevalence of the truth is always increasing in α0 but it
is first increasing and then decreasing in α1 .
The non-monotonic effect that α1 has on θ0 is illustrated in Figure 1
below.
The fact that any increase in α0 benefits the prevalence of the truth is
unsurprising. As more agents of type 0 verify the information they receive,
Figure 1: Steady-state prevalence of the truth, θ0 as a function of α1 for
λ = 2, x = 0.25, and α0 = 0.
the prevalence of the truth among these agents increases (without harming
the rumor prevalence in this group, which is already zero). And as the truth
exhibits a higher prevalence in this group, more agents of type 1 will hear
about it. For a fixed verification level α1 , this implies also a larger prevalence
of the truth among type 1 agents. The non-monotonic impact of α1 on θ0 is more intriguing and deserves closer attention. An increase in α1 has two effects on θ0. First of all, it has a positive effect, as it increases the pool of verifiers and reduces the proportion of the population that ignores m = 0 upon hearing it. But it also reduces θ1, and thus the chance that verifiers of type 0 become informed of the truth through hearing the rumor (thereby directly decreasing ρ_0^{α0} and ρ_0^{α1}). Which of these two effects dominates depends on the values of the other parameters of the process. As the negative effect of α1 works through ρ_0^{α0} and ρ_0^{α1}, it is larger the larger is α0, as then ρ_0^{α0} has a bigger share in θ0. Similarly, the larger is λ, the less reactive are ρ_0^{α0} and ρ_0^{α1} to changes in θ1. In fact, as shown in Appendix A, if λ is large enough, increases in α1 will always increase θ0. And the smaller is α1, the less impact changes in ρ_0^{α1} have on θ0. Finally, the effect of α1 on θ1 is given by −(1 − x), which implies that the larger is x, the less θ1 reacts to changes in α1.
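Figure 1 can be reproduced with the same fixed-point iteration used above, with the parameter values from the caption; the printed values trace out the inverted-U over the range of α1 for which the rumor survives.

```python
# theta_0 as a function of alpha_1 for lambda = 2, x = 0.25, alpha_0 = 0.
# The rumor survives for alpha_1 < 1 - 1/(lambda(1-x)) = 1/3 here.
def theta0_of(a1, lam=2.0, x=0.25, a0=0.0):
    theta1 = max((1 - a1) * (1 - x) - 1 / lam, 0.0)     # equation (10)
    theta0 = 0.5
    for _ in range(10_000):                             # fixed point of (11)
        both = lam * (theta0 + theta1) / (1 + lam * (theta0 + theta1))
        theta0 = (x * (a0 * both + (1 - a0) * lam * theta0 / (1 + lam * theta0))
                  + (1 - x) * a1 * both)
    return theta0

for a1 in (0.0, 0.1, 0.2, 0.25, 0.3, 0.33):
    print(f"alpha_1 = {a1:.2f}  ->  theta_0 = {theta0_of(a1):.4f}")
```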
In summary, a larger prevalence of the rumor may imply a larger prevalence of the truth. Since a rumor can create truth, an increase in the proportion of people who verify can actually lead to fewer people knowing the truth.
It is also important to distinguish the effects that verification of group 0 and
of group 1 have on the prevalence of the rumor and of the truth. If more
agents biased in favor of the truth verify, this will increase the prevalence
of the truth, but leave the prevalence of the rumor unchanged. In contrast,
if more agents of type 1 verify, this will certainly decrease the prevalence of
the rumor, but it has an ambiguous effect on the prevalence of the truth.
In the fight against rumors, it seems intuitive that making verification easier should decrease the prevalence of the rumor and increase the prevalence
of the truth. Our results suggest that the effects of more verification are much more nuanced than this intuition implies. The question of how endogenous verification affects the prevalence of both messages is addressed next.
4 The Model with Endogenous Verification

4.1 Utilities
We now study the decision of agents on whether or not to verify a message they receive. In contrast to models of disease transmission (such as, e.g., Galeotti and Rogers (2013b)), verification decisions are not taken at the beginning of the diffusion process, but once a message m is received. We assume that for all agents, regardless of their type, believing the truth provides them with a utility of 1, while believing a rumor provides a utility of 0. At a fixed cost cv ∈ (0, 1), agents can verify any m they receive. We make the following assumptions about the agents.
Assumption 1 The transmission process of messages is in its steady-state.
Agents are aware of the steady-state proportions of truth, θ0 , and the rumor,
θ1 , and take these as given.
Assumption 2 Agents who receive either m become aware of the debate. An agent’s bias implies that, upon receiving either m, an agent of type t assigns probability θ0/(θ0 + θ1) to t being true.
1
Assumption 3 Agents are only able to transmit messages that they have
themselves received, or information they received through verification.
Assumption 1 is equivalent to assuming that agents know the framework they live in and are hence able to calculate the prevalence of a true and a false message in steady state. Since we wish to focus on the long-run prevalence of the rumor and the truth, focusing on the steady-state of the system seems appropriate. As the population is of mass 1, agents correctly believe that their personal verification decision does not affect the values of θ0 and θ1.
Assumption 2 introduces agents’ biases into their verification decision. A completely unbiased agent aware of the transmission process would assign probability θ0/(θ0 + θ1) to the received message m being true, regardless of the value of m. Hence, we can interpret the bias of an agent as the fact that, among two states of the world, she believes that one of them is true unless she is presented with hard evidence of the contrary. Since by Assumption 1 agents are able to calculate the relative prevalence of the truth, before they verify, they assign as the probability of their preferred state t being correct the prevalence of the truth in the economy.
Finally, Assumption 3 implies that agents cannot create new information
themselves.
Under Assumptions 1-3, an agent who receives either m and does not verify this message receives an expected utility of 1 − θ1/(θ0 + θ1). If m = t, the agent will transmit m, while if m ≠ t, the agent ignores it. On the other hand, an agent who verifies receives a utility of 1 − cv, and will transmit the verified message.
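These payoffs immediately deliver the verification threshold that appears in Proposition 3 below; in the notation just introduced,

```latex
\underbrace{1-c_v}_{\text{verify}}
\;>\;
\underbrace{1-\frac{\theta_1}{\theta_0+\theta_1}}_{\text{do not verify}}
\quad\Longleftrightarrow\quad
\frac{\theta_1}{\theta_0+\theta_1} \;>\; c_v .
```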
4.2 Equilibrium
We first note the general condition for verification, and that for sufficiently
low λ, no agent verifies.
Proposition 3 Agents of either group verify the message they receive if θ1/(θ0 + θ1) > cv. If λ ≤ 1/(1 − x), no verification occurs.
It is easy to see that when the rumor does not exhibit a positive prevalence even if nobody of type 1 verifies, there exist no incentives for verification. The more interesting case to consider is one where λ > 1/(1 − x), i.e., where in the absence of any verification the rumor exhibits a positive prevalence. We restrict the rest of our analysis to this case.
Proposition 4 The unique, consistent, equilibrium level of verification in the network is α0 = α1 = α∗, and it is given by

θ1(α∗)/(θ0(α∗) + θ1(α∗)) ≤ cv,

with equality if α∗ > 0.
The result that for a high enough cv no agent verifies seems intuitive. Proposition 4 affords various additional insights, particularly regarding the question of whether increases in online communication foster rumors. The two variables of the process which have, most likely, been influenced by increased online communication are λ and cv. Arguably, online communication is an improvement in information technology that is expected to increase λ, while at the same time making it easier to research the sources of a message and background information, thus reducing cv. Proposition 5 summarizes the impact of these changes.
Proposition 5 When λ increases:

• Both θ0 and θ1 increase, ceteris paribus.
• The ratio of truth to rumor prevalence, θ0/θ1, remains unaffected for all interior values of α∗.

When cv decreases:

• α∗ increases, θ1 decreases, and θ0 may increase or decrease.
• The ratio of truth to rumor prevalence, θ0/θ1, increases.
• θ1 > 0 unless cv = 0.
For fixed verification levels, both θ0 and θ1 are increasing in λ. However, α∗ is endogenously determined by the condition in Proposition 4, which can be rewritten as

(θ0(α∗)/θ1(α∗) + 1)^{−1} = cv.    (13)

Consequently, for any cv > 0 and α∗ > 0, the ratio of truth to rumor prevalence is pinned down by the cost of verification only. As an increase in λ increases θ0 and θ1, agents adjust their verification decisions to keep the ratio constant. The result is that advances in communication technology will foster the prevalence of rumors, but not their relative prevalence.
As θ1/(θ0 + θ1) is decreasing in the unique symmetric equilibrium level of α, a decrease in verification costs implies that verification increases, ceteris paribus. Importantly, given equation (13), a decrease in verification costs implies that the ratio of truth to rumor prevalence increases. Insofar as online communication technology lowers verification costs, it might lead to a decreased prevalence of both the rumor and the truth, but it will always harm the prevalence of the rumor relatively more. This notwithstanding, it is important to note that even for an infinitesimally small cv, the rumor will exhibit a positive prevalence. It is simply not an equilibrium to reduce rumor prevalence to zero.
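Proposition 4 also suggests a simple way to compute α∗ numerically. The sketch below, under illustrative parameter values, finds α∗ by bisection, using the fact, proved in the appendix, that θ1/(θ0 + θ1) is decreasing in the common verification rate α; the steady states are computed as in Section 3 with α0 = α1 = α.

```python
# Equilibrium verification alpha* of Proposition 4: bisect on alpha so that
# theta_1/(theta_0 + theta_1) = c_v. Parameters are illustrative assumptions.
def prevalences(alpha, lam, x):
    theta1 = max((1 - alpha) * (1 - x) - 1 / lam, 0.0)  # equation (10)
    theta0 = 0.5
    for _ in range(5_000):                              # fixed point of (11)
        both = lam * (theta0 + theta1) / (1 + lam * (theta0 + theta1))
        theta0 = (x * (alpha * both
                       + (1 - alpha) * lam * theta0 / (1 + lam * theta0))
                  + (1 - x) * alpha * both)
    return theta0, theta1

def equilibrium_alpha(c_v, lam=5.0, x=0.25):
    lo, hi = 0.0, 1.0
    for _ in range(40):                 # bisection: rumor share falls in alpha
        mid = (lo + hi) / 2
        t0, t1 = prevalences(mid, lam, x)
        share = t1 / (t0 + t1) if t0 + t1 > 0 else 0.0
        if share > c_v:
            lo = mid                    # too many rumors around: verify more
        else:
            hi = mid
    return (lo + hi) / 2

for c_v in (0.05, 0.10, 0.20):
    a = equilibrium_alpha(c_v)
    t0, t1 = prevalences(a, lam=5.0, x=0.25)
    print(f"c_v = {c_v:.2f}: alpha* = {a:.3f}, "
          f"theta_1/(theta_0+theta_1) = {t1 / (t0 + t1):.3f}")
```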
5 Endogenous Network
At time t = 0, before the communication process unfolds, agents decide how much to invest in their social connections. In other words, they decide how many (costly) meetings to have in each moment in time in the future. We assume that when this decision is taken, agents have not yet decided whether or not they will verify a message they receive, since this decision is taken only once a message is received. However, they are aware of the probability with which they will verify, that is, α, because this probability depends on the other parameters of the model and on the steady-state values of the endogenous variables, which they are able to calculate.⁴

⁴ In Appendix B, we develop an alternative model in which agents know whether or not they will be verifiers later on. In this alternative specification, non-verifiers in equilibrium socialize more than verifiers.
Remember that ρ_0^{α0} and ρ_0^{1−α0} are the proportions of agents in the different states, so they can also be interpreted as the fraction of time that an agent will, on average, spend in each state during her lifetime. Remember furthermore that an agent believes that her own bias is correct, but that there is a probability that this is not the case, proxied by the relative prevalence of the truth versus the rumor.
Finally, since agents are infinitesimal, they cannot change by themselves
the steady state prevalence of the truth and of the rumor. Hence, they cannot
affect θ0 and θ1 . However, they can change their own meeting rate.
Hence, at time 0 agents choose their socialization effort ki by maximizing their steady-state flow of utility, given by

α · νki[θ0 + θ1]/(δ + νki[θ0 + θ1]) × (1 − cv) + (1 − α) · νkiθ0/(δ + νkiθ0) × θ0/(θ0 + θ1) − ckki.    (14)
More socialization implies a greater chance of coming into contact with the truth. For verifiers, this happens for any message they hear about, because any message triggers verification and hence knowledge of the truth. On the contrary, non-verifiers become aware of the truth only when they hear the correct message, which happens with probability θ0/(θ0 + θ1). Using (14), we derive the following Proposition.
Proposition 6 If ck is small enough, there exist positive equilibrium levels of socialization k∗ given by the solutions of the expression

α(θ0 + θ1)/(δ + νk∗(θ0 + θ1))² + (1 − α)θ0/(δ + θ0νk∗)² = ck/((1 − cv)νδ).    (15)

Furthermore, there exists a threshold t such that for all δ/ν > t, there are two positive equilibrium levels of socialization k∗ and k∗∗ such that k∗ > k∗∗ and α(k∗) > α(k∗∗).
The intuition behind this equilibrium condition is quite straightforward once we realize that, in equilibrium, cv = θ1/(θ0 + θ1) (Proposition 4). The effect of socialization is to increase the meeting opportunities with people that spread information. For verifiers, any meeting leads to knowing the truth, while non-verifiers need to meet someone spreading message 0.
The LHS of condition (15) represents the marginal benefits of socialization, which are depicted in Figure 2. If the rate δ at which people forget information is big enough relative to the communication probability ν, the marginal benefits of socialization are first increasing and then decreasing in socialization: at low levels of socialization, additional meetings also raise the steady-state prevalence of both messages, and hence the value of each meeting, while at high levels an agent is already informed most of the time, so additional meetings add little. The RHS of condition (15) is instead represented by a horizontal line whose height is increasing in ck. As a result, there exist two interior levels of socialization.
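Condition (15) can be visualized numerically. The sketch below, using the parameter values from the caption of Figure 2, tabulates the marginal benefit (LHS) against the flat marginal cost (RHS). Note that it holds α fixed at an assumed value (in the model, α is determined endogenously), so it only illustrates the shape of the marginal benefit curve.

```python
# Sketch of condition (15): marginal benefit of socialization vs. marginal
# cost. Parameters from the Figure 2 caption; alpha = 0.3 is an assumed,
# fixed value. theta_0 and theta_1 are recomputed at each k via
# lambda = k*nu/delta, as in Section 3.
nu, delta, x, alpha, c_v, c_k = 0.05, 0.1, 0.05, 0.3, 0.03, 0.01

def steady(lam):
    t1 = max((1 - alpha) * (1 - x) - 1 / lam, 0.0)      # equation (10)
    t0 = 0.5
    for _ in range(5_000):                              # fixed point of (11)
        both = lam * (t0 + t1) / (1 + lam * (t0 + t1))
        t0 = (x * (alpha * both + (1 - alpha) * lam * t0 / (1 + lam * t0))
              + (1 - x) * alpha * both)
    return t0, t1

rhs = c_k / ((1 - c_v) * nu * delta)                    # RHS of (15)
for k in range(1, 21):
    t0, t1 = steady(k * nu / delta)
    lhs = (alpha * (t0 + t1) / (delta + nu * k * (t0 + t1)) ** 2
           + (1 - alpha) * t0 / (delta + nu * k * t0) ** 2)
    marker = "*" if lhs > rhs else " "                  # * where MB exceeds MC
    print(f"k = {k:2d}: MB = {lhs:8.4f} {marker} MC = {rhs:.4f}")
```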
By looking at Figure 2, however, it is immediately apparent that when there are two positive equilibrium levels of socialization, only the bigger of the two is stable. Indeed, if there is a small perturbation away from the lower positive equilibrium level of socialization, the economy would converge either
to an empty network or to the higher level of socialization. These facts are
summarized in the following corollary.
Figure 2: Equilibrium level of socialization, k, for cv = .03, x = .05, ck = .01, δ = .1 and ν = .05.
Corollary 1 When there are two positive equilibrium levels of socialization
k ∗ and k ∗∗ such that k ∗ > k ∗∗ , only k ∗ is stable.
Hence, when we consider how the steady state is affected by changes in the parameters of the model, it is natural to focus on the stable equilibrium, since the economy is likely to move away from the unstable one after any change.
By inspection of condition (15), we derive the following proposition.
Proposition 7 The stable interior equilibrium level of socialization k ∗ is
decreasing in the cost of socialization, ck .
In particular, notice that the LHS of condition (15) does not depend on ck, while the RHS is increasing in ck. This immediately implies that the higher equilibrium level of socialization, k∗, decreases when ck increases. Figure 3 depicts a graphical representation of this change.
Figure 3: Equilibrium level of socialization, k, for cv = .03, x = .05, ck = .01, δ = .1 and ν = .05.
6 Conclusions
In this paper we model how a true and a false message spread in a population of biased agents who seek the truth and can verify the trustworthiness of the messages they receive. Agents are biased in the sense that, if they do not verify the content of the message they receive, there is a state of the world they believe a priori to be correct.
In this framework, we find that the presence of a false message can create truth, in the sense that a larger prevalence of the rumor can lead to a larger prevalence of the truth. As a result, when agents are more likely to verify the message they receive because of a lower cost of verification, the prevalence of the truth might decrease.
Most importantly, in our model the relative prevalence of the true versus the false message increases when agents interact more frequently. As a result, our model predicts that the recent technological improvements that have made it cheaper to interact with others and to access information on the Web should have led to a lower circulation of rumors relative to the truth.
This result hinges on our assumption that, while every agent has a bias with respect to the state of the world they believe ex ante to be true, they all seek the ultimate truth and are ready to change their mind if presented with hard evidence that their belief is wrong. Hence, we rule out the possibility that people believe alternative facts. While this assumption seems a natural starting point, challenging it might be an interesting route for future research.
References
Bloch, Francis, Gabrielle Demange, and Rachel Kranton. 2014. “Rumors and Social Networks.” Paris School of Economics, Working paper
2014, 15(1).
Bogart, Laura M, and Sheryl Thorburn. 2005. “Are HIV/AIDS conspiracy beliefs a barrier to HIV prevention among African Americans?” JAIDS
Journal of Acquired Immune Deficiency Syndromes, 38(2): 213–218.
Chigwedere, Pride, George R Seage III, Sofia Gruskin, Tun-Hou
Lee, and Max Essex. 2008. “Estimating the lost benefits of antiretroviral
drug use in South Africa.” JAIDS Journal of Acquired Immune Deficiency
Syndromes, 49(4): 410–415.
Galeotti, Andrea, and Brian W Rogers. 2013a. “Diffusion and protection across a random graph.” mimeo.
Galeotti, Andrea, and Brian W Rogers. 2013b. “Strategic immunization and group structure.” American Economic Journal: Microeconomics,
5(2): 1–32.
Goyal, Sanjeev, and Adrien Vigier. 2015. “Interaction, protection and epidemics.” Journal of Public Economics, 125: 64–69.
Jackson, Matthew O. 2008. Social and Economic Networks. Princeton
University Press.
Jackson, Matthew O, and Brian W Rogers. 2007. “Relating network
structure to diffusion properties through stochastic dominance.” The BE
Journal of Theoretical Economics, 7(1).
Jackson, Matthew O, and Leeat Yariv. 2010. “Diffusion, strategic interaction, and social structure.” In Handbook of Social Economics, edited by J. Benhabib, A. Bisin and M. Jackson.
López-Pintado, Dunia. 2008. “Diffusion in complex social networks.” Games and Economic Behavior, 62(2): 573–590.
Nattrass, Nicoli. 2008. “AIDS and the scientific governance of medicine in post-apartheid South Africa.” African Affairs, 107(427): 157–176.
A Proofs
Proof of Proposition 1. Note first that H(θ0) is strictly concave, since it is a combination of strictly concave functions of θ0. Furthermore, H(x + α1(1 − x)) < x + α1(1 − x) and, for θ1 > 0, H(0) > 0. This implies that H(θ0) crosses θ0 from above. Hence there is a unique solution of H(θ0) = θ0. This concludes the proof of Proposition 1.
Proof of Proposition 2. The partial derivative of equation (11) with respect to α0 is always positive, which implies that an increase in α0 shifts H(θ0) upwards. An increase in α0 must therefore increase the steady-state value of θ0. This proves the first statement of Proposition 2. We are then interested in how changes in α1 affect the steady-state of H(θ0) as defined in equation (11) when θ1 is at its steady-state value, i.e., θ1 = (1 − x)(1 − α1) − 1/λ.
By defining G = θ0 − H(θ0), we can find the effect of α1 on θ0 as

dθ0/dα1 = −(−∂H/∂α1)/(1 − ∂H/∂θ0).

As H(θ0) is strictly concave in θ0, we know that at the steady-state ∂H(θ0)/∂θ0 < 1. Hence, dθ0/dα1 > 0 if and only if ∂H(θ0)/∂α1 > 0, where
∂H(θ0)/∂α1 = −(1 − x)λ/[1 + λ(θ0 + θ1)]² · [α0x + α1(1 − x)] + (1 − x) · λ(θ0 + θ1)/[1 + λ(θ0 + θ1)],    (A-1)
which is positive if and only if
(θ0 + θ1)[1 + λ(θ0 + θ1)] > α0x + α1(1 − x).    (A-2)
First of all, note that the left-hand side of equation (A-2) is larger than zero
for any strictly positive values of θ0 or θ1. However, for α1 = 0, the right-hand side of equation (A-2) simplifies to xα0. This means that by setting
either x or α0 equal to zero, condition (A-2) is satisfied for sure. Thus, as
the right-hand side is continuous in α1 , for low enough values of x, α0 , and
α1 , an increase in α1 will increase θ0 .
Second, consider equation (A-2) in the limit of α1 → 1 − 1/[λ(1 − x)]. In
this case θ1 → 0, and θ0 → 1 − 2/λ. We assume λ ≥ 2 to ensure θ0 ≥ 0 even
in the limit (again, the only case relevant to us). In this case, by taking
the appropriate limits in equation (A-2) and re-arranging, we find that θ0 is
increasing in α1 if and only if −4+3/λ > −(1−α0 )x. As λ ≥ 2 by assumption
and both α0 and x are between 0 and 1 by definition, this condition is never
satisfied. Consequently, as α1 approaches the limit such that the rumor dies
out, increases in α1 have a negative effect on θ0 .
Finally, equation (A-2) shows that for any values of α0, α1, and x, there exists a value of λ, which we denote λ̄, such that for all λ > λ̄ an increase in α1 will always increase θ0. This concludes the proof of Proposition 2.
Proof of Proposition 3. The first part is immediate from the fact that verification provides a utility of 1 − cv and non-verification a utility of 1 − θ1/(θ0 + θ1). The second part is established straightforwardly by noting that if λ ≤ 1/(1 − x), the rumor does not survive even if α1 = 0, and thus θ1 = 0. This concludes the proof of Proposition 3.
Proof of Proposition 4. To prove Proposition 4, we first prove that θ1/(θ0 + θ1) is strictly decreasing in α0. Defining f(θ0, α0) = H(θ0, α0) − θ0 and using the implicit function theorem, we get

∂θ0/∂α0 = −[∂f(θ0, α0)/∂α0]/[∂f(θ0, α0)/∂θ0].

The numerator of this expression is

∂f(θ0, α0)/∂α0 = x[λ(θ0 + θ1)/(1 + λ(θ0 + θ1)) − λθ0/(1 + λθ0)],

which is clearly positive since θ1 ≥ 0. Regarding the denominator: in the proof of Proposition 1 we proved that H(θ0, α0) is strictly concave in θ0 and that it crosses θ0 only once, and from above, determining the unique equilibrium value of θ0. This implies that f(θ0, α0) is decreasing in θ0, i.e., the denominator is negative. Hence, ∂θ0/∂α0 is positive, which in turn implies that θ1/(θ0 + θ1) is strictly decreasing in α0. Thus, for any given value of α1, there is a unique level of α0 that satisfies θ1/(θ0 + θ1) = cv.
Fix the value of α1 to α̂1 . If at this value α0 is such that θ1 /(θ0 + θ1 ) > cv ,
then all type 0 agents should verify and we should observe α0 = 1. However,
agents of type 1 themselves believe that their own type is Φ, which means that
all agents of type 1 will also verify and the true value of α1 = 1. At α1 = 1,
however, θ1 = 0, which contradicts our assumption that θ1 /(θ0 + θ1 ) > cv . It
also implies that α1 ≠ α̂1, so expectations about α1 are not consistent.
Next, assume that at α̂1 and α0 , θ1 /(θ0 + θ1 ) < cv . In this case, no agent has
an incentive to verify, which is an equilibrium in which expectations about
α1 are consistent if and only if α̂1 = α1∗ = α0∗ = 0.
Finally, if at α̂1 and α0 it is the case that θ1 /(θ0 + θ1 ) = cv , then agents
are indifferent between verification of m and no verification. There exists
a unique value α0∗ for which this holds. And this value is consistent with
expectations about the behavior of type 1 agents if and only if α̂1 = α1∗ = α0∗ .
This concludes the proof of Proposition 4.
Proof of Proposition 5. Let us now prove that θ1/(θ0 + θ1) is decreasing in α. To see this, first notice that this expression is decreasing in α if θ0/θ1 is increasing in α. Hence, we study the sign of

∂(θ0/θ1)/∂α = (1/θ1²)[(dθ0/dα0 + dθ0/dα1 + (dθ0/dθ1)(∂θ1/∂α1))θ1 − (∂θ1/∂α1)θ0].    (A-3)
Remember furthermore that θ1 is decreasing in α1, while θ0 is increasing in α0 but first increasing and then decreasing in α1. Furthermore, looking at condition (A-1), we can see that the derivative of θ0 with respect to α1 is increasing in θ1, so this effect is smaller (and possibly negative) as θ1 → 0. Calculating the expression in the square brackets in (A-3), using H(θ0, θ1, α0, α1) − θ0 = 0 to derive the effect on θ0 with the implicit function theorem, we get
[θ0(1 − x) − θ1(λ(x − 1)(α1 + x(α0 − α1))/(λ(θ0 + θ1) + 1)² + (1 − x)(θ0 + θ1)/(λ(θ0 + θ1) + 1) + x(λ(θ0 + θ1)/(λ(θ0 + θ1) + 1) − θ0/(θ0λ + 1)))]
× 1/{x[α0λ/(λ(θ0 + θ1) + 1) − α0λ²(θ0 + θ1)/(λ(θ0 + θ1) + 1)² + (1 − α0)λ/(θ0λ + 1) − (1 − α0)θ0λ²/(θ0λ + 1)²] + α1λ(1 − x)/(λ(θ0 + θ1) + 1) − α1λ²(1 − x)(θ0 + θ1)/(λ(θ0 + θ1) + 1)² − 1}.
Taking the limit of this expression for θ1 → 0, that is, α1 → 1 − 1/[λ(1 − x)],
the expression is equal to (1 − x)θ0 , which is positive. Hence, θ1 /(θ0 + θ1 ) is
decreasing in α.
Consider now the effect of a change in λ on α. We do this by proving that θ1/(θ0 + θ1) is increasing in λ for a fixed level of α. Then, the LHS of condition (13) would increase. Since we showed that the LHS is also decreasing in α, the equilibrium α has to increase as a result. Note that the sign of the derivative of θ1/(θ0 + θ1) with respect to λ depends on the sign of the following expression:

θ0 · (1/λ²) + θ1 · [α(θ0 + θ1)/(θ0λ + θ1λ + 1)² + x(1 − α)θ0/(θ0λ + 1)²]/[αλ²/(λ(θ0 + θ1) + 1)² + (1 − α)λx/(θ0λ + 1)² − 1],
where the first term is positive while the second term is negative (since the function H(·) − θ0 is decreasing in θ0 around the equilibrium value of θ0—i.e., the denominator is negative—and it is increasing in λ—i.e., the numerator is positive). Furthermore, both the first term and the numerator of the second term are increasing in θ0, while the denominator of the second term is decreasing in θ0. So the expression is at its smallest value for θ0 → 0, when its value is α(θ1)²/[(θ1λ + 1)²(αλ²/(λθ1 + 1)² + (1 − α)λx − 1)]. However, since θ1 < θ0, when θ0 → 0 also θ1 → 0, so that this lower bound is zero. Hence, θ1/(θ0 + θ1) and, consequently, α are increasing in λ.
Finally, we prove that θ0 and θ1 are increasing in λ. Indeed, θ0 is increasing in both λ and α, so, since α increases with λ, θ0 is also increasing in λ when α is endogenous. Furthermore, since by condition (13) θ1 is proportional to θ0 (as θ0/θ1 is constant), θ1 also increases with λ.
Proof of Proposition 6. Agents decide verification after receiving the message, so their verification choice depends only on the aggregate values of θ0 and θ1, which they cannot affect with their individual choice. Hence, given the maximization problem (14), the optimal level of socialization k should satisfy

νδα(θ0 + θ1)/(δ + νki[θ0 + θ1])² · (1 − cv) + (1 − α)νδθ0/(δ + νkiθ0)² · θ0/(θ0 + θ1) = ck.

By Proposition 4, cv = θ1/(θ0 + θ1), so that the condition becomes

α(θ0 + θ1)/(δ + νki[θ0 + θ1])² + (1 − α)θ0/(δ + νkiθ0)² = ck/((1 − cv)νδ).
This condition is the same for all agents, so that condition (15) results.
Since both θ0 and θ1 are strictly increasing in k (by the result on λ in Proposition 5), the first term of the LHS of condition (15) can be rewritten as αg(k)/[δ + νg(k)]², where g(k) is non-negative and increasing in k with g(0) = 0. The term is then positive, equal to 0 for k = 0, and tends to 0 for k → ∞. By studying the derivative of this term with respect to k, it is easy to see that it is first increasing and then decreasing in k if ν is small enough or δ is big enough. Furthermore, the derivative is positive for ν = 0 and negative for δ = 0.
The same holds true for the second term of the LHS of condition (15). Hence, the LHS of condition (15) is a convex combination of two terms with the same properties, so the LHS is always non-negative. Furthermore, the RHS of condition (15) is increasing in ck and equal to zero for ck = 0. We can then conclude that if ck is small enough, equilibrium existence is guaranteed. Furthermore, there exists a threshold t such that for all δ/ν > t, there exist two positive equilibrium levels of k. Note finally that k = 0 is always an equilibrium. This concludes the proof of Proposition 6.
Proof of Proposition 7. The first part of the statement follows by inspection of condition (15). For the second part, note that the choice of α implies that θ0 = (1 − cv)θ1/cv. Furthermore, θ1 = (1 − α)(1 − x) − δ/(kiν). Using these expressions, we can rewrite condition (15) as follows:

α((1 − cv)/cv)[(1 − α)(1 − x) − δ/(kiν)]/(δ + νki(1/cv)[(1 − α)(1 − x) − δ/(kiν)])² + (1 − α)((1 − cv)²/cv)[(1 − α)(1 − x) − δ/(kiν)]/(δ + νki((1 − cv)/cv)[(1 − α)(1 − x) − δ/(kiν)])² = ck/(νδ),

or, collecting terms,

(1 − cv)cv[(1 − α)(1 − x) − δ/(kiν)] × {α/(δcv + νki[(1 − α)(1 − x) − δ/(kiν)])² + (1 − α)(1 − cv)/(δcv + νki(1 − cv)[(1 − α)(1 − x) − δ/(kiν)])²} = ck/(νδ).
Since α is decreasing in cv for a fixed level of k by Proposition 5, the first
term is first increasing and then decreasing in cv , and equal to zero for cv
equal to zero or 1.
B Endogenous Network When Agents Know Whether They Will Verify
We now characterize the steady state of symmetric equilibria of the socialization efforts of the populations of verifiers and non-verifiers, denoted respectively by k^α and k^{1−α}. We need to adapt the law of motion of the model to account for the fact that verifiers and non-verifiers might have different socialization levels. In particular, the system of information transmission exhibits a steady-state if the following equations are satisfied:
∂ρ_0^{α0}/∂t = xα0(1 − ρ_0^{α0})k^αν[θ0 + θ1] − xα0 ρ_0^{α0}δ = 0,
∂ρ_0^{1−α0}/∂t = x(1 − α0)(1 − ρ_0^{1−α0})k^{1−α}νθ0 − x(1 − α0)ρ_0^{1−α0}δ = 0,
∂ρ_0^{α1}/∂t = (1 − x)α1(1 − ρ_0^{α1})k^αν[θ0 + θ1] − (1 − x)α1 ρ_0^{α1}δ = 0,
∂ρ_1^{1−α1}/∂t = (1 − x)(1 − α1)(1 − ρ_1^{1−α1})k^{1−α}νθ1 − (1 − x)(1 − α1)ρ_1^{1−α1}δ = 0.

The overall prevalence of the truth in the population is given by θ0 and the prevalence of the rumor by θ1.
α
1−α
Defining λV = k δ ν and λN V = k δ ν , we obtain the following conditions
31
for a steady-state:
kiα ν[θ0 + θ1 ]
,
δ + kiα ν[θ0 + θ1 ]
ki1−α νθ0
=
,
δ + ki1−α νθ0
ki1−α νθ1
=
.
δ + ki1−α νθ1
ρα0 0 = ρα0 1 =
(A-4)
0
ρ1−α
0
(A-5)
1
ρ1−α
1
(A-6)
These proportions of agents in the different states can also be interpreted as the fraction of time that an agent will, on average, spend in each state during her lifetime. Remember furthermore that an agent believes that her own bias is correct, but that there is a probability that this is not the case, proxied by the relative prevalence of the truth versus the rumor.
Finally, since agents are infinitesimal, they cannot change by themselves the steady-state prevalence of the truth and of the rumor. Hence, they cannot affect θ0 and θ1. However, they can change their own meeting rate.
An agent who is a verifier will be informed for a proportion ρ_0^{α0} of her time, during which she will enjoy a benefit of 1 − cv. Hence, a verifier chooses a level of socialization k^α that maximizes

(νki^α[θ0(k_{−i}) + θ1]/(δ + νki^α[θ0 + θ1])) × (1 − cv) − ckki^α,

where ck is the cost of establishing a contact. A non-verifier instead believes that she will spend a proportion ρ_0^{1−α0} of her time informed, and believes that this is the truth with probability θ0/(θ0 + θ1). Hence, a non-verifier chooses a level of socialization k^{1−α} that maximizes

(νki^{1−α}θ0/(δ + νki^{1−α}θ0)) × θ0/(θ0 + θ1) − ckki^{1−α}.
The first-order conditions of these maximizations are the following:

νδ[θ0 + θ1]/(δ + νki^α[θ0 + θ1])² · (1 − cv) = ck,    (A-7)
νδθ0/(δ + νki^{1−α}θ0)² · θ0/(θ0 + θ1) = ck.    (A-8)
Using (13), we get that ki^α[θ0 + θ1] = ki^{1−α}θ0, or

ki^α/ki^{1−α} = θ0/(θ0 + θ1),    (A-9)

meaning that non-verifiers socialize more in equilibrium. Note that verifiers and non-verifiers obtain the same utility, so that in equilibrium there is no incentive for agents to change their verification choice.