Willful Ignorance in a World of Biased Misinformation
David Bjerk
Robert Day School of Economics and Finance
Claremont McKenna College
[email protected]
February 6, 2014
Abstract

Suppose individuals face a claim of uncertain validity and can only obtain more information from sources known to sometimes report biased misinformation for or against the claim. In such an environment, it is optimal for Bayesian expected utility maximizers to select the source biased toward their current behavior, even if they know that source more frequently reports misinformation than an oppositely biased one. Moreover, if some misinformation in reporting is inevitable, it may be optimal for one information source to report misinformation frequently, while the source on the other side reports as little misinformation as possible.
Thanks to seminar participants at UC-Irvine, Claremont McKenna College, Claremont Graduate University, and participants at the 2013 American Law and Economics Association Meetings for helpful comments on earlier versions of this work. Thanks to the Lowe Institute for Political Economy for financial help on this project.
In order to maintain an untenable position, you have to be actively
ignorant. -Stephen Colbert
All I know is just what I read in the papers, and that’s my alibi for
my ignorance. -Will Rogers
1 Introduction
When deciding whether to act on a claim of uncertain validity, does it ever make
sense to inform yourself about that claim from a source you know reports incorrect
information more often than another source available at a similar cost? While such
a choice may seem odd, anecdotal examples suggest that it may not be all that rare.
For example, Wolfe, Sharpe, and Lipsky (2002) found that of the twenty-two English language anti-vaccination websites they identified, 100 percent claimed vaccinations increase the likelihood of autism. However, the only truly scientific source supporting this claim was a 1998 Lancet article by Andrew Wakefield that was later shown to be fraudulent and was fully retracted by the Lancet's editors by 2010 (Wallis 2010). Despite this, one of the leading information sources promoting the claim that vaccines increase the risk of autism, Generation Rescue, continues to cite Wakefield's study and conclusions (McCarthy 2012). More broadly, in summarizing the behavior of anti-vaccine advocates, the journalist Emily Willingham states "those who insist that autism and vaccines are linked resurrect old information, repackage it in their skewed agenda, and misrepresent the relevance of court rulings to make it look like there's a link" (Willingham 2013).
Yet, annually, an estimated 17,000 children do not receive vaccinations in the United States, with a majority of the parents who chose not to vaccinate their children citing as their reason a belief that vaccines increase the likelihood of autism (Smith, Chu, and Barker 2004). Moreover, in a related study, the majority of parents who chose not to vaccinate their children due to a belief that vaccines are related to autism cited "the internet" as providing the primary support for these beliefs, even though they acknowledged that such sources were likely to be biased (Fredrickson et al. 2004). However, it is notable that this study also found that parents who chose not to vaccinate their children were likely to say they believed information on childhood immunizations coming from the CDC may be biased. Indeed, Wolfe, Sharpe, and Lipsky (2002) also found that roughly 90 percent of the anti-vaccination websites they identified present some version of the claim that government sources misreport and otherwise unfairly criticize reports claiming to find a link between vaccines and autism due to influence from drug companies.
Given an environment such as this, where individuals believe information sources on both sides of a claim sometimes report biased misinformation rather than actual new information, I show that the behavior described above, while seemingly odd, may not actually be irrational. In particular, it turns out to be optimal for a Bayesian expected utility maximizing individual to inform himself about an uncertain claim from a source he knows to be biasing its misinformation in the direction he is already leaning or behaving, even if he is fully aware that the source he is choosing reports misinformation more frequently than an alternative one biasing its misinformation toward the other side of the issue. In other words, in a world of biased misinformation, it can be rational to be willfully ignorant.
The speci…c environment I consider is one in which individuals face a claim that
says by taking a costly action they can increase the likelihood of experiencing a
positive utility shock. However, individuals do not know whether or not the claim is
true at the time they must choose whether or not to act on the claim. Rather, they
simply have a belief regarding whether or not the claim is true. Before making their
decision, however, they can consult one of two information sources, both of which observe unbiased informative signals regarding the validity of the claim. However, neither information source is fully truthful in its reporting of these signals to individuals. One is biased toward the claim being true and reports the actual underlying signal with some probability less than one, and a fictitious signal in support of the claim with some probability greater than zero. The other is biased against the claim and reports the actual underlying signal with some probability less than one, and a fictitious signal in opposition to the claim with some probability greater than zero.
In this rather general environment, the first part of this paper shows it to be optimal for a Bayesian expected utility maximizing individual to choose the information source biased toward how he would initially choose to act, even if he knows that source reports biased misinformation more often than the other. The intuition for this result rests on the key insight that information is only important to an expected utility maximizer to the extent to which it can alter his or her beliefs enough to change his or her actions. The frequency and bias of misinformation from an information source matter in that they affect the degree to which information from that source can alter a person's beliefs enough to actually affect behavior. Information against the claim from the source that biases its misinformation towards the claim being true will have a greater impact on a person's beliefs, and potentially his or her behavior, than similar information from the source biasing its misinformation towards the claim being false.
To better understand the underlying intuition, consider an individual who buys organic food because he believes it will make his family healthier, and suppose this person believes Fox News sometimes reports misinformation against the health benefits of organic food, but National Public Radio sometimes reports misinformation promoting the health benefits of organic food. Hearing a report suggesting that eating organic food has little impact on child health likely has a bigger impact on the individual's beliefs if it is heard on NPR than on Fox News, since he knows NPR will only report such information if it is actually true, while Fox might report such information even if new evidence arises in support of the health benefits of organic food. In this way, such a report on NPR may have a bigger influence on the individual's behavior regarding whether to continue buying organic food than the same report on Fox News. Alternatively, information in support of organic food coming from either source will not affect this person's behavior, as such information will never cause the individual to stop buying organic food. Therefore, information from a source biasing its misinformation toward the way one is already acting can actually be more valuable than information from a source offering less frequent misinformation but biased in the opposite direction, as it will have a greater instrumental value with respect to changing behavior.
While the model obviously applies to individuals attempting to inform themselves from media sources regarding issues ranging from the causes of autism to nutrition, homeopathic medicine, and climate change, it is also general enough to apply to a wide range of other domains, such as a politician seeking policy advice on a given issue, voters deciding which political party to listen to regarding whether to support a given issue, a consumer deciding which salesperson to consult regarding a given purchase, or an investigator seeking informants in a criminal or civil case. In all such examples, if the decisionmaker believes all of the available information sources are prone to offering biased misinformation, it will generally be rational for him to choose the one whose bias corresponds to his own initial beliefs, even if he knows that source is less trustworthy than an alternately biased one.
The second part of this paper then considers the behavior of information sources with opposing biases on a given issue. Using simulations, I show two general patterns that arise under relatively generic assumptions. First, if one assumes a little misinformation in reporting is inevitable, and one side chooses to report with the minimal possible misinformation, then the other side may do better in the long run by reporting misinformation with a relatively high likelihood. Alternatively, if one side chooses to report misinformation quite frequently, the alternately biased source may improve its long run objectives by not responding in kind, but rather reporting misinformation as little as possible.

In general, this model shows that even in a world of rational expected utility maximizers, a sizeable fraction of the population can continue for extended periods of time to believe in and act on a claim, and to inform themselves about the claim from a source known to often offer biased misinformation, even when the rest of the population is quite certain that the claim is untrue. However, while choosing to inform oneself from a source that offers more misinformation than a competing one may be optimal for an individual, and offering misinformation with high frequency may be optimal for an information source, neither will generally be optimal for society at large.
2 Related Literature
The perception that information sources are often biased is quite widespread. For example, only 26 percent of Americans say that news organizations are careful that their reporting is not politically biased (Pew Research Center 2009). Evidence that these perceptions have some basis in reality is presented in Groseclose and Milyo (2005) and Gentzkow and Shapiro (2010), who develop distinct measures that both suggest most major news outlets indeed report with a bias. Moreover, DellaVigna and Kaplan (2007) present evidence that the introduction of a news source with a known bias can impact behavior.
This paper builds on the relatively recent theoretical literature on media bias, developed in part by the novel work of Baron (2006), Mullainathan and Shleifer (2003), and Gentzkow and Shapiro (2006). Baron (2006) considers a supply side explanation, where personal preferences or career prospect concerns may cause journalists to accept a lower wage in return for less oversight and greater tolerance from their superiors for biased misinformation in their reporting.^1 This contrasts with the more demand side explanations posed by Mullainathan and Shleifer (2003) and Gentzkow and Shapiro (2006). In Mullainathan and Shleifer (2003), there exist individuals who are assumed to want to know the truth and individuals who care about the truth but also have significant preferences for not hearing information inconsistent with their own prior beliefs. The presence of these news consumers with a preference for some biased misinformation can cause news sources to offer only biased misreporting in the case of a monopoly, or polarizing viewpoints in a duopoly setting. In Gentzkow and Shapiro (2006), news sources want to build a reputation for quality reporting of the truth, but quality is not directly observable. However, rational consumers will tend to place more faith in news sources whose reports are generally consistent with their prior beliefs. This allows a demand for biased misinformation to arise endogenously, causing news sources to sometimes provide misinformation with a particular bias on a given issue to build a reputation as a higher quality source, especially concerning issues that will take a long time to reveal their true state.
As will be seen, the model developed below contains elements of all these approaches. Like in Baron (2006), information providers are allowed ideologies or preferences beyond profit maximization. Like in Gentzkow and Shapiro (2006) and Mullainathan and Shleifer (2003), consumers may end up choosing to consume information from a source that offers biased misinformation more often than other available sources. However, in contrast to Baron, the model below does not hinge on it being costly to an information source to reduce misinformation from its reporters. Unlike Mullainathan and Shleifer (2003), the model below does not require the assumption that some consumers incur substantial disutility from reading information inconsistent with their beliefs. Finally, unlike Gentzkow and Shapiro (2006), where consumers explicitly do not know the relative likelihood with which each source is reporting misinformation, consumers in the model below know exactly the relative likelihood with which each information source offers biased misinformation rather than actual new information. This is not to say that the frictions assumed in these previous papers are flawed. Rather, this paper simply focuses on a different, potentially additional friction.

^1 In some sense, Crawford and Sobel's (1982) "Cheap Talk" model also has some aspect of biased information from the supply side, where a biased information source may choose to "coarsen" the underlying information it delivers to the recipient.
Biased information reporting also arises in Prendergast's (1993) theory of "Yes Men," which is somewhat similar to Gentzkow and Shapiro's (2006) model in that workers/advisors may find it optimal to misrepresent the information they uncover to be closer to their manager's findings in an effort to be perceived as providing more accurate information. Again, this differs from the model below in that the consumers of information in the model developed below will know exactly the bias, and therefore relative accuracy, of each of the available sources.
This paper is also closely related to the novel work of Calvert (1985) and Suen (2004). In both these papers, individuals are uncertain about a claim that they must decide whether to act on. Each period, noisy continuous signals regarding the veracity of the claim arise, but individuals cannot observe this new information directly; rather, they must choose an intermediary (e.g., an advisor or a newspaper) to "interpret" new information, where this intermediary reveals whether the emitted signal exceeded a given threshold and different intermediaries offer different thresholds. These models reveal that individuals will generally choose an intermediary that has a threshold biased towards the individual's prior (i.e., the source will only report information in conflict with the individual's prior if the observed signal is strongly against the individual's prior).
Like Calvert's (1985) and Suen's (2004) models, in the model developed below, individuals cannot observe the available information about a claim directly, but rather must go through a possibly "biased" information intermediary. However, key to this analysis is that an intermediary's "bias" works quite differently than in Calvert's and Suen's models. In those models, by selecting an information source with more "bias" an individual knows he will be less likely to get new information regarding a claim in any given period, but when he does learn new information, that information will be more precise regarding the strength of the underlying evidence for the claim. Hence, a more biased information source is not offering more misinformation; rather, it is just offering less frequent but more precise information. By contrast, in the model below, a more biased source offers information just as frequently as a less biased source, but that information is more likely to be misinformation rather than any actual new evidence.
This facet of the model, where individuals potentially choose an information source that they know is objectively less informative than another, or where individuals choose to be willfully ignorant, arises to some extent in several papers in the economics literature in somewhat different contexts. For example, Benabou and Tirole (2002) develop a model where individuals choose coarser information about their ability in order to keep their self-confidence high enough to overcome their tendency to procrastinate or fail to undertake potentially beneficial actions. Carrillo and Mariotti (2000) also consider a model where individuals weight current payoffs disproportionately high relative to future payoffs. Such dynamic inconsistency may mean that plans that are optimal for the current "self" may no longer be optimal for "future" selves, which becomes a problem when individuals are unable to commit to a given consumption path. Therefore, individuals may choose to forgo better information regarding the likelihood of different outcomes from current choices, because the current self cannot hide that information from his future self and therefore cannot trust how that future self will act on that information.
A certain amount of willful ignorance also arises in Dal Bo and Tervio's (2008) model of individual corruption. Namely, individuals may try to stay willfully ignorant of whether they are a good type or a bad type by choosing to resist temptations, even though they would immediately learn they are a bad type if they did not resist, since only bad types succumb to temptations. In Koszegi (2006), individuals may avoid certain potentially productive tasks in order to avoid learning bad information about their ability, and the associated costs to their ego. Somewhat similarly, Karlsson, Loewenstein, and Seppi (2009) model what they term the "ostrich effect" in the context of investors. Specifically, under some parameterizations of their model, investors may choose to put off learning important information about their returns until a later period, even though such learning would be costless.^2
^2 There are also several related models where individuals do not choose to be underinformed or misinformed, but rather become misinformed due to their underlying psychological tendencies. For example, Blomberg and Harrington (2000) consider a model where individuals update their beliefs regarding a certain issue, but some are affected by this new information more than others. Kopczuk and Slemrod (2005) develop a model where individuals willingly repress (i.e., forget) relevant information regarding the likelihood of their own mortality in order to reduce fear of death. Experimental work by Eil and Rao (2011) shows that subjects often discounted negative information about themselves, and avoided learning potentially useful information when that information might be costly to their self-image.
3 Model
Suppose individuals will incur a benefit of size $v$ at the end of a given period $t$ with probability $p_0 \in (0,1)$. They also encounter a claim stating that if they take some action at a cost $c$, they will increase the likelihood of incurring the benefit $v$ that period to $p_1 \in (0,1)$, where $p_1 > p_0$.^3 However, when choosing whether to act on the claim the individual does not know whether the claim is true or false.^4 Rather, upon entering any period $t$ each person has a belief $\mu_{t-1} \in (0,1)$ that the claim is true. Moreover, before choosing his action, an individual can consult one of two information sources reporting on the veracity of the claim. At the beginning of a period, both information sources observe a signal from nature regarding whether or not the claim is true that individuals cannot observe and/or interpret by themselves. In particular, both information sources observe a signal $\theta \in \{P, N\}$ each period, where $\Pr(\theta = P \mid True) = \gamma$ (in words, the probability of observing a $P$ signal given the claim is actually true equals $\gamma$), for some $\gamma \in (0.5, 1)$.^5 Continuing, $\Pr(\theta = N \mid True) = 1 - \gamma$, $\Pr(\theta = P \mid False) = 1 - \gamma$, and $\Pr(\theta = N \mid False) = \gamma$. Intuitively, if the claim is true, then there is a higher likelihood of observing a $P$ (or "positive") signal than an $N$ (or "negative") signal, while if the claim is false, there is a higher likelihood of observing an $N$ signal than a $P$ signal. The parameter $\gamma$ is simply the precision of the information nature can offer regarding the validity of the claim.

While both information sources observe a distinct underlying signal from nature each period, suppose both are biased in their reporting of the underlying signal they observe. One of the sources is positively biased (hereafter also referred to as the positive bias source), while the other is negatively biased (hereafter also referred to as the negative bias source). In practice, bias here refers to the likelihood that instead of reporting the true observed signal $\theta$, a source simply reports a signal corresponding to its bias. Therefore, if we let the parameter $b_P$ capture the likelihood of reporting biased misinformation by the positive bias source, then the reported signal from the positive bias source is a $P$ with probability $b_P$, and equals $\theta$ with probability $1 - b_P$. Similarly, if we let the parameter $b_N$ capture the likelihood of reporting biased misinformation by the negative bias source, then the reported signal from the negative bias source is an $N$ with probability $b_N$, and equals $\theta$ with probability $1 - b_N$.

^3 Note that all the results are also consistent with a set-up where individuals incur a negative shock of size $v$ each period with probability $p_0$, but encounter a claim stating that if they take some action at a cost $c$, they will decrease the likelihood of incurring the shock each period to $p_1$, where $p_1 < p_0$, which may be a set-up more consistent with some motivating examples. However, this latter way requires accounting for and keeping track of numerous negative signs, which adds needless complexity to the math below.

^4 A claim is "false" if the person takes on the action at cost $c$ in the period, but the likelihood of incurring the benefit $v$ that period remains $p_0$.

^5 Note, it need not be the case that both sources observe the same signal in any given period.
There are several interpretations of this tendency to report biased misinformation. The most direct interpretation is simply that a source sometimes chooses to outright misrepresent or ignore newly available evidence. However, there are also more nuanced views. For example, one can consider $b_i$ to be a parameter that captures the depth of reporting for source $i$, where that source uncovers new information about the claim with probability $1 - b_i$ and simply repackages old information consistent with its bias with probability $b_i$. Or, we can assume reporters all have their own biases a la Baron (2006), and managers of any given information source cannot fully eradicate such biases in their editing and oversight. Finally, one can presume information sources suffer from cognitive bias of a form similar to that described by Rabin and Schrag (1999). Namely, each information source suffers from a cognitive bias that impedes its ability to correctly evaluate newly available evidence regarding the claim, and instead causes it to interpret that evidence in a manner consistent with its own bias.

While any of these interpretations work for the analysis of individuals that follows, only the first is consistent with the later analysis where information sources choose their likelihood of reporting misinformation. However, as will be discussed later, the preferred interpretation is that limitations on a source's ability to uncover new information, an inability to eradicate reporter bias, and cognitive bias among information sources can all cause information sources to always report with at least some minimal level of misinformation consistent with their bias. However, information sources can also choose to report more misinformation if they so desire.
After choosing an information source and observing the reported signal regarding the veracity of the claim from that source, individuals update their beliefs to $\mu_t$ using Bayes' rule, then based on these updated beliefs decide whether or not to act on the claim in period $t$ and observe whether or not they incur benefit $v$. Individuals then move on to period $t+1$, at which point they face the same claim as before and must again choose an information source and then an action regarding the claim, but now enter the period with beliefs $\mu_t$. Individuals play a total of $N$ periods, where $N$ is any positive integer.

Finally, assume that $p_1$ and $p_0$ are sufficiently close such that the information content of observing whether or not the benefit $v$ is incurred at the end of a period after choosing to act on a claim is negligible. This allows us to consider the decisions in a one period setting rather than in a fully dynamic multi-period framework where individuals may choose to act on the claim in a given period in order to obtain further information regarding its veracity for use in later periods. While such a model would be interesting, it considers a different type of problem than is of interest here. Given this, we can analyze the model using backward induction regarding the choices that must be made in any given single period. Therefore, in the analysis that follows, the period subscript $t$ is suppressed.
3.1 Choosing an Action
Given this set-up, an individual who has beliefs $\mu$ at the beginning of a period will incur an expected utility of $\mu(p_1 v) + (1-\mu)(p_0 v) - c$ if he acts on the claim, and an expected utility of simply $p_0 v$ if he does not act on the claim. Therefore, if an individual is an expected utility maximizer, he will act on the claim if and only if $\mu(p_1 v) + (1-\mu)(p_0 v) - c > p_0 v$, or if and only if $\mu > c/(p_1 v - p_0 v)$. If we define $\Delta$ to equal $p_1 v - p_0 v$ (i.e., $\Delta$ equals the expected gross benefit of acting on the claim if it is indeed true), then each person's optimal action with respect to the claim is summarized in Proposition 1 below:

Proposition 1 An individual will act on the claim in a given period if and only if his beliefs $\mu$ that period exceed $\mu^* = c/\Delta$.
The above proposition is quite intuitive and straightforward. The remainder
of this section analyzes an individual’s decision regarding choosing an information
source to consult regarding the validity of the claim.
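Before turning to that choice, the action rule of Proposition 1 can be made concrete with a minimal sketch. The belief values passed in are illustrative; $c = 1$ and $\Delta = 2$ happen to match the parameterization used later in the Section 4 simulations, giving $\mu^* = 0.5$.

```python
def act_on_claim(mu, c, delta):
    """Proposition 1: act if and only if beliefs mu exceed mu* = c / delta.

    mu:    current belief that the claim is true
    c:     cost of acting on the claim
    delta: expected gross benefit of acting if the claim is true (p1*v - p0*v)
    """
    return mu > c / delta

# With c = 1 and delta = 2 (the Section 4 parameterization), mu* = 0.5.
print(act_on_claim(0.6, c=1, delta=2))  # True:  0.6 > 0.5
print(act_on_claim(0.4, c=1, delta=2))  # False: 0.4 < 0.5
```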
3.2 Choosing an Information Source Regarding the Validity of the Claim
For the interest of this paper, I will assume that individuals know the bias of each source, and that one information source reports misinformation more frequently than the other; in particular, assume $b_P > b_N$. As stated in the introduction, the question of interest is when (if ever) it will be optimal for an individual to choose the information source that reports misinformation with higher frequency than the alternative one that is available.
3.2.1 Belief Updating
To analyze the key question stated above, let us first consider how individuals update their beliefs after observing the signal reported by a given information source. To isolate the impact of biased misinformation among information sources, let us suppose individuals update their beliefs optimally and unbiasedly using Bayes' rule and perfect information regarding the underlying signal process (as well as the extent of each source's treatment of the underlying signals). Given this assumption, let us first consider an individual with initial beliefs $\mu$ at the beginning of a period who obtains information from the positive bias source. If he observes a $P$ signal from this source, his updated beliefs will equal

\[
\mu_{b_P}(P) = \frac{\mu\left((1-b_P)\gamma + b_P\right)}{\mu\left((1-b_P)\gamma + b_P\right) + (1-\mu)\left((1-b_P)(1-\gamma) + b_P\right)}. \tag{1}
\]

Similarly, if this individual observes an $N$ signal from this positive bias source, his updated beliefs will equal

\[
\mu_{b_P}(N) = \frac{\mu(1-b_P)(1-\gamma)}{\mu(1-b_P)(1-\gamma) + (1-\mu)(1-b_P)\gamma},
\]

or simplifying,

\[
\mu_{b_P}(N) = \frac{\mu(1-\gamma)}{\mu(1-\gamma) + (1-\mu)\gamma}. \tag{2}
\]

As can be seen from equations (1) and (2) above, the positive bias ($b_P$) affects an individual's beliefs upon observing a $P$ signal from the positive bias source, since he knows there is some chance that it is a "false" signal, but does not affect his beliefs upon observing an $N$ signal from the positive bias source, since he knows that must indeed correspond to the true underlying signal.
Next consider an individual with initial beliefs $\mu$ at the beginning of a period who obtains information from the negative bias source. If he observes a $P$ signal from this source, his updated beliefs will equal

\[
\mu_{b_N}(P) = \frac{\mu\gamma}{\mu\gamma + (1-\mu)(1-\gamma)}. \tag{3}
\]

Similarly, if this individual observes an $N$ signal from this negative bias source, his updated beliefs will equal

\[
\mu_{b_N}(N) = \frac{\mu\left((1-b_N)(1-\gamma) + b_N\right)}{\mu\left((1-b_N)(1-\gamma) + b_N\right) + (1-\mu)\left((1-b_N)\gamma + b_N\right)}. \tag{4}
\]

So with respect to the negative bias source, the misinformation affects an individual's beliefs upon observing an $N$ signal (since such a signal might be "false"), but does not affect an individual's beliefs upon observing a $P$ signal, since such a signal must be indicative of an underlying $P$ signal from nature.
Given the above equations, one can easily confirm that the following Proposition holds:

Proposition 2 For any initial beliefs $\mu$: $\mu_{b_P}(N) < \mu_{b_N}(N) < \mu < \mu_{b_P}(P) < \mu_{b_N}(P)$.
Intuitively, given the individual knows the bias of each source, observing an N
(or “negative”) signal from the positive bias source will cause him to downward
adjust his beliefs more than he would if he observed an N signal from the negative
bias source, since he knows that an N signal from the negative bias source might
be the result of misinformation. Similarly, observing a P (or “positive”) signal from
the positive bias source will cause him to upward adjust his beliefs by less than he
would if he observed a P signal from the negative bias source, since he knows that
a P signal from the positive bias source might be the result of misinformation.
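The updating rules in equations (1) through (4), and the ordering in Proposition 2, are easy to check numerically. The following is a minimal sketch; the particular values of $\mu$, $\gamma$, $b_P$, and $b_N$ are illustrative only.

```python
def posterior_positive(mu, gamma, b_P, signal):
    """Updated belief after a report from the positive bias source."""
    if signal == "P":  # equation (1): a P report may be fabricated
        num = mu * ((1 - b_P) * gamma + b_P)
        den = num + (1 - mu) * ((1 - b_P) * (1 - gamma) + b_P)
    else:              # equation (2): an N report must reflect a true N signal
        num = mu * (1 - gamma)
        den = num + (1 - mu) * gamma
    return num / den

def posterior_negative(mu, gamma, b_N, signal):
    """Updated belief after a report from the negative bias source."""
    if signal == "P":  # equation (3): a P report must reflect a true P signal
        num = mu * gamma
        den = num + (1 - mu) * (1 - gamma)
    else:              # equation (4): an N report may be fabricated
        num = mu * ((1 - b_N) * (1 - gamma) + b_N)
        den = num + (1 - mu) * ((1 - b_N) * gamma + b_N)
    return num / den

# Check the Proposition 2 ordering for one illustrative parameterization.
mu, gamma, b_P, b_N = 0.6, 0.75, 0.5, 0.15
beliefs = [posterior_positive(mu, gamma, b_P, "N"),   # ~0.333
           posterior_negative(mu, gamma, b_N, "N"),   # ~0.408
           mu,                                        #  0.600
           posterior_positive(mu, gamma, b_P, "P"),   # ~0.677
           posterior_negative(mu, gamma, b_N, "P")]   # ~0.818
assert beliefs == sorted(beliefs)  # mu_bP(N) < mu_bN(N) < mu < mu_bP(P) < mu_bN(P)
```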
3.2.2 Reacting to New Information
Given the belief updating specified above, we can now consider how individuals' behavior will respond to different sorts of new information from the two information sources. In doing so, we will only consider individuals whose beliefs entering a period are such that $\mu > \mu^*$, or in words, individuals whose beliefs are such that they must have found it optimal to act on the claim last period. This is without loss of generality, as one could make an argument essentially analogous to what follows for those with $\mu < \mu^*$.
First note that for individuals acting on the claim, observing a $P$ signal from either information source will strengthen their beliefs that the claim is true. However, this will not have any impact on their behavior, as they will simply continue to act on the claim as they were before. Next consider the consequence of observing an $N$ signal from the positive bias source. Note that an individual acting on the claim will only change his behavior in response to observing such information if his initial beliefs $\mu$ are such that his updated beliefs after seeing this signal, $\mu_{b_P}(N)$, are less than $\mu^*$. Applying equation (2) and the definition of $\mu^*$ from Proposition 1, the aforementioned condition amounts to the following:

\[
\frac{\mu(1-\gamma)}{\mu(1-\gamma) + (1-\mu)\gamma} < \frac{c}{\Delta}.
\]

Re-arranging and simplifying the above expression we get

\[
\mu < \frac{c\gamma}{(\Delta - c)(1-\gamma) + c\gamma}.
\]

So, we can say that an individual who is acting on the claim will stop acting on the claim if he observes an $N$, or "negative," signal from the positive bias source and his current beliefs are such that $\mu < \mu_1$, where $\mu_1$ is defined by the following equation:

\[
\mu_1 = \frac{c\gamma}{(\Delta - c)(1-\gamma) + c\gamma}. \tag{5}
\]
Next, consider the consequence of observing an $N$ signal from the negative bias source. An individual acting on the claim will change his behavior if and only if his beliefs $\mu$ going into the period are such that his updated beliefs $\mu_{b_N}(N)$ upon observing such a signal are less than $\mu^*$. Again, applying equation (4) and the definition of $\mu^*$ from Proposition 1, this condition amounts to

\[
\frac{\mu\left((1-b_N)(1-\gamma) + b_N\right)}{\mu\left((1-b_N)(1-\gamma) + b_N\right) + (1-\mu)\left((1-b_N)\gamma + b_N\right)} < \frac{c}{\Delta}.
\]

Re-arranging and simplifying the above expression we get

\[
\mu < \frac{c\left((1-b_N)\gamma + b_N\right)}{(\Delta - c)\left((1-b_N)(1-\gamma) + b_N\right) + c\left((1-b_N)\gamma + b_N\right)}.
\]

So, we can say that an individual who is acting on the claim will stop acting on the claim if he observes an $N$ signal from the negative bias source and his current beliefs are such that $\mu < \mu_2$, where $\mu_2$ is defined by the following equation:

\[
\mu_2 = \frac{c\left((1-b_N)\gamma + b_N\right)}{(\Delta - c)\left((1-b_N)(1-\gamma) + b_N\right) + c\left((1-b_N)\gamma + b_N\right)}. \tag{6}
\]
Simple comparisons between equation (5), equation (6), and the definition of $\mu^*$ in Proposition 1 will confirm the following proposition:

Proposition 3 $\mu^* < \mu_2 < \mu_1$ for all $0 < b_N < 1$.
Given the above definitions and Proposition 2, we can now stratify those acting on the claim into three groups: (i) the True Believers are individuals whose beliefs coming into a period exceed $\mu_1$, meaning they act on the claim and would not switch behavior this period regardless of what signal they observe from either information source; (ii) the Strong Believers are individuals whose beliefs coming into a period are between $\mu_2$ and $\mu_1$, meaning they act on the claim, but would switch their behavior if they observed an $N$ signal from the positive bias source but not if they observed a similar $N$ signal from the negative bias source; (iii) finally, the Tentative Believers are individuals whose beliefs coming into a period exceed $\mu^*$ but are less than $\mu_2$, meaning they act on the claim but would switch their behavior upon observing an $N$ signal from either information source. Figure 1 summarizes this information.
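A short sketch can compute these thresholds and classify believers. Here $c$, $\Delta$, and $\gamma$ follow the Section 4 parameterization, while the $b_N$ and belief values are illustrative.

```python
def thresholds(c, delta, gamma, b_N):
    mu_star = c / delta                                          # Proposition 1
    mu_1 = c * gamma / ((delta - c) * (1 - gamma) + c * gamma)   # equation (5)
    num = c * ((1 - b_N) * gamma + b_N)
    mu_2 = num / ((delta - c) * ((1 - b_N) * (1 - gamma) + b_N) + num)  # eq. (6)
    return mu_star, mu_1, mu_2

def classify(mu, c, delta, gamma, b_N):
    mu_star, mu_1, mu_2 = thresholds(c, delta, gamma, b_N)
    if mu <= mu_star:
        return "does not act on the claim"
    if mu < mu_2:
        return "Tentative Believer"
    if mu < mu_1:
        return "Strong Believer"
    return "True Believer"

# With c = 1, delta = 2, gamma = 0.75 and b_N = 0.15, Proposition 3's
# ordering mu* < mu_2 < mu_1 holds: (0.5, ~0.685, 0.75).
print(thresholds(1, 2, 0.75, 0.15))      # (0.5, 0.75, ~0.685)
print(classify(0.72, 1, 2, 0.75, 0.15))  # Strong Believer
```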
3.2.3 Choosing an Information Source
We can now analyze which information source would be optimal for a person to choose depending on his beliefs coming into a period, assuming individuals can consume only one source. To do so, we will analyze the three groups defined above separately.

True Believers ($\mu_1 < \mu$)
These individuals will not change their behavior regardless of what they observe from either information source. Therefore, at the beginning of a period before observing any new information, their expected utility from choosing the negative bias source is $EU(Negative) = \mu(p_1 v - c) + (1-\mu)(p_0 v - c)$. Recalling that we earlier defined $\Delta = p_1 v - p_0 v$, the previous expression simplifies to

\[
EU(Negative) = p_0 v + \mu\Delta - c.
\]

However, since these individuals will not change their behavior even if they observe an $N$ signal from the positive bias source, their expected utility from choosing the positive bias source is also $EU(Positive) = \mu(p_1 v - c) + (1-\mu)(p_0 v - c)$, or more simply

\[
EU(Positive) = p_0 v + \mu\Delta - c.
\]

Given the expected utility from choosing the positive bias source is identical to the expected utility from choosing the negative bias source, True Believers are indifferent between information sources regardless of how much more biased one is than the other. This is actually quite obvious, as there is no instrumental value of information for True Believers, since they will continue to do this period what they did last period regardless of any information they observe. However, if one makes the assumption that individuals incur even the slightest bit of utility from observing information consistent with their current behavior, or incur the slightest bit of disutility from observing information in conflict with their current behavior, then True Believers would always choose the positive bias information source regardless of its bias relative to the negative bias source (i.e., regardless of the difference between $b_P$ and $b_N$).
Strong Believers ($\mu_2 < \mu < \mu_1$)

These individuals will change their behavior if they observe an $N$ signal from the positive bias source, but not if they observe an $N$ signal from the negative bias source. Therefore, like the True Believers above, at the beginning of a period their expected utility from choosing the negative bias source is

\[
EU(Negative) = p_0 v + \mu\Delta - c. \tag{7}
\]

However, given these individuals will stop acting on the claim if they observe an $N$ signal from the positive bias source, but otherwise continue what they are doing, their expected utility from choosing the positive bias source is

\[
EU(Positive) = \mu\left[\left((1-b_P)\gamma + b_P\right)(p_1 v - c) + (1-b_P)(1-\gamma)p_0 v\right] + (1-\mu)\left[\left((1-b_P)(1-\gamma) + b_P\right)(p_0 v - c) + (1-b_P)\gamma p_0 v\right].
\]

Again noting $\Delta = p_1 v - p_0 v$, we can simplify the previous expression to

\[
EU(Positive) = p_0 v + \mu\left((1-b_P)\gamma + b_P\right)\Delta - \left((1-b_P)\left(\mu\gamma + (1-\mu)(1-\gamma)\right) + b_P\right)c. \tag{8}
\]
Clearly, an individual will have higher expected utility by consuming the positive bias source if and only if $EU(Positive) - EU(Negative) > 0$. Given equations (7) and (8) above, this will be true if and only if

\[
\left[\mu\left((1-b_P)\gamma + b_P\right) - \mu\right]\Delta - \left[(1-b_P)\left(\mu\gamma + (1-\mu)(1-\gamma)\right) + b_P - 1\right]c > 0. \tag{9}
\]

Simplifying the above expression we get

\[
(1-b_P)\left[\left(1 - \mu\gamma - (1-\mu)(1-\gamma)\right)c - \mu(1-\gamma)\Delta\right] > 0.
\]

Since $b_P < 1$, the above expression only holds if $\left(1 - \mu\gamma - (1-\mu)(1-\gamma)\right)c - \mu(1-\gamma)\Delta > 0$. Simplifying this once more, we can say that equation (9) will hold if and only if

\[
\mu < \frac{c\gamma}{(\Delta - c)(1-\gamma) + c\gamma}.
\]

Now, note that from equation (5) above we know $\mu_1 = \frac{c\gamma}{(\Delta - c)(1-\gamma) + c\gamma}$. So the above argument implies that $EU(Positive) - EU(Negative) > 0$ for anyone with beliefs $\mu < \mu_1$, which in turn is true by definition for the group of Strong Believers. Therefore, Strong Believers strictly prefer to obtain information from the positive bias information source over the negative bias information source regardless of how large the bias of the positive bias source is relative to that of the negative bias source (i.e., regardless of how large $b_P$ is relative to $b_N$).
The intuition for this result is that individuals in this group have strong enough beliefs that the claim is true that no information from the negative bias source can be sufficient to sway their behavior. Hence, information from the negative bias source simply doesn't have any instrumental value. On the other hand, even if the positive bias source is known to report misinformation more frequently than the negative bias source, such information can still be valuable to the Strong Believers, since there is still a possibility of observing a "negative" signal (i.e., $N$ signal) from this source, which would indeed be valuable as it would be sufficient to change their behavior.
While Strong Believers will find it optimal to choose the positive bias information source over the negative bias information source regardless of how much more frequently the positive bias source reports misinformation than the negative bias source, it is still worth considering how the expected utility of the Strong Believers is affected by the frequency of misinformation from the positive bias source. Intuition suggests that more misinformation from the positive bias source cannot be helpful for Strong Believers: this information is of value to them, so making it less accurate cannot be helpful. We can confirm this intuition by taking the derivative of equation (8) with respect to $b_P$ and checking whether or not it is negative, or confirming that

\[
\frac{\partial EU(Positive)}{\partial b_P} = \mu(1-\gamma)\Delta - \left(1 - \mu\gamma - (1-\mu)(1-\gamma)\right)c < 0.
\]

With some manipulation, the above expression will be true as long as

\[
\mu < \frac{c\gamma}{(\Delta - c)(1-\gamma) + c\gamma}.
\]

The above expression implies $\frac{\partial EU(Positive)}{\partial b_P}$ will be negative as long as $\mu < \mu_1$, which is again true by definition for Strong Believers.
So indeed, while Strong Believers are willing to choose the positive bias source over the negative bias source no matter how frequently it reports misinformation relative to the negative bias source, Strong Believers are still better off with less misinformation from the positive bias source.
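Both results, the unconditional preference for the positive bias source and the harm from extra misinformation, can be illustrated numerically with the following minimal sketch. The values of $p_0$, $v$, and $\mu$ are illustrative choices of mine; $c$, $\Delta$, $\gamma$, and $b_N$ follow the Section 4 parameterization.

```python
def eu_negative(mu, p0, v, c, delta):
    # Equation (7): a Strong Believer never switches on the negative source's
    # reports, so that source has no instrumental value.
    return p0 * v + mu * delta - c

def eu_positive(mu, p0, v, c, delta, gamma, b_P):
    # Equation (8): an N report from the positive bias source stops the action.
    a = mu * gamma + (1 - mu) * (1 - gamma)
    return p0 * v + mu * ((1 - b_P) * gamma + b_P) * delta - ((1 - b_P) * a + b_P) * c

p0, v, c, delta, gamma = 0.3, 5.0, 1.0, 2.0, 0.75
mu = 0.72  # a Strong Believer when b_N = 0.15 (mu_2 ~ 0.685 < mu < mu_1 = 0.75)
for b_P in (0.16, 0.50, 0.80):
    # The gap is positive even for very high b_P, but shrinks as b_P grows:
    # ~0.025, ~0.015, ~0.006 for the three values below.
    gap = eu_positive(mu, p0, v, c, delta, gamma, b_P) - eu_negative(mu, p0, v, c, delta)
    print(b_P, round(gap, 4))
```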
Tentative Believers ($\mu^* < \mu < \mu_2$)

These individuals will change their behavior if they observe an $N$ signal from either the positive bias source or the negative bias source. Therefore, at the beginning of a period before observing any new information, their expected utility from choosing the negative bias source is

\[
EU(Negative) = \mu\left[(1-b_N)\gamma(p_1 v - c) + \left(b_N + (1-b_N)(1-\gamma)\right)p_0 v\right] + (1-\mu)\left[(1-b_N)(1-\gamma)(p_0 v - c) + \left(b_N + (1-b_N)\gamma\right)p_0 v\right].
\]

Simplifying, the above expression becomes

\[
EU(Negative) = p_0 v + \mu(1-b_N)\gamma\Delta - (1-b_N)\left(\mu\gamma + (1-\mu)(1-\gamma)\right)c. \tag{10}
\]
Similarly, at the beginning of a period before observing any new information, their expected utility from choosing the positive bias source is

\[
EU(Positive) = \mu\left[\left((1-b_P)\gamma + b_P\right)(p_1 v - c) + (1-b_P)(1-\gamma)p_0 v\right] + (1-\mu)\left[\left((1-b_P)(1-\gamma) + b_P\right)(p_0 v - c) + (1-b_P)\gamma p_0 v\right],
\]

which can again be simplified to

\[
EU(Positive) = p_0 v + \mu\left((1-b_P)\gamma + b_P\right)\Delta - \left((1-b_P)\left(\mu\gamma + (1-\mu)(1-\gamma)\right) + b_P\right)c. \tag{11}
\]
Like before, an individual will have higher expected utility by consuming the positive bias source if and only if $EU(Positive) - EU(Negative) > 0$, which given equations (10) and (11) above will only be true if

\[
\left[\left((1-b_P)\gamma + b_P\right) - (1-b_N)\gamma\right]\mu\Delta - \left[(1-b_P)\left(\mu\gamma + (1-\mu)(1-\gamma)\right) + b_P - (1-b_N)\left(\mu\gamma + (1-\mu)(1-\gamma)\right)\right]c > 0.
\]

Simplifying this equation we get

\[
b_P(\mu\Delta - c) + (b_N - b_P)\left[\gamma\mu\Delta - \left(\mu\gamma + (1-\mu)(1-\gamma)\right)c\right] > 0.
\]

Manipulating the above expression a bit further we get the condition

\[
\frac{b_P - b_N}{b_N} < \frac{\mu\Delta - c}{\left(\mu(1-\gamma) + (1-\mu)\gamma\right)c - \mu(1-\gamma)\Delta}.
\]

Therefore, if we define Acceptable Excess Misinformation (AEM) by the following equation

\[
AEM(\mu) = \frac{\mu\Delta - c}{\left(\mu(1-\gamma) + (1-\mu)\gamma\right)c - \mu(1-\gamma)\Delta}, \tag{12}
\]

then Tentative Believers are better off choosing the positive bias information source as long as the percentage difference in frequency of misinformation between the positive bias source and the negative bias source is less than $AEM(\mu)$; otherwise they should choose the negative bias source.
A couple of things to note here. First, since $\mu\Delta - c > 0$ for all those acting on the claim (i.e., all those for whom $\mu > \mu^*$), as shown in Proposition 1, it must be true that $AEM(\mu) > 0$ for all Tentative Believers. This implies that even the Tentative Believers will find it optimal to choose the positive bias information source over the negative bias information source even if they know it reports misinformation more frequently (just not "too" much more frequently).

The second thing to note is that $\lim_{\mu \to \mu^*} AEM(\mu) = 0$ (since $\mu^* = c/\Delta$) and $\frac{\partial AEM(\mu)}{\partial \mu} > 0$. This implies that those whose beliefs are such that they act on the claim, but are almost indifferent between acting and not acting on the claim, are willing to accept only a negligible amount of "excess misinformation" from the positive bias information source. However, as their beliefs in the validity of the claim become stronger, even Tentative Believers are willing to accept more and more "excess misinformation" from the positive bias information source relative to the negative bias information source.
The intuition for the results with respect to the Tentative Believers is a bit subtle. Essentially, it comes down to Type I versus Type II errors. Given these individuals are choosing to act upon the claim, they are more worried about not acting on the claim when it is actually true than about acting on the claim when it is actually false. Therefore, they are slightly more worried about observing a "false" negative signal that would cause them to cease acting on the claim when it is actually true than a "false" positive signal that would cause them to keep acting on the claim even though it is not true. This means such individuals are willing to accept a bit more misinformation from the source that tilts toward the claim being true than from the source that tilts toward the claim being false. The relative balance tips more and more toward the former as the individual's belief that the claim is true grows stronger.
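A minimal numerical sketch of equation (12), as reconstructed above, shows this tolerance rising with $\mu$. Here $c$, $\Delta$, and $\gamma$ again follow the Section 4 parameterization, and the belief values are illustrative Tentative Believers.

```python
def aem(mu, c, delta, gamma):
    """Equation (12): Acceptable Excess Misinformation, positive and
    increasing on (mu*, mu_1)."""
    num = mu * delta - c
    den = (mu * (1 - gamma) + (1 - mu) * gamma) * c - mu * (1 - gamma) * delta
    return num / den

c, delta, gamma, b_N = 1.0, 2.0, 0.75, 0.15
for mu in (0.51, 0.60, 0.68):  # Tentative Believers: mu* = 0.5 < mu < mu_2 ~ 0.685
    # The Tentative Believer picks the positive bias source iff
    # (b_P - b_N) / b_N < AEM(mu); so the largest tolerable b_P rises with mu.
    max_b_P = b_N * (1 + aem(mu, c, delta, gamma))
    print(mu, round(aem(mu, c, delta, gamma), 3), round(max_b_P, 3))
    # -> AEM ~0.083, ~1.333, ~5.143 and max b_P ~0.163, ~0.35, ~0.92
```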
Finally, given equation (11) and equation (8), we know the expression for $EU(Positive)$ is the same for Tentative Believers as it was for Strong Believers. Therefore, since we know that $\mu < \mu_1$ for all Tentative Believers, once again it will be true that $\frac{\partial EU(Positive)}{\partial b_P}$ will be negative for all Tentative Believers. In words, Tentative Believers will also be made worse off the greater the frequency of misinformation from the positive bias source.
3.2.4 Summarizing Behavior with Respect to Choosing an Information Source
The above argument shows that as long as all information sources report biased misinformation about a given claim, everyone who chooses to act on the claim will actually find it optimal (in a Bayesian expected utility maximizing sense) to choose the information source that biases its misinformation toward the claim, even if that source is known to report misinformation more frequently than an alternatively biased source. In other words, in a world of biased misinformation, it can be rational to be willfully ignorant.

While all those acting on the claim are willing to accept some excess misinformation from the positive bias source, they are not all the same. True Believers find the positive bias source superior no matter how much more frequently it reports misinformation than the negative bias source, and moreover, are actually made better off the more misinformation the positive bias source reports if they get even the smallest amount of disutility from hearing information in conflict with their current behavior (a la Mullainathan and Shleifer (2003)). Strong Believers also find the positive bias source superior no matter how much more frequently it reports misinformation than the negative bias source, but would prefer less misinformation from the positive bias source. Alternatively, Tentative Believers have their limits, only finding the positive bias source superior if it doesn't report misinformation "too" much more frequently than the negative bias source, and are made worse off the more frequent the misinformation from the positive bias source. However, this willingness to accept more misinformation from the positive bias source among the Tentative Believers increases in the strength of their own beliefs regarding the validity of the claim.
Since True Believers and Strong Believers choose the positive bias source regardless of its relative frequency of misinformation, and Tentative Believers choose the positive bias source as long as its relative frequency of misinformation is within their Acceptable Excess Misinformation (AEM) range (which itself is increasing in beliefs $\mu$), we can state the following proposition that fully characterizes choice regarding information source:

Proposition 4 For any given information source frequencies of misinformation $b_P$ and $b_N$, there exists a threshold belief $\mu_3(b_P) > \mu^*$ such that $\frac{\partial \mu_3(b_P)}{\partial b_P} > 0$ and:

1. If $\mu_3(b_P) \le \mu_2$, then an expected utility maximizing individual will choose the positive bias information source if and only if his beliefs $\mu \ge \mu_3(b_P)$.

2. If $\mu_3(b_P) > \mu_2$, then an expected utility maximizing individual will choose the positive bias information source if and only if his beliefs $\mu \ge \mu_2$.

Proof. In Appendix.

Propositions 1-4 are summarized graphically in Figure 2.
The above results also let us consider the evolution of each individual's tolerance for bias. In particular, they imply that the longer an individual continues to act on a claim, the more misinformation he is willing to tolerate from the information source biased toward the claim relative to a source biased against the claim.

To see why, consider an individual who has acted on the claim for $n-1$ periods. If he was a True Believer at the beginning of the $(n-1)$th period, then he is either still a True Believer or at least a Strong Believer at the beginning of the $n$th period. Either way, he will tolerate more misinformation from the positively biased information source, since he will not switch to the alternately biased source regardless of the frequency of misinformation from the positive bias source. Alternatively, if he was a Strong Believer or Tentative Believer at the beginning of the $(n-1)$th period, then for him to have still acted on the claim in the $(n-1)$th period, he must have observed a positive signal from the positively biased source. Such information will cause his beliefs to strengthen. This may cause him to become a Strong Believer, or simply cause him to continue to be a Strong Believer. In either case he will tolerate more misinformation from the positively biased information source, since he will again not switch to the alternately biased source this period. Finally, if he was a Tentative Believer in the $(n-1)$th period, then the strengthening of his beliefs due to observing a positive signal (even from the positively biased source) will also cause him to be willing to accept more misinformation from the positive bias source, since the Acceptable Excess Misinformation function, $AEM(\mu)$, is increasing in beliefs $\mu$.
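The full choice rule of Proposition 4 can be operationalized numerically. The sketch below is my construction, not code from the paper: since $AEM(\mu)$ is continuous and increasing on $(\mu^*, \mu_1)$, the threshold $\mu_3(b_P)$ solving $AEM(\mu_3) = (b_P - b_N)/b_N$ can be found by bisection, and the effective cutoff is then $\min(\mu_3(b_P), \mu_2)$.

```python
def aem(mu, c, delta, gamma):
    # Equation (12); finite and increasing for mu between mu* and mu_1.
    return (mu * delta - c) / (
        (mu * (1 - gamma) + (1 - mu) * gamma) * c - mu * (1 - gamma) * delta)

def mu_3(b_P, b_N, c, delta, gamma, tol=1e-12):
    target = (b_P - b_N) / b_N
    lo = c / delta                                             # mu*
    hi = c * gamma / ((delta - c) * (1 - gamma) + c * gamma)   # mu_1
    while hi - lo > tol:  # bisection is valid since AEM is increasing here
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if aem(mid, c, delta, gamma) < target else (lo, mid)
    return (lo + hi) / 2

def choose_positive(mu, b_P, b_N, c, delta, gamma):
    """Proposition 4: actors pick the positive bias source iff
    mu >= min(mu_3(b_P), mu_2)."""
    num = c * ((1 - b_N) * gamma + b_N)
    mu2 = num / ((delta - c) * ((1 - b_N) * (1 - gamma) + b_N) + num)  # eq. (6)
    return mu >= min(mu_3(b_P, b_N, c, delta, gamma), mu2)

c, delta, gamma, b_N = 1.0, 2.0, 0.75, 0.15
for b_P in (0.16, 0.50, 0.80):
    print(b_P, round(mu_3(b_P, b_N, c, delta, gamma), 3))  # ~0.508, ~0.635, ~0.671
print(choose_positive(0.70, 0.80, b_N, c, delta, gamma))   # True: 0.70 > ~0.671
```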
4 Behavior of Information Sources
As discussed above, among the individuals for whom information is valuable (i.e., Strong Believers and Tentative Believers), less misinformation is better. Indeed, if an information source could credibly never report misinformation, the above results would go away for these two groups. But more interesting implications arise if one assumes some frequency of misinformation, $b > 0$, is inevitable from information sources. Given a source biased one way always reports some misinformation, how much misinformation would one biased in the opposite way want to choose?

On some dimensions, the answer to this question depends on the objective of the information source. On the one hand, the information source could be a profit maximizer, primarily caring about its number of consumers (assuming the marginal cost of production is negligible). On the other hand, the information source may have a different goal, such as maximizing the number of people acting in accordance with a given claim. Furthermore, depending on its time preferences, an information source may care about short-run profits or number of people acting on the claim, or long-run/steady state profits or number of people acting on the claim. Instead of assuming one of these to be the "correct" objective function for information sources, I use simulations of the basic model to show how the choice of frequency of misinformation interacts with all of these different objectives.
Before doing so, though, it is illustrative to first consider the tensions an information source encounters when facing an alternative source that is reporting with the minimal possible frequency of misinformation. For example, suppose the negative bias information source reports information with the minimum possible misinformation; how should the positive bias source respond? To answer this, it is useful to think about the trade-off the positive bias source is making by increasing its frequency of misinformation. From Proposition 4 we know that as the frequency of misinformation of the positive bias source, $b_P$, rises, so will $\mu_3(b_P)$. Therefore, as the positive bias source increases its frequency of misinformation, it will attract fewer Tentative Believers. Intuitively, as the positive bias source increases its frequency of misinformation, it will surpass the Acceptable Excess Misinformation threshold for more and more Tentative Believers. On the other hand, by increasing its frequency of misinformation, it is more likely to deliver a "$P$" signal, meaning it will be more likely to strengthen the beliefs of those who do choose to use it, making them more likely to continue to choose it again the next period. In a sense, by increasing its frequency of misinformation, it is trading off the number of initial users for the loyalty of those initial users. The following simulation results reveal that the latter will generally dominate the former under relatively generic parameterizations.
The following simulation results show the mean results of 10 repetitions of populations of 300 individuals, where initial beliefs for each individual are randomly drawn from a uniform distribution over [0,1]. In each case, $\Delta = 2$ and $c = 1$, implying that individuals act on the claim as long as their beliefs that the claim is true in that period exceed 0.5 (see Proposition 1). Moreover, it is assumed that the claim is false in actuality, and $\gamma$ is set equal to 0.75. This means that the likelihood of an information source observing a "positive" signal in any given period is 0.25, while the likelihood of an information source observing a "negative" signal in any given period is 0.75. Finally, the frequency of misinformation by the negative information source is assumed to be $b_N = 0.15$, meaning that instead of reporting the true underlying signal it simply reports a "negative" signal 15% of the time, which is assumed to be the lower bound on misinformation.
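The following is a minimal self-contained sketch of this simulation as I read the setup. The source-choice rule for those not acting on the claim (they stick with the negative bias source, by the symmetric argument, since $b_N < b_P$ here) and the within-period timing details are my assumptions rather than taken from the paper.

```python
import random

C, DELTA, GAMMA, B_N = 1.0, 2.0, 0.75, 0.15
MU_STAR = C / DELTA  # = 0.5
_M2 = C * ((1 - B_N) * GAMMA + B_N)
MU_2 = _M2 / ((DELTA - C) * ((1 - B_N) * (1 - GAMMA) + B_N) + _M2)  # eq. (6)

def aem(mu):
    # Equation (12); only evaluated for mu < MU_2 < MU_1, where it is finite.
    return (mu * DELTA - C) / (
        (mu * (1 - GAMMA) + (1 - mu) * GAMMA) * C - mu * (1 - GAMMA) * DELTA)

def update(mu, b, fab, report):
    """Bayes update for a source that fabricates signal `fab` with prob b."""
    def rep_prob(p_signal_P):  # prob of `report` given the P-signal probability
        p_P = (1 - b) * p_signal_P + (b if fab == "P" else 0.0)
        return p_P if report == "P" else 1.0 - p_P
    num = mu * rep_prob(GAMMA)                            # claim true
    return num / (num + (1 - mu) * rep_prob(1 - GAMMA))   # claim false

def simulate(b_P, rounds=10, n=300, reps=10, seed=0):
    rng = random.Random(seed)
    frac_pos = [0.0] * rounds  # fraction consuming the positive bias source
    frac_act = [0.0] * rounds  # fraction acting on the (false) claim
    for _ in range(reps):
        beliefs = [rng.random() for _ in range(n)]
        for t in range(rounds):
            for i, mu in enumerate(beliefs):
                acting = mu > MU_STAR
                # Actors follow Proposition 4; Tentative Believers compare
                # their AEM tolerance with the excess (b_P - b_N) / b_N.
                use_pos = acting and (mu >= MU_2 or (b_P - B_N) / B_N < aem(mu))
                signal = "P" if rng.random() < 1 - GAMMA else "N"  # claim false
                if use_pos:
                    report = "P" if rng.random() < b_P else signal
                    mu = update(mu, b_P, "P", report)
                else:
                    report = "N" if rng.random() < B_N else signal
                    mu = update(mu, B_N, "N", report)
                beliefs[i] = mu
                frac_pos[t] += use_pos / (n * reps)
                frac_act[t] += (mu > MU_STAR) / (n * reps)
    return frac_pos, frac_act

# Roughly mirrors the comparisons behind Figures 3 and 4: low, medium, high b_P.
for b_P in (0.16, 0.50, 0.80):
    pos, act = simulate(b_P)
    print(b_P, [round(a, 2) for a in act])
```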
Figure 3 shows how the fraction of the population choosing the positive bias information source evolves under three different frequencies of misinformation for the positive bias source: low ($b_P = 0.16$), medium ($b_P = 0.50$), and high ($b_P = 0.80$). As can be seen in the early rounds of Figure 3, the positive bias information source is initially consumed far more often when it is known to report misinformation relatively infrequently ($b_P = 0.16$). However, as the truth starts getting out due to this relatively infrequent reporting of misinformation, more and more individuals switch to the negative bias source. On the other hand, when reporting misinformation very frequently ($b_P = 0.80$), the fraction of individuals who consume the positive bias source stays relatively constant, so that after about five rounds of reporting, the fraction of individuals choosing to consume the positive bias source is roughly the same whether it reports misinformation relatively frequently or infrequently, and in later rounds a higher fraction of individuals choose the positive bias source when it is known to report misinformation very frequently rather than relatively infrequently.
This point is made even stronger in Figure 4, which shows the evolution of the fraction of the population acting on the claim under the three different frequencies of misinformation from the positive bias information source. As can be seen, up through about three rounds of reporting, the fraction of individuals acting on the claim is about the same regardless of the relative frequency of misinformation by the positive bias source. Intuitively, by reporting misinformation relatively infrequently, the positive bias source gets more consumers, but this is offset by the fact that these consumers are then more likely to hear information in conflict with the claim, causing them to switch sources, relative to what would have happened if the source reported positively biased misinformation more frequently. However, after round 4, the fraction of the population acting on the claim is increasing in the frequency of misinformation from the positive bias source. Indeed, after nine rounds of reporting, only about 15% of individuals continue to act on the (false) claim when the positive bias source chooses to report misinformation with close to the minimal frequency. On the other hand, by reporting with a very high frequency of misinformation, the positive bias information source is able to keep almost 30% of the population acting on the claim (even though it is actually false).
These results suggest that if an information source biased toward the correct side of the claim is reporting misinformation with the minimal frequency, but this minimal frequency is still significant, an information source biased toward the other side of the claim will generally maximize the long-run number of individuals consuming that source, and the number of individuals who continue to act in accordance with that source's bias, by reporting misinformation with a very high frequency. As a point of comparison, the dotted grey line in Figure 4 shows the evolution of the fraction of the population acting on the claim if individuals themselves could simply access and interpret the raw unbiased information rather than go through a biased misinformation intermediary. As can be seen, after 10 rounds, just over 5% of the population would continue to act on the claim. This is less than one-fifth of the fraction that would be acting on the claim when the negative bias source reports misinformation with the minimal possible frequency ($b_N = 0.15$) and the positive bias source reports misinformation with high frequency ($b_P = 0.80$).
As a counterpoint, it is also instructive to consider different choices the negative bias information source could make regarding its own frequency of misinformation when the positive bias information source reports biased misinformation quite frequently. The simulations summarized in Figures 5 and 6 strongly suggest that the negative bias information source suffers in the long run if it gets into a misinformation arms race. In particular, the simulations underlying Figures 5 and 6 are similar to those described above, but the frequency of misinformation of the positive bias source is fixed at bP = 0.80, and three different frequencies of misinformation are considered for the negative bias source: low (bN = 0.15), medium (bN = 0.50), and high (bN = 0.75).
As can be seen in Figure 5, the fraction of the population consuming the positive bias source (and therefore not consuming the negative bias source) is always higher when the negative bias source chooses to report misinformation frequently rather than attempting to minimize its misinformation. In other words, by responding to frequent misinformation from the positive bias side with a lot of oppositely biased misinformation of its own, the negative bias side loses consumers relative to what would happen if it had tried to report with as little misinformation as possible. Similarly, as can be seen in Figure 6, when the positive bias source reports misinformation with high frequency, the fraction of the population acting on the claim is always higher when the negative bias source also chooses to report misinformation with high frequency (bN = 0.75) than when it chooses to report misinformation with the lowest frequency possible (bN = 0.15).
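Under the same illustrative assumptions as the sketch above, this arms-race comparison can be mimicked by fixing bP = 0.80 and varying bN; whether the toy model reproduces the paper's magnitudes depends entirely on the assumed parameters.

```python
# Hypothetical arms-race comparison: hold the positive source's
# misinformation frequency fixed and vary the negative source's response.
for b_neg in (0.15, 0.50, 0.75):
    path = simulate(b_pos=0.80, b_neg=b_neg, seed=42)
    print(f"bN = {b_neg:.2f}: fraction acting after round {len(path)} = {path[-1]:.3f}")
```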
Moreover, again comparing these results in Figure 6 to the dotted grey line showing the fraction of the population acting on the claim if individuals could simply access and interpret the raw unbiased information themselves rather than go through a biased intermediary, we can see that misinformation in reporting from the available information sources can lead to up to seven times more individuals continuing to act on the (false) claim after 10 periods.
Finally, Figure 7 confirms the relatively intuitive result that the presence of misinformation from information sources can exacerbate the divergence in beliefs between those who act on the claim and those who do not. Specifically, the solid lines in Figure 7 show the average beliefs of those who do and do not act on the claim in simulations where the positive bias source reports misinformation with high frequency (bP = 0.80) and the negative bias source reports misinformation with low frequency (bN = 0.15). These lines can be compared to the dashed lines in Figure 7, which show the average beliefs of those who do and do not act on the claim in simulations where individuals can directly observe and interpret the raw unbiased information from nature themselves rather than having to go through one of the biased intermediary sources. While the average beliefs of these two groups diverge under both simulations, the divergence is greater in a world where biased misinformation is inevitable.
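In the sketch above, this divergence corresponds to tracking the average belief within the acting and non-acting groups each round; for example, inside the round loop one could record (variable names refer to the hypothetical sketch, not the paper):

```python
# Average beliefs of actors vs. non-actors (guarding against empty groups).
actors = [p for p in beliefs if p >= PI_STAR]
others = [p for p in beliefs if p < PI_STAR]
avg_acting = sum(actors) / len(actors) if actors else float("nan")
avg_not_acting = sum(others) / len(others) if others else float("nan")
```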
In general, these simulation results suggest that if both information sources care about the long-run number of users and/or the long-run number of people acting in a manner consistent with each source's bias, the source biased toward the correct side of the claim will find it optimal to report misinformation with the minimal possible frequency, while the source biased toward the incorrect side of the claim will find it optimal to report misinformation with a very high frequency. Moreover, in such an environment, as time passes, those acting on each side of the claim will become more and more convinced that their actions are correct and that the actions of the other side are wrongheaded and hard to fathom.
However, two important caveats arise with respect to these simulations. First, presumably the information sources do not know the actual truth about the claim, since even they only observe noisy signals regarding its veracity. This may not be a very big issue, however, since given enough time each information source should observe enough signals to be fairly sure whether the claim is true or false (as can be inferred, for example, from the dotted grey line in Figures 4 and 6). Second, and arguably more importantly, if consumers know the frequency with which each source reports biased misinformation (as assumed here), and they observe that one side reports misinformation more frequently than the other, then this constitutes new information which they should use to update their beliefs. This is not incorporated into the present model, meaning these simulations implicitly consider only semi-rational consumers: they act rationally given their beliefs, and they update their beliefs correctly given what they know about the information sources' biases, but they do not consider the strategic decision-making that determines the extent to which each information source delivers misinformation. More fully modeling the interaction between information source decisions and consumer beliefs constitutes an important extension of this work.
5 Summary and Conclusion
Why would someone choose to inform himself about an important issue from an information source he knew lied to him more often than other available sources? Or, as termed in this paper, why would someone be willfully ignorant? As this paper argues, such behavior is actually optimal for a Bayesian expected utility maximizing individual when all information sources report biased misinformation with at least some frequency. Intuitively, either an individual's beliefs are such that further information simply isn't valuable in the sense of affecting behavior (the True Believers), or information from the oppositely biased source is less valued, either because it would be sufficiently discounted so as not to be able to change behavior (Strong Believers), or because the individual is more concerned about misinformation causing him not to act on the claim when it is true than about misinformation causing him to act on the claim when it is actually false (Tentative Believers).
One thing that comes out of this analysis is that optimal behavior for rational individuals and information sources may not be optimal for society as a whole. Specifically, if a claim is false, and acting on the claim is costly to individuals and/or society at large, then the greater the frequency of misinformation from the source biased toward the claim, the greater will be the number of people who continue to act on the claim, thereby increasing the total societal inefficiency. In other words, “free speech” is not free.
By concealing or misrepresenting the scientific findings regarding vaccines, anti-vaccination information sources increase the number of unvaccinated children and thereby threaten the health of others. By overstating the harmful effects of marijuana, anti-drug advocates may be inducing more individuals to use it. By overstating the benefits of mammograms, breast cancer awareness groups may be causing excessive medical interventions with few benefits. By surrounding himself with advisors who were overly zealous in reporting that Saddam Hussein was producing and using weapons of mass destruction, United States President George W. Bush gave orders to invade Iraq. While some may argue that the war in Iraq was in fact the correct foreign policy decision, such an assessment certainly cannot be made on the grounds of confiscating weapons of mass destruction.
A final thing to come out of this analysis is a consideration of how information sources should determine their frequency of misinformation when a little misinformation is inevitable. On the one hand, the results suggest the rather unhopeful conclusion that if some misinformation in reporting is unavoidable, then even if one side tries to report misinformation to the minimal extent possible, advocates of the other side may find it optimal to respond with frequent reporting of misinformation (leading to the types of situations described in the preceding paragraph). On the other hand, the analysis also suggests that it is generally not helpful to fight gross misinformation from one side with further misinformation of your own. Rather than engaging in such an arms race, it is better to try to deliver information with as little bias as possible (if you are indeed on the correct side). However, the analysis also shows that in a world of biased misinformation, as time goes on, those acting on each side of the claim will become ever more convinced that they are right and that those on the other side are wrong. This means that as time passes, those acting on one side of a controversial claim will become more and more convinced that the actions and beliefs of those acting on the other side are simply “crazy,” and moreover, those on each side will be willing to tolerate more and more misinformation from their information source of choice.
In general, this paper shows that there are significant costs associated with the reality, or even the perception, that unbiased information is not possible. To the extent that biased misinformation can be mitigated, society will likely engage in less inefficient behavior.
6 Appendix - Proof of Proposition 4
As shown in the text, all those whose beliefs exceed π2* (i.e., True Believers and Strong Believers) will find it optimal to choose the positive bias information source regardless of bP relative to bN. Moreover, as shown in the text, those whose beliefs are between π* and π2* (i.e., Tentative Believers) will find it optimal to choose the positive bias source as long as the percentage difference between bP and bN is less than or equal to their Acceptable Excess Misinformation (AEM(π)). As can be confirmed from equation (12), as beliefs approach π* from above, AEM(π) converges to zero. However, as also can be confirmed from equation (12), AEM(π) is continuous and strictly increasing in π. Therefore, for any given bN and bP such that 0 < bN < bP < 1 and (bP − bN)/bN < 1, the intermediate value theorem implies that there must exist a π3*(bP) such that AEM(π3*(bP)) = (bP − bN)/bN and AEM(π) > (bP − bN)/bN for all π > π3*(bP).
This in turn means that if, for a given bP and bN, π3*(bP) ≤ π2*, then for all those whose beliefs exceed π3*(bP), the percentage difference between bP and bN will be less than their Acceptable Excess Misinformation (AEM(π)), meaning they will also find it optimal to choose the positive bias source, while for those whose beliefs are less than π3*(bP), the percentage difference between bP and bN will be greater than their Acceptable Excess Misinformation (AEM(π)), meaning they will find it optimal to choose the negative bias source.
On the other hand, if, for a given bP and bN, π3*(bP) > π2*, then the percentage difference between bP and bN will be greater than the Acceptable Excess Misinformation (AEM(π)) for all those for whom π* < π < π2*, meaning all Tentative Believers will find it optimal to choose the negative bias source.
Finally, recall that π3*(bP) was defined to be the value of π such that (bP − bN)/bN = AEM(π3*(bP)), which we can re-write as bN = bP / (AEM(π3*(bP)) + 1). Again noting that AEM(π) is strictly increasing in π, we know from this equation that for any fixed bN, π3*(bP) must increase with bP.
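For completeness, the algebra behind this last rearrangement can be written out explicitly (a sketch, with AEM(π) as defined by equation (12) earlier in the paper):

```latex
\mathrm{AEM}\left(\pi_3^*(b_P)\right) = \frac{b_P - b_N}{b_N}
\;\Longleftrightarrow\;
b_N\left[\mathrm{AEM}\left(\pi_3^*(b_P)\right) + 1\right] = b_P
\;\Longleftrightarrow\;
b_N = \frac{b_P}{\mathrm{AEM}\left(\pi_3^*(b_P)\right) + 1}.
```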
References
[1] Baron, David. (2006). “Persistent Media Bias.” Journal of Public Economics 90: 1-36.
[2] Benabou, Roland and Jean Tirole. (2002). “Self-Confidence and Personal Motivation.” Quarterly Journal of Economics 117(3): 871-915.
[3] Blomberg, S. Brock, and Joseph Harrington. (2000). “A Theory of Flexible Moderates and Rigid Extremists with an Application to U.S. Congress.” American Economic Review 90: 605-620.
[4] Calvert, Randall. (1985). “The Value of Biased Information.” The Journal of Politics 47(2): 530-555.
[5] Carrillo, Juan D. and Thomas Mariotti. (2000). “Strategic Ignorance as a Self-Disciplining Device.” Review of Economic Studies 67: 529-544.
[6] Crawford, Vincent P., and Joel Sobel. (1982). “Strategic Information Transmission.” Econometrica 50: 1431-1451.
[7] Dal Bo, Ernesto and Marco Tervio. (2008). “Self-Esteem, Moral Capital, and Wrongdoing.” NBER Working Paper 14508.
[8] DellaVigna, Stefan and Ethan Kaplan. (2007). “The Fox News Effect: Media Bias and Voting.” Quarterly Journal of Economics 122(3): 1187-1234.
[9] Eil, David and Justin M. Rao. (2011). “The Good News-Bad News Effect: Asymmetric Processing of Objective Information about Yourself.” American Economic Journal: Microeconomics 3: 114-138.
[10] Fredrickson, Doren D., Terry C. Davis, Connie L. Arnold, Estela M. Kennen, Sharon G. Humiston, J. Thomas Cross, and Joseph A. Bocchini Jr. (2004). “Childhood Immunization Refusal: Provider and Parental Perceptions.” Family Medicine 36(6): 431-439.
[11] Gentzkow, Matthew and Jesse M. Shapiro. (2006). “Media Bias and Reputation.” Journal of Political Economy 114(2): 280-316.
[12] Gentzkow, Matthew and Jesse M. Shapiro. (2010). “What Drives Media Slant? Evidence from U.S. Daily Newspapers.” Econometrica 78(1): 35-71.
[13] Groseclose, Tim, and Jeffrey Milyo. (2005). “A Measure of Media Bias.” Quarterly Journal of Economics 120(4): 1191-1237.
[14] Karlsson, Niklas, George Loewenstein, and Duane Seppi. (2009). “The Ostrich Effect: Selective Attention to Information.” Journal of Risk and Uncertainty 38: 95-115.
[15] Kopczuk, Wojciech and Joel Slemrod. (2005). “Denial of Death and Economic Behavior.” Advances in Theoretical Economics 5(1): Article 5.
[16] Koszegi, Botond. (2006). “Ego Utility, Overconfidence, and Task Choice.” Journal of the European Economic Association 4(4): 673-707.
[17] Mullainathan, Sendhil and Andrei Shleifer. (2005). “The Market for News.” American Economic Review 95(4): 1031-1053.
[18] Pew Research Center. (2009). “Public Evaluations of the News Media: 1985-2009.”
[19] Pew Research Center. (2010). “Ideological News Sources: Who Watches and Why?”
[20] Prendergast, Canice. (1993). “A Theory of Yes Men.” American Economic Review 83(4): 757-770.
[21] Rabin, Matthew, and Joel L. Schrag. (1999). “First Impressions Matter: A Model of Confirmatory Bias.” Quarterly Journal of Economics 114(1): 37-82.
[22] Smith, Philip J., Susan Y. Chu, and Lawrence E. Barker. (2004). “Children Who Have Received No Vaccines: Who Are They and Where Do They Live?” Pediatrics 114(1): 187-195.
[23] Suen, Wing. (2004). “The Self-Perpetuation of Biased Beliefs.” Economic Journal 114: 377-396.
[24] Willingham, Emily. (2013). “Court Rulings Don’t Confirm Autism-Vaccine Link.” http://www.forbes.com/sites/emilywillingham, August 9, 2013.
[25] Wolfe, Robert M., Lisa K. Sharp, and Martin Lipsky. (2002). “Content and Design Attributes of Antivaccination Web Sites.” Journal of the American Medical Association 287(24): 3245-3248.
Fig 1: Summary of “types”
[Diagram: possible beliefs entering a period (π) arrayed on the unit interval from 0 to 1, with thresholds π* < π2* < π1*. Individuals with π below π* will not act on the claim; those above π* will act. Actors subdivide into Tentative Believers (π* < π < π2*), Strong Believers (π2* < π < π1*), and True Believers (π > π1*).]
Notes: Tentative Believers will cease acting on the claim this period upon hearing a negative signal from either source. Strong Believers will cease acting on the claim this period upon hearing a negative signal from the positive bias source but not the negative bias source. True Believers will not cease acting on the claim this period regardless of what information they hear from either source.

Fig 2: Graphical Summary of Propositions 1-4 (for bP > bN)
[Diagram, two panels over the same belief interval. If π3*(bP) < π2*: those with π below π3*(bP) choose the negative bias source and those above it choose the positive bias source, so the Tentative Believers with the strongest beliefs join the Strong and True Believers in choosing the positive bias source. If π3*(bP) > π2*: all Tentative Believers choose the negative bias source, while Strong and True Believers choose the positive bias source.]
Fig 3: Fraction of Population Consuming Positive Bias Source (negative bias source with low bias: bN = 0.15)
[Line chart; x-axis: Round (1-9); y-axis: fraction (0 to 0.60); one series per bias of the positive bias source: low (bP = 0.16), medium (bP = 0.5), high (bP = 0.8).]
Fig 4: Fraction of Population Acting on Claim (negative bias source with low bias: bN = 0.15)
[Line chart; x-axis: Round (1-10); y-axis: fraction (0 to 0.60); one series per bias of the positive bias source: low (bP = 0.16), medium (bP = 0.5), high (bP = 0.8), plus an unbiased source benchmark.]
Fig 5: Fraction of Population Consuming Positive Bias Source (positive bias source with high bias: bP = 0.80)
[Line chart; x-axis: Round (1-9); y-axis: fraction (0 to 0.50); one series per bias of the negative bias source: low (bN = 0.15), medium (bN = 0.5), high (bN = 0.75).]
Fig 6: Fraction of Population Acting on Claim (positive bias source with high bias: bP = 0.80)
[Line chart; x-axis: Round (1-10); y-axis: fraction (0 to 0.60); one series per bias of the negative bias source: low (bN = 0.15), medium (bN = 0.5), high (bN = 0.75), plus an unbiased source benchmark.]
Fig 7: Evolution of Beliefs
[Line chart; x-axis: Round (1-10); y-axis: average belief (0 to 1.00); series: those acting on the claim (biased world), those not acting on the claim (biased world), those acting on the claim (unbiased world), those not acting on the claim (unbiased world).]