
Confounded Observational Learning with Common Values
Xuanye (UT Austin)
June 30, 2017
Abstract
We modify the standard herding model so that a fraction of players are naive and rely exclusively on private information. The remaining players are rational but uncertain about the proportion of naive players. We find that learning can be confounded in the long run, even when private signal strength is unbounded.
This paper examines the standard model of observational learning by Bayesian rational
players with common values, pioneered by Banerjee (1992) and Bikhchandani et al. (1992).
Nature chooses the state of the world once and for all, and every player wants to choose the
action that matches the state. We modify this model so that a fraction of the players are
naive or “lone wolves” and rely exclusively on their own private signals, rather than public information. Evidence that some players overweight their private signals comes from several experimental studies; see Duffy et al. (2016), Weizsäcker (2010), and Ziegelmeyer et al. (2013). When the proportion of lone wolves is commonly known, their presence ensures that
social learning never stops. This permits the rational players to eventually learn the true
state of the world, even when private signals are boundedly informative. What happens to
social learning when rational players are uncertain about the proportion of lone wolves, and
have a common prior on this proportion, as in a standard Bayesian game? Our key finding
is that confounded learning is a possibility, even though players have common values, and
agree on the action to be taken in each state of the world. That is, after some histories,
rational players will forever choose both actions with positive probability, as a function of
their private signal. Since uncertainty is two-dimensional – players are uncertain about the
state as well as the proportion of lone wolves – history may fail to allow them to disentangle
the two components separately. We also show that there will either be complete learning or
confounded learning – incorrect learning is impossible in our Bayesian game.
While there is an extensive literature on observational learning, our paper is most closely
related to the papers by Smith and Sørensen (2000) (hereafter, simply SS) and Bohren
(2016). SS allow for heterogeneous preferences and were the first to show that confounded
learning is possible in herding models when players have divergent preferences.[1] A fraction
of players would like their action to match the state, while the remaining fraction prefer to
mismatch action and state. While our analysis utilizes many of their techniques and insights,
our underlying economic environment giving rise to confounded learning is very different –
every player would like her action to match the state. Bohren allows for lone wolves, and
assumes that rational players have wrong but certain beliefs about the proportion of lone
wolves. She finds that if the beliefs are not too wrong, asymptotic learning is ensured, but
for larger errors, beliefs may assign probability zero to the true state or fail to converge. Our
results are very different (there cannot be incorrect learning, and there can be confounded
learning), since rational players use history to revise their beliefs about the true proportion of lone wolves.

[1] Easley and Kiefer examine individual learning (rather than social learning) and find that confounded learning is possible, though only for non-generic parameters.
Our paper is also related to the literature on observational learning with misspecified models. Eyster and Rabin (2010) allow every rational player to mistakenly think that other players are naive. They find that incorrect herding can occur even with a continuum of actions and unbounded signals. Bohren and Hauser (2017) allow each player to have a general misspecification of how other players process information. Our paper can be viewed as an attempt to answer what happens if players can learn about, and correct, their misspecified model.
1 Model
At $t = 0$, nature moves and chooses the economic fundamental from the set $\{A, B\}$, and the proportion of lone wolves, $p$, from $\{p_L, p_H\}$ ($0 < p_L < p_H < 1$). For simplicity, we assume each element of $\{A, B\} \times \{p_L, p_H\}$ is chosen with equal probability. No player observes nature's choice. There is a countable infinity of players, indexed by $t \in \mathbb{N}$, who must choose, in sequence, either action $a$ or action $b$. A player gets a payoff of 1 if her action matches the economic fundamental, and zero otherwise.
The player's public information is as follows. With constant probability $p$, player $t$ is a lone wolf and does not observe the actions of any previous players. With probability $1 - p$, she observes the actions of all previous players.
Both types of players observe a private signal, whose distribution is state-dependent and i.i.d. across players. Following SS, we identify a player's private signal with her posterior belief after observing the signal, and use $s_t$ to denote the private posterior probability that player $t$ assigns to the economic fundamental being $A$. The sequence $\langle s_t \rangle$ is conditionally i.i.d. with distribution denoted $F^\omega(s)$, $\omega \in \{A, B\}$. We assume signal strength is unbounded, that is, $\mathrm{supp}(F^A) = \mathrm{supp}(F^B) = (0, 1)$. With unbounded signal strength, a player can become arbitrarily sure about either state after observing her private signal. We also assume $f^\omega(s) > 0$ for all $\omega \in \{A, B\}$ and $s \in (0, 1)$. This guarantees the standard assumption that no private signal fully reveals the underlying state.
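Because each $s_t$ is itself a posterior formed from the uniform prior over fundamentals, the two signal densities are tied together by Bayes' rule: $s = \Pr(A \mid s) = \frac{f^A(s)}{f^A(s) + f^B(s)}$, which rearranges to
$$\frac{f^B(s)}{f^A(s)} = \frac{1 - s}{s}, \qquad s \in (0, 1).$$
This identity (SS's Lemma A1(a)) is used repeatedly below. For instance, $f^A(s) = 2s$ and $f^B(s) = 2(1 - s)$ satisfy it together with the full-support assumption; these are the primitives suggested for Lemma 4 below.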
2 Main Result
There are four potential states in total. For brevity, denote $z_{AL} = \{A, p_L\}$, $z_{AH} = \{A, p_H\}$, $z_{BL} = \{B, p_L\}$, $z_{BH} = \{B, p_H\}$. Without loss of generality, we assume the true state is $z_{AL}$; all arguments apply directly to the other cases.
For a rational player who observes history up to period $t - 1$, her public belief can be summarized by a triple of likelihood ratios $\pi_t = \langle \lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH} \rangle$. Here $\lambda_t^{AH} = \frac{\Pr(z_{AH} \mid h_{t-1})}{\Pr(z_{AL} \mid h_{t-1})}$ is the conditional likelihood ratio of state $z_{AH}$ over the true state $z_{AL}$, and similarly for $\lambda_t^{BL}$ and $\lambda_t^{BH}$. Furthermore, the public likelihood ratio of economic fundamental $B$ over $A$, after observing $h_{t-1}$, is given by
$$\lambda_t = \frac{\Pr(B \mid h_{t-1})}{\Pr(A \mid h_{t-1})} = \frac{\lambda_t^{BL} + \lambda_t^{BH}}{1 + \lambda_t^{AH}}.$$
If player $t$ is rational, then she will choose action $a$ if and only if $\frac{\Pr(B \mid s_t, h_{t-1})}{\Pr(A \mid s_t, h_{t-1})} = \frac{1 - s_t}{s_t} \lambda_t < 1$, which is equivalent to $s_t > \frac{\lambda_t}{\lambda_t + 1}$.
If player $t$ is a lone wolf, then she does not observe the public history $h_{t-1}$. She will choose action $a$ if and only if $\frac{\Pr(B \mid s_t)}{\Pr(A \mid s_t)} = \frac{1 - s_t}{s_t} < 1$, which is equivalent to $s_t > \frac{1}{2}$.
Let $\psi(\alpha \mid z_i, \pi_t)$ be the probability of observing action $\alpha \in \{a, b\}$ under state $z_i$ and public belief $\pi_t$. We can explicitly write out $\psi(\alpha \mid z_i, \pi_t)$ for each $i \in \{AH, BL, BH\}$. For example,
$$\psi(b \mid z_{AH}, \pi_t) = p_H F^A\left(\frac{1}{2}\right) + (1 - p_H) F^A\left(\frac{\lambda_t}{\lambda_t + 1}\right).$$
We observe that this probability depends only on the likelihood ratio of economic fundamentals $\lambda_t$, hence $\psi(\alpha \mid z_i, \pi_t) = \psi(\alpha \mid z_i, \lambda_t)$.
Therefore, after observing player $t$'s action $\alpha \in \{a, b\}$, the likelihood ratios are Bayesian-updated to
$$\lambda_{t+1}^i(\alpha) = \lambda_t^i \, \frac{\psi(\alpha \mid z_i, \pi_t)}{\psi(\alpha \mid z_{AL}, \pi_t)}. \qquad (1)$$
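As a concrete illustration, the following is a minimal Python sketch of the one-step map (1). The signal structure ($F^A(s) = s^2$, $F^B(s) = 2s - s^2$, i.e. $f^A(s) = 2s$, $f^B(s) = 2(1 - s)$) and the values $(p_L, p_H) = (0.2, 0.8)$ are assumptions borrowed from the primitives suggested for Lemma 4 below, and all function names are ours:

```python
# A minimal sketch of the one-step belief update, equation (1).
# Assumed primitives (not pinned down by the model itself):
#   F^A(s) = s^2, F^B(s) = 2s - s^2, and (p_L, p_H) = (0.2, 0.8).

F = {"A": lambda s: s ** 2, "B": lambda s: 2 * s - s ** 2}

# Each state z_i is a pair (economic fundamental, proportion of lone wolves).
STATES = {"AL": ("A", 0.2), "AH": ("A", 0.8),
          "BL": ("B", 0.2), "BH": ("B", 0.8)}

def psi_b(state, lam):
    """Probability of action b in `state`, given the public likelihood ratio
    lam of fundamental B over A: lone wolves (share p) use the cutoff 1/2,
    rational players use the cutoff lam / (lam + 1)."""
    omega, p = STATES[state]
    return p * F[omega](0.5) + (1 - p) * F[omega](lam / (lam + 1))

def update(pi, action):
    """Equation (1): pi = (lam_AH, lam_BL, lam_BH) after one observed action."""
    lam_AH, lam_BL, lam_BH = pi
    lam = (lam_BL + lam_BH) / (1 + lam_AH)      # public ratio of B over A
    def psi(state):
        pb = psi_b(state, lam)
        return pb if action == "b" else 1 - pb
    d = psi("AL")                               # true state in the denominator
    return (lam_AH * psi("AH") / d,
            lam_BL * psi("BL") / d,
            lam_BH * psi("BH") / d)

# From the flat prior (1, 1, 1) we have lam = 1, so the rational and lone-wolf
# cutoffs coincide: an observed action a is evidence for fundamental A
# (lam_BL and lam_BH fall to 1/3) but says nothing about p (lam_AH stays 1).
print(update((1.0, 1.0, 1.0), "a"))
```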
As is standard in the literature, we say learning is complete if the public belief assigns all weight to the true state $z_{AL}$ in the long run, that is, if $\langle \pi_t \rangle$ converges to $\langle 0, 0, 0 \rangle$. Otherwise, learning is incomplete.
As is standard in the literature, in defining the likelihood ratios we intentionally put the probability associated with the true state $z_{AL}$ in the denominator. It is immediate that for $i \in \{AH, BL, BH\}$,
$$E[\lambda_{t+1}^i \mid \pi_t, z_{AL}] = \lambda_t^i. \qquad (2)$$
Therefore, $\lambda_t^i$ is a conditional martingale. By the standard argument, $\lambda_t^i$ converges almost surely to a finite random variable $\lambda_\infty^i$ by the martingale convergence theorem.
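To verify (2), note that under the true state $z_{AL}$ action $\alpha$ is observed with probability $\psi(\alpha \mid z_{AL}, \pi_t)$, so the update (1) telescopes:
$$E[\lambda_{t+1}^i \mid \pi_t, z_{AL}] = \sum_{\alpha \in \{a, b\}} \psi(\alpha \mid z_{AL}, \pi_t) \, \lambda_t^i \, \frac{\psi(\alpha \mid z_i, \pi_t)}{\psi(\alpha \mid z_{AL}, \pi_t)} = \lambda_t^i \sum_{\alpha \in \{a, b\}} \psi(\alpha \mid z_i, \pi_t) = \lambda_t^i.$$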
Theorem B.2 in SS characterizes the limit points of the process $\langle \lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH} \rangle$. It claims that if the process converges to a limit $\pi_\infty = \langle \pi_{AH}, \pi_{BL}, \pi_{BH} \rangle$ with positive probability, then any action that is observable under the limit belief must keep the limit belief unchanged. Because lone wolves never herd, both actions are observable in the long run. Therefore, if $\pi_\infty$ can be a limit belief, then
$$\pi_i = \pi_i \, \frac{\psi(\alpha \mid z_i, \pi_\infty)}{\psi(\alpha \mid z_{AL}, \pi_\infty)} \qquad (3)$$
for all $i \in \{AH, BL, BH\}$ and $\alpha \in \{a, b\}$. Furthermore, the martingale convergence theorem requires $\pi_i < \infty$ for all $i$.
Obviously, $\pi_i = 0$ for all $i$ is a potential candidate for the limit belief; it assigns all weight to the true state $\{A, p_L\}$, and learning is complete under such a limit belief. Any other limit point must contain at least one non-zero component and represents incomplete learning. From equation (3), if $\pi_i \neq 0$, then $\psi(\alpha \mid z_i, \pi_\infty) = \psi(\alpha \mid z_{AL}, \pi_\infty)$ for $\alpha \in \{a, b\}$. That is, both actions must be equally likely to be observed in states $z_i$ and $z_{AL}$ under the limit belief $\pi_\infty$. However, this is not always possible. For example, states $z_{BL}$ and $z_{AL}$ specify the same lone-wolf population but different economic fundamentals, and action $b$ is strictly more likely under state $z_{BL}$ for every potential limit belief. The following lemma states that only $\pi_{BH}$ can be non-zero. That is, if learning is incomplete, then positive weight can only be assigned to the true state and to the state that specifies both the wrong economic fundamental and the wrong lone-wolf population.
Lemma 1 If $\pi_t$ converges to $\pi_\infty = \langle \pi_{AH}, \pi_{BL}, \pi_{BH} \rangle$ with positive probability, then $\pi_{AH} = \pi_{BL} = 0$; $\pi_{BH}$ is either 0 or solves
$$(1 - p_L) F^A\left(\frac{\pi_{BH}}{1 + \pi_{BH}}\right) + p_L F^A\left(\frac{1}{2}\right) = (1 - p_H) F^B\left(\frac{\pi_{BH}}{1 + \pi_{BH}}\right) + p_H F^B\left(\frac{1}{2}\right). \qquad (4)$$
Proof. If $\pi_{AH} \neq 0$, then according to equation (3), $\psi(b \mid z_{AH}, \pi_\infty) = \psi(b \mid z_{AL}, \pi_\infty)$, which is
$$p_L F^A\left(\frac{1}{2}\right) + (1 - p_L) F^A\left(\frac{\lambda_\infty}{\lambda_\infty + 1}\right) = p_H F^A\left(\frac{1}{2}\right) + (1 - p_H) F^A\left(\frac{\lambda_\infty}{\lambda_\infty + 1}\right).$$
Here $\lambda_\infty = \frac{\pi_{BL} + \pi_{BH}}{1 + \pi_{AH}}$. Because $p_H > p_L$, the above equation holds only if $\lambda_\infty = 1$ (it reduces to $F^A(\frac{1}{2}) = F^A(\frac{\lambda_\infty}{\lambda_\infty + 1})$). It is direct to check that $\lambda_\infty = 1$ implies $\psi(b \mid z_{BL}, \pi_\infty) \neq \psi(b \mid z_{AL}, \pi_\infty)$ and $\psi(b \mid z_{BH}, \pi_\infty) \neq \psi(b \mid z_{AL}, \pi_\infty)$. Applying equation (3) again gives $\pi_{BL} = \pi_{BH} = 0$. This contradicts $\lambda_\infty = 1$.
If $\pi_{BL} \neq 0$, then $\psi(b \mid z_{BL}, \pi_\infty) = \psi(b \mid z_{AL}, \pi_\infty)$, which is
$$p_L F^B\left(\frac{1}{2}\right) + (1 - p_L) F^B\left(\frac{\lambda_\infty}{\lambda_\infty + 1}\right) = p_L F^A\left(\frac{1}{2}\right) + (1 - p_L) F^A\left(\frac{\lambda_\infty}{\lambda_\infty + 1}\right).$$
This equation is never true, because $F^B(c) > F^A(c)$ for any $c \in (0, 1)$; see SS Lemma A.1(c) for a proof. Intuitively, if a player's decision rule is to choose action $b$ when her private signal satisfies $s < c$, then the probability of choosing action $b$ under state $B$ must be larger than the probability of choosing action $b$ under state $A$.
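For completeness, here is a quick way to see this inequality from the density-ratio identity recorded in Section 1, rather than by citation: since $f^B(s) = \frac{1 - s}{s} f^A(s)$,
$$F^B(c) - F^A(c) = \int_0^c \frac{1 - 2s}{s} f^A(s) \, ds,$$
and the integrand is positive on $(0, \frac{1}{2})$ and negative on $(\frac{1}{2}, 1)$. The difference therefore increases up to $c = \frac{1}{2}$ and decreases afterwards; since it vanishes at $c = 1$ (both are cdfs), it is strictly positive on all of $(0, 1)$.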
If $\pi_{BH} \neq 0$, then $\psi(b \mid z_{BH}, \pi_\infty) = \psi(b \mid z_{AL}, \pi_\infty)$, which is equation (4).
Therefore, if equation (4) has no non-zero solution, then $\pi_t$ must converge to $\langle 0, 0, 0 \rangle$ and learning is complete. To understand when incomplete learning can happen, we need to understand when equation (4) has a solution. The following lemma gives an if-and-only-if condition for equation (4) to have a unique non-zero solution.
Lemma 2 Equation (4) has a unique non-zero solution if and only if
$$p_L F^A\left(\frac{1}{2}\right) + (1 - p_L) > p_H F^B\left(\frac{1}{2}\right) + (1 - p_H). \qquad (5)$$
Proof. If the rational players' decision rule is given by cutoff $p$ (choose action $b$ iff $s < p$), then the probability of observing action $b$ under state $z_{AL}$ is $\Pr(b \mid z_{AL}, p) = p_L F^A(\frac{1}{2}) + (1 - p_L) F^A(p)$. Similarly, $\Pr(b \mid z_{BH}, p) = p_H F^B(\frac{1}{2}) + (1 - p_H) F^B(p)$.

When the cutoff $p$ is 0 (rational players herd on action $a$),
$$\Pr(b \mid z_{AL}, 0) = p_L F^A\left(\frac{1}{2}\right) < \Pr(b \mid z_{BH}, 0) = p_H F^B\left(\frac{1}{2}\right).$$
This is not surprising, because state $z_{BH}$ specifies more naive players and economic fundamental $B$, both of which favor action $b$.

As the cutoff $p$ increases, the marginal increases in the probability of action $b$ are
$$\frac{d}{dp} \Pr(b \mid z_{AL}, p) = (1 - p_L) f^A(p), \qquad \frac{d}{dp} \Pr(b \mid z_{BH}, p) = (1 - p_H) f^B(p).$$
Because $\frac{f^B(p)}{f^A(p)} = \frac{1 - p}{p}$ (see SS Lemma A1(a)), we have $\frac{d}{dp} \Pr(b \mid z_{BH}, p) > \frac{d}{dp} \Pr(b \mid z_{AL}, p)$ if and only if $(1 - p_H)\frac{1 - p}{p} > 1 - p_L$, that is, if and only if $p < \frac{1 - p_H}{2 - p_L - p_H}$.

Therefore $\Pr(b \mid z_{AL}, p) \neq \Pr(b \mid z_{BH}, p)$ for any $p \in (0, \frac{1 - p_H}{2 - p_L - p_H})$: $\Pr(b \mid z_{BH}, p)$ is bigger at $p = 0$ and increases faster on this region.

Now assume $\Pr(b \mid z_{AL}, 1) > \Pr(b \mid z_{BH}, 1)$, which is exactly inequality (5). Then $\Pr(b \mid z_{AL}, p) = \Pr(b \mid z_{BH}, p)$ for some $p \in (\frac{1 - p_H}{2 - p_L - p_H}, 1)$, by the intermediate value theorem. The two curves cannot intersect twice, because $\Pr(b \mid z_{AL}, p)$ increases strictly faster on $(\frac{1 - p_H}{2 - p_L - p_H}, 1)$.
It is relatively easy to find primitives satisfying the condition in Lemma 2. For example, we can choose $p_H = 0.6$, $p_L = 0.4$, $F^B(\frac{1}{2}) = 0.5$, $F^A(\frac{1}{2}) = 0.4$: the left side of (5) is then $0.4 \times 0.4 + 0.6 = 0.76$, while the right side is $0.6 \times 0.5 + 0.4 = 0.7$. Moreover, the set of four-tuples $\{p_H, p_L, F^A(\frac{1}{2}), F^B(\frac{1}{2})\}$ solving the strict inequality (5) is obviously open. Hence we can roughly say that incomplete learning arises for a robust set of primitives.
When inequality (5) is satisfied, the belief $\pi_t$ can converge either to $\langle 0, 0, 0 \rangle$ or to $\langle 0, 0, \pi_{BH} \rangle$. We remark that convergence to $\langle 0, 0, \pi_{BH} \rangle$ revisits the confounded learning of SS. However, the underlying economic environments are quite different: in SS, confounded learning arises because players have different preferences, that is, a certain fraction of players want to mismatch the state.
The next natural question is: given the existence of $\langle 0, 0, \pi_{BH} \rangle$, does the belief $\pi_t$ converge to both limits $\langle 0, 0, 0 \rangle$ and $\langle 0, 0, \pi_{BH} \rangle$ with positive probability? Lemma 3 states that if the belief is within a small neighborhood of $\langle 0, 0, 0 \rangle$, it converges there with positive probability; in other words, $\langle 0, 0, 0 \rangle$ is locally stable. Lemma 4 states a sufficient condition for $\langle 0, 0, \pi_{BH} \rangle$ to be locally stable.
Lemma 3 There exists $\varepsilon > 0$ such that if $\|(\lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH}) - (0, 0, 0)\| < \varepsilon$, then
$$\Pr[(\lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH}) \to (0, 0, 0)] > 0.$$
Proof. See Appendix.
Lemma 4 Suppose $\pi_{BH}$ solves equation (4), and $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1$ and $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2$ are non-negative, where
$$G_1 = \frac{(1 - p_L) f^A\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right) - (1 - p_H) f^B\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)}{p_L\left[1 - F^A\left(\frac{1}{2}\right)\right] + (1 - p_L)\left[1 - F^A\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)\right]}$$
and
$$G_2 = \frac{-(1 - p_L) f^A\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right) + (1 - p_H) f^B\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)}{p_L F^A\left(\frac{1}{2}\right) + (1 - p_L) F^A\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)}.$$
Then there exists $\varepsilon > 0$ such that if $\|(\lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH}) - (0, 0, \pi_{BH})\| \leq \varepsilon$, then
$$\Pr[(\lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH}) \to (0, 0, \pi_{BH})] > 0.$$
Some readers may ask whether there exist primitives satisfying these sufficient conditions. We claim that $f^A(p) = 2p$, $f^B(p) = 2(1 - p)$, and $(p_L, p_H)$ within a small neighborhood of $(0.2, 0.8)$ give such primitives.
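This claim can be checked numerically. The sketch below (our code; it assumes scipy is available) verifies inequality (5), solves equation (4) for $\pi_{BH}$, and evaluates Lemma 4's condition at $(p_L, p_H) = (0.2, 0.8)$:

```python
from scipy.optimize import brentq

# Primitives claimed in the text: f^A(s) = 2s, f^B(s) = 2(1 - s), so
# F^A(s) = s^2 and F^B(s) = 2s - s^2, with (p_L, p_H) = (0.2, 0.8).
FA, FB = lambda s: s ** 2, lambda s: 2 * s - s ** 2
fA, fB = lambda s: 2 * s, lambda s: 2 * (1 - s)
pL, pH = 0.2, 0.8

# Inequality (5): 0.85 > 0.80, so equation (4) has a unique non-zero root.
assert pL * FA(0.5) + (1 - pL) > pH * FB(0.5) + (1 - pH)

# Equation (4), written in the cutoff variable x = pi_BH / (1 + pi_BH).
g = lambda x: ((1 - pL) * FA(x) + pL * FA(0.5)
               - (1 - pH) * FB(x) - pH * FB(0.5))
x = brentq(g, 1e-9, 1 - 1e-9)   # unique root in (0, 1): x ~ 0.968
pi_BH = x / (1 - x)             # pi_BH ~ 30.4

# Lemma 4's local-stability condition at this root.
G1 = ((1 - pL) * fA(x) - (1 - pH) * fB(x)) / (
    pL * (1 - FA(0.5)) + (1 - pL) * (1 - FA(x)))
G2 = (-(1 - pL) * fA(x) + (1 - pH) * fB(x)) / (
    pL * FA(0.5) + (1 - pL) * FA(x))
c = pi_BH / (pi_BH + 1) ** 2
assert 1 + c * G1 >= 0 and 1 + c * G2 >= 0   # roughly 1.24 and 0.94
print(f"pi_BH = {pi_BH:.2f}, 1 + cG1 = {1 + c * G1:.3f}, 1 + cG2 = {1 + c * G2:.3f}")
```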
To summarize, we have obtained our main result.
Theorem 5 Assume the underlying state is $\{A, p_L\}$ and private signal strength is unbounded. In the long run, the public belief $\langle \pi_t \rangle$ almost surely converges to one of the following two limits: (1) $\langle 0, 0, 0 \rangle$; (2) $\langle 0, 0, \pi_{BH} \rangle$, where $\pi_{BH}$ solves equation (4). Furthermore, $\langle 0, 0, 0 \rangle$ is always locally stable.
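To illustrate Theorem 5, here is a minimal, self-contained Monte Carlo sketch of the dynamics under the same assumed primitives as above (our construction, not part of the formal argument). Runs from a flat prior should drive $\lambda_t^{AH}$ and $\lambda_t^{BL}$ toward 0, with $\lambda_t^{BH}$ heading toward 0 or toward the confounding point:

```python
import math
import random

# Assumed primitives as above: F^A(s) = s^2, F^B(s) = 2s - s^2,
# (p_L, p_H) = (0.2, 0.8). The true state is z_AL: fundamental A, share p_L.
F = {"A": lambda s: s ** 2, "B": lambda s: 2 * s - s ** 2}
STATES = {"AL": ("A", 0.2), "AH": ("A", 0.8), "BL": ("B", 0.2), "BH": ("B", 0.8)}

def psi(action, state, lam):
    omega, p = STATES[state]
    pb = p * F[omega](0.5) + (1 - p) * F[omega](lam / (1 + lam))
    return pb if action == "b" else 1 - pb

def simulate(T=100_000, seed=0):
    rng = random.Random(seed)
    lamAH = lamBL = lamBH = 1.0                 # flat prior over the four states
    for _ in range(T):
        s = math.sqrt(rng.random())             # signal ~ F^A (inverse transform)
        lam = (lamBL + lamBH) / (1 + lamAH)     # public ratio of B over A
        lone = rng.random() < 0.2               # lone wolf with probability p_L
        cutoff = 0.5 if lone else lam / (1 + lam)
        action = "b" if s < cutoff else "a"
        d = psi(action, "AL", lam)
        lamAH, lamBL, lamBH = (lamAH * psi(action, "AH", lam) / d,
                               lamBL * psi(action, "BL", lam) / d,
                               lamBH * psi(action, "BH", lam) / d)
    return lamAH, lamBL, lamBH

for seed in range(5):
    print(simulate(seed=seed))  # lamAH, lamBL near 0; lamBH near 0 or the confound
```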
3 Appendix
Both proofs here follow the arguments in Appendix C of SS. We sketch the idea and reproduce the details for the reader's convenience.
Proof. The proof consists of three steps. First, construct an auxiliary process that dominates the original process in norm on a small neighborhood of the limit point. Second, show that the auxiliary process moves toward the limit point in expectation when restricted to a potentially smaller neighborhood. Third, use the law of large numbers to conclude that the auxiliary process (restricted to the smaller neighborhood) converges to the limit point almost surely, and hence stays within that smaller neighborhood with positive probability. Thus the original process, if started within the smaller neighborhood, converges to the limit point with positive probability.
For notational convenience, denote $\pi_1^* = \langle 0, 0, 0 \rangle$ and $\pi_2^* = \langle 0, 0, \pi_{BH} \rangle$. The original process moves from $\pi_t = \langle \lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH} \rangle$ to
$$\phi(\alpha, \pi_t) = \left\langle \lambda_t^{AH} \frac{\psi(\alpha \mid z_{AH}, \pi_t)}{\psi(\alpha \mid z_{AL}, \pi_t)},\ \lambda_t^{BL} \frac{\psi(\alpha \mid z_{BL}, \pi_t)}{\psi(\alpha \mid z_{AL}, \pi_t)},\ \lambda_t^{BH} \frac{\psi(\alpha \mid z_{BH}, \pi_t)}{\psi(\alpha \mid z_{AL}, \pi_t)} \right\rangle$$
with probability $\psi(\alpha \mid z_{AL}, \pi_t)$, for $\alpha \in \{a, b\}$. Also for convenience, let $A_1 = D_\pi \phi(a, \pi_1^*)$, $B_1 = D_\pi \phi(b, \pi_1^*)$, $A_2 = D_\pi \phi(a, \pi_2^*)$, and $B_2 = D_\pi \phi(b, \pi_2^*)$.
(Step 1) In this step, we construct an auxiliary process that dominates the original process in norm on a small neighborhood of the limit point. Without loss of generality, we take the construction around $\pi_1^*$ as an example. For any $\eta$ slightly larger than 1, we can find a small neighborhood $N(\eta)$ of $\pi_1^*$ such that
$$\|\phi(a, \pi) - \pi_1^*\| = \|A_1 \pi + \gamma(\pi)\| \leq \|\eta A_1 \pi\| \quad \text{and} \quad \|\phi(b, \pi) - \pi_1^*\| = \|B_1 \pi + \gamma(\pi)\| \leq \|\eta B_1 \pi\|,$$
provided that $\pi \in N(\eta)$.

For $\pi_t = \langle \lambda_t^{AH}, \lambda_t^{BL}, \lambda_t^{BH} \rangle \in N(\eta)$, the associated $\lambda_t = \frac{\lambda_t^{BL} + \lambda_t^{BH}}{1 + \lambda_t^{AH}}$ must be very close to 0. Let $\sigma(\eta) = \sup_{\pi_t \in N(\eta)} \frac{\lambda_t}{\lambda_t + 1}$. We construct the following auxiliary process $\tilde{\pi}_t^1 = \langle \tilde{\lambda}_t^{1,AH}, \tilde{\lambda}_t^{1,BL}, \tilde{\lambda}_t^{1,BH} \rangle$:
$$\tilde{\pi}_{t+1}^1 = \begin{cases} \eta A_1 \tilde{\pi}_t^1 & \text{if the player is a lone wolf and } s_t > \frac{1}{2}\text{, or the player is rational and } s_t > \sigma(\eta); \\ \eta B_1 \tilde{\pi}_t^1 & \text{if the player is a lone wolf and } s_t < \frac{1}{2}; \\ \eta C_1 \tilde{\pi}_t^1 & \text{otherwise.} \end{cases}$$
Here $C_1$ is defined as $\max\{A_1, B_1\}$ component-wise. For any sequence of private signals, if $\pi_t$ moves to $\phi(a, \pi_t)$, then $\tilde{\pi}_t^1$ moves to either $\eta A_1 \tilde{\pi}_t^1$ or $\eta C_1 \tilde{\pi}_t^1$; if $\pi_t$ moves to $\phi(b, \pi_t)$, then $\tilde{\pi}_t^1$ moves to either $\eta B_1 \tilde{\pi}_t^1$ or $\eta C_1 \tilde{\pi}_t^1$. Starting from $\tilde{\pi}_{t_0}^1 = \pi_{t_0} \in N(\eta)$, the above construction guarantees $\|\tilde{\pi}_t^1\| \geq \|\pi_t\|$, provided that $\tilde{\pi}_t^1 \in N(\eta)$ for $t \geq t_0$.

Following exactly the same procedure, we can construct an auxiliary process $\tilde{\pi}_t^2$ such that $\|\tilde{\pi}_t^2 - \pi_2^*\| \geq \|\pi_t - \pi_2^*\|$, provided that $\tilde{\pi}_t^2$ stays within a small neighborhood of $\pi_2^*$.
(Step 2) This step involves some computation. We first directly compute $A_1, B_1, A_2, B_2$. We have
$$A_1 = \operatorname{diag}\left( \frac{p_H[1 - F^A(\frac{1}{2})] + (1 - p_H)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)},\ \frac{p_L[1 - F^B(\frac{1}{2})] + (1 - p_L)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)},\ \frac{p_H[1 - F^B(\frac{1}{2})] + (1 - p_H)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)} \right),$$
$$B_1 = \operatorname{diag}\left( \frac{p_H F^A(\frac{1}{2})}{p_L F^A(\frac{1}{2})},\ \frac{p_L F^B(\frac{1}{2})}{p_L F^A(\frac{1}{2})},\ \frac{p_H F^B(\frac{1}{2})}{p_L F^A(\frac{1}{2})} \right),$$
and, writing $c^* = \frac{\pi_{BH}}{\pi_{BH} + 1}$ for the limit cutoff,
$$A_2 = \begin{pmatrix} \frac{p_H[1 - F^A(\frac{1}{2})] + (1 - p_H)[1 - F^A(c^*)]}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)[1 - F^A(c^*)]} & 0 & 0 \\[4pt] 0 & \frac{p_L[1 - F^B(\frac{1}{2})] + (1 - p_L)[1 - F^B(c^*)]}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)[1 - F^A(c^*)]} & 0 \\[4pt] -\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)^2 G_1 & \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1 & 1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1 \end{pmatrix},$$
$$B_2 = \begin{pmatrix} \frac{p_H F^A(\frac{1}{2}) + (1 - p_H) F^A(c^*)}{p_L F^A(\frac{1}{2}) + (1 - p_L) F^A(c^*)} & 0 & 0 \\[4pt] 0 & \frac{p_L F^B(\frac{1}{2}) + (1 - p_L) F^B(c^*)}{p_L F^A(\frac{1}{2}) + (1 - p_L) F^A(c^*)} & 0 \\[4pt] -\left(\frac{\pi_{BH}}{\pi_{BH} + 1}\right)^2 G_2 & \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2 & 1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2 \end{pmatrix}.$$
Here
$$G_1 = \frac{(1 - p_L) f^A(c^*) - (1 - p_H) f^B(c^*)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)[1 - F^A(c^*)]}, \qquad G_2 = \frac{-(1 - p_L) f^A(c^*) + (1 - p_H) f^B(c^*)}{p_L F^A(\frac{1}{2}) + (1 - p_L) F^A(c^*)},$$
as in Lemma 4.
The big terms look scary, but the logic is relatively simple. For example, we can compute the expected change of $\log \tilde{\lambda}_t^{1,BH}$, the third component of $\tilde{\pi}_t^1$. We have
$$\begin{aligned} &E[\log \tilde{\lambda}_{t+1}^{1,BH} \mid \log \tilde{\lambda}_t^{1,BH}] - \log \tilde{\lambda}_t^{1,BH} \\ &\quad = \left\{ p_L\left[1 - F^A\left(\tfrac{1}{2}\right)\right] + (1 - p_L)[1 - F^A(\sigma)] \right\} \log \eta \frac{p_H[1 - F^B(\frac{1}{2})] + (1 - p_H)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)} + p_L F^A\left(\tfrac{1}{2}\right) \log \eta \frac{p_H F^B(\frac{1}{2})}{p_L F^A(\frac{1}{2})} \\ &\qquad + (1 - p_L) F^A(\sigma) \max\left\{ \log \eta \frac{p_H[1 - F^B(\frac{1}{2})] + (1 - p_H)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)},\ \log \eta \frac{p_H F^B(\frac{1}{2})}{p_L F^A(\frac{1}{2})} \right\}. \end{aligned}$$
It is immediate that the above formula is continuous in $\eta$ and $\sigma$. Letting $\eta \to 1$ (which forces $\sigma \to 0$), the limit goes to
$$\left\{ p_L\left[1 - F^A\left(\tfrac{1}{2}\right)\right] + (1 - p_L) \right\} \log \frac{p_H[1 - F^B(\frac{1}{2})] + (1 - p_H)}{p_L[1 - F^A(\frac{1}{2})] + (1 - p_L)} + p_L F^A\left(\tfrac{1}{2}\right) \log \frac{p_H F^B(\frac{1}{2})}{p_L F^A(\frac{1}{2})}$$
$$< \log\left\{ p_H\left[1 - F^B\left(\tfrac{1}{2}\right)\right] + (1 - p_H) + p_H F^B\left(\tfrac{1}{2}\right) \right\} = 0,$$
where the inequality follows from the strict concavity of the logarithm. Therefore, by properly shrinking the neighborhood $N(\eta)$, we conclude that $\tilde{\lambda}_t^{1,BH}$ moves toward 0 in expectation. We can show the same for $\tilde{\lambda}_t^{1,AH}$ and $\tilde{\lambda}_t^{1,BL}$.
If we consider $\tilde{\pi}_t^2 = \langle \tilde{\lambda}_t^{2,AH}, \tilde{\lambda}_t^{2,BL}, \tilde{\lambda}_t^{2,BH} \rangle$, slight care is needed in computing the expected movement of $\tilde{\lambda}_t^{2,BH} - \pi_{BH}$. To use the trick of taking logs, we need to consider $|\tilde{\lambda}_t^{2,BH} - \pi_{BH}|$ instead of $\tilde{\lambda}_t^{2,BH} - \pi_{BH}$. Direct computation shows that
$$E_t[\log |\tilde{\lambda}_{t+1}^{2,BH} - \pi_{BH}|] - \log |\tilde{\lambda}_t^{2,BH} - \pi_{BH}|$$
is continuous in $\eta, \sigma, \lambda_t^{2,AH}, \lambda_t^{2,BL}$. Letting $\eta \to 1$ (which forces $\sigma, \lambda_t^{2,AH}, \lambda_t^{2,BL} \to 0$), the limit goes to
$$\left\{ p_L\left[1 - F^A\left(\tfrac{1}{2}\right)\right] + (1 - p_L)\left[1 - F^A\left(\tfrac{\pi_{BH}}{\pi_{BH} + 1}\right)\right] \right\} \log\left|1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1\right|$$
$$+ \left\{ p_L F^A\left(\tfrac{1}{2}\right) + (1 - p_L) F^A\left(\tfrac{\pi_{BH}}{\pi_{BH} + 1}\right) \right\} \log\left|1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2\right|.$$
If $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1$ and $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2$ are non-negative, then the above limit is strictly negative, by Jensen's inequality.
(Step 3) Finally, we show that a real-valued stochastic process $x_t$ converges to 0 almost surely if it moves toward 0 in expectation. Consider a sequence of signal realizations $\omega$ such that $\lim_{t \to \infty} x_t(\omega) \neq 0$. Then for any $t$ there exists $t_0$ such that
$$\limsup_{j \to \infty} [x_{t_0 + j}(\omega) - x_{t_0}(\omega)] \geq 0.$$
However, $x_{t_0 + j} - x_{t_0} = j \left[ \frac{1}{j} \sum_{i=1}^j (x_{t_0 + i} - x_{t_0 + i - 1}) \right]$, which is strictly negative for large $j$ by the law of large numbers, since the increments have strictly negative conditional expectation.

Therefore we have shown that each component of $\tilde{\pi}_t^1$ converges to 0 almost surely. Hence $\Pr(\cup_k \cap_{n \geq k} \{\|\tilde{\pi}_n^1\| \leq \varepsilon\}) = 1$. By the Borel–Cantelli lemma, with positive probability $\tilde{\pi}_t^1$ stays within a smaller $\varepsilon$-neighborhood of 0. From the above construction, $\pi_t \to \langle 0, 0, 0 \rangle$ whenever $\tilde{\pi}_t^1$ stays within an $\varepsilon$-neighborhood of 0, since $\|\pi_t\| \leq \|\tilde{\pi}_t^1\|$. The same argument works for $\tilde{\pi}_t^2$.
(Conclusion) We have shown that $\pi_1^*$ is always locally stable, and $\pi_2^*$ is locally stable if $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_1$ and $1 + \frac{\pi_{BH}}{(\pi_{BH} + 1)^2} G_2$ are non-negative.
References
Banerjee, A. V. (1992): “A Simple Model of Herd Behavior,” The Quarterly Journal of
Economics, 107, 797.
Bikhchandani, S., D. Hirshleifer, and I. Welch (1992): “A Theory of Fads, Fashion,
Custom, and Cultural Change as Informational Cascades,” Journal of Political Economy,
100, 992–1026.
Bohren, J. A. (2016): “Informational herding with model misspecification,” Journal of Economic Theory, 163, 222–247.
Bohren, J. A. and D. N. Hauser (2017): “Bounded Rationality and Learning: A
Framework and a Robustness Result,” Working paper.
Duffy, J., E. Hopkins, and T. Kornienko (2016): “Lone Wolf or Herd Animal? An
Experiment on Choice of Information and Social Learning,” Working paper.
Eyster, E. and M. Rabin (2010): “Naive Herding in Rich-Information Settings,” American Economic Journal: Microeconomics, 2, 221–43.
Smith, L. and P. Sørensen (2000): “Pathological Outcomes of Observational Learning,”
Econometrica, 68, 371–398.
Weizsäcker, G. (2010): “Do We Follow Others When We Should? A Simple Test of
Rational Expectations,” American Economic Review, 100, 2340–60.
Ziegelmeyer, A., C. March, and S. Krügel (2013): “Do We Follow Others When
We Should? A Simple Test of Rational Expectations: Comment,” American Economic
Review, 103, 2633–42.