Social Learning with Network Uncertainty
Ilan Lobel and Evan Sadler∗
April 14, 2012
Extended Abstract
We consider the perfect Bayesian equilibrium of a model of social learning on networks
where agents do not know the topology of the social network. Each agent receives a signal
about an underlying state of the world, observes the actions of her neighbors, and subsequently
chooses an action herself. The topology of the social network is drawn from a commonly
known distribution but each agent only observes her own neighborhood. We characterize
properties of social networks that lead to or preclude the successful aggregation of information
in society.
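As a concrete illustration of the sequential structure just described, the model can be sketched in a small simulation. The code below is a stylized sketch, not the paper's equilibrium computation: the agents here follow a simple heuristic (combining the private signal's log-likelihood ratio with a weighted count of observed neighbor actions) rather than the full perfect Bayesian update, and the Gaussian signal structure, the i.i.d. neighbor-inclusion rule, and the decision weights are all illustrative assumptions rather than elements of the paper.

```python
import random


def simulate(n_agents=200, theta=1, signal_sd=1.0, seed=0):
    """Stylized sequential social learning (NOT the Bayesian equilibrium).

    The state theta lies in {0, 1}. Agent t receives a Gaussian private
    signal centered at theta, observes the actions of a randomly realized
    neighborhood of earlier agents, and acts by combining her signal's
    log-likelihood ratio with a count of observed actions (a heuristic
    stand-in for the equilibrium update).
    """
    rng = random.Random(seed)
    actions = []
    for t in range(n_agents):
        # Private signal s ~ N(theta, signal_sd^2). Its log-likelihood
        # ratio for theta=1 vs theta=0 is (2s - 1) / (2 sd^2), which is
        # unbounded in s (unbounded private beliefs).
        s = rng.gauss(theta, signal_sd)
        llr = (2 * s - 1) / (2 * signal_sd ** 2)
        # Neighborhood: each past agent is included independently with
        # probability 0.1 -- one possible network distribution among the
        # many the model allows.
        nbrs = [i for i in range(t) if rng.random() < 0.1]
        votes = sum(2 * actions[i] - 1 for i in nbrs)
        actions.append(1 if llr + 0.5 * votes > 0 else 0)
    return actions


acts = simulate()
frac_late_correct = sum(acts[-50:]) / 50  # late agents' agreement with theta=1
```

Replacing the i.i.d. inclusion rule with a correlated draw of neighborhoods is exactly the generality the model adds; the heuristic decision rule would then no longer be a reasonable proxy for equilibrium behavior, which is the point of the analysis.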
The model is a generalization of the one introduced by Acemoglu et al. [2011], where we allow for arbitrary distributions over the space of social networks (Acemoglu et al. [2011] assume that the neighborhoods of different individuals are drawn independently of each other).
Our model allows agents to have arbitrarily correlated neighborhoods and, in doing so,
allows us to model real-world network phenomena such as clustering. It also allows us to
consider the performance of social learning in widely used models of social networks such as
preferential attachment models.
Allowing the agents’ neighborhoods to be arbitrarily correlated introduces a key new
element into our model: network uncertainty. When agents observe who their neighbors are
and what actions these neighbors have chosen, they form beliefs about the overall structure
of the social network. These beliefs may differ substantially from the beliefs of other agents.
In turn, agents also form beliefs about their neighbors’ beliefs about the network structure
and, in a world with network uncertainty, such higher order beliefs play an important role
in determining whether the social learning process leads to information aggregation or not.
While varying beliefs about the network structure also exist in Acemoglu et al. [2011], differences in beliefs are washed away by a “law of large numbers” effect as the network grows large, due to the assumption of independently realized neighborhoods. In contrast, when the joint distribution of the agents’ neighborhoods is arbitrary, as in our model, network uncertainty remains an important feature even as we study the model’s asymptotic behavior in the limit as the number of agents grows large.
∗Stern School of Business, New York University – {ilobel, esadler}@stern.nyu.edu
At the heart of the existing literature on social learning, there is a distinction between private signals leading to “bounded” or “unbounded” private beliefs. A consensus has emerged
that unbounded private beliefs—meaning the existence of arbitrarily strong signals about
the state of the world—robustly lead to the successful aggregation of information in a variety
of models and contexts.1 According to the prevailing consensus in the literature, the failure
of asymptotic learning in the case of unbounded private beliefs requires strong constraints
on the social information available to each agent.2 Likewise, in the case of bounded private
beliefs, herding outcomes prevail, and there is a non-trivial chance that society settles on a
sub-optimal choice.3 Some examples exist of models that can aggregate information drawn from bounded private beliefs,4 but they all require that some agents observe unboundedly many others within carefully crafted networks. One important contribution of our
work is to challenge this consensus on both counts. We show that settings with network
uncertainty can create unexpected failures of learning with unbounded beliefs, but they can
also salvage learning with bounded private beliefs. While the distinction between bounded
and unbounded beliefs is still important, we show that the structure of the network becomes
an equally critical determinant of learning success in this setting.
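The bounded/unbounded distinction turns on the likelihood ratio of the private signal. The sketch below makes this concrete with two illustrative signal structures that are not taken from the paper: a symmetric binary signal with accuracy p, whose log-likelihood ratio only ever takes the two values ±log(p/(1−p)) (bounded private beliefs), and a Gaussian signal, whose log-likelihood ratio is linear in the signal and hence unbounded, so arbitrarily strong evidence about the state occurs with positive probability.

```python
import math


def binary_llr(correct: bool, p: float) -> float:
    """Log-likelihood ratio of a symmetric binary signal with accuracy p.

    Bounded: the only possible values are +/- log(p / (1 - p)), so no
    single signal can be arbitrarily convincing.
    """
    return math.log(p / (1 - p)) if correct else math.log((1 - p) / p)


def gaussian_llr(s: float, sd: float = 1.0) -> float:
    """Log-likelihood ratio of s ~ N(theta, sd^2) for theta=1 vs theta=0.

    Unbounded: linear in s, so arbitrarily strong signals occur with
    positive probability.
    """
    return (2 * s - 1) / (2 * sd ** 2)


# Bounded beliefs: whatever the realization, |LLR| is capped.
cap = abs(binary_llr(True, 0.7))   # log(7/3), roughly 0.847
# Unbounded beliefs: a strong enough signal exceeds any finite cap.
big = gaussian_llr(10.0)           # (2*10 - 1) / 2 = 9.5
```

Under the consensus described above, the binary signal leads to herding with positive probability, while the Gaussian signal supports learning in a wide range of networks; our results show that network uncertainty can overturn both predictions.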
In the presence of network uncertainty, there are many phenomena that can preclude
learning even with unbounded private beliefs and expanding observations. For example,
an agent may be completely certain to observe a well-informed neighbor but be unable to
learn from her because the agent cannot identify which among several neighbors is the well-informed one. Alternatively, even when the issue of identification is absent, an agent at the end of a long information path may make a poor decision because the agent and her neighbor
have vastly different beliefs about the overall structure of the social network. There are even
cases in which an agent can observe unboundedly many others, and yet learn nothing due
to correlations between the other agents’ decisions.
The examples above illustrate that in settings with network uncertainty it is not enough
that a path exists for information to travel between agents. One of our main results is a set
of conditions that leads to learning when private beliefs are unbounded. We show that it is
sufficient for learning that long information paths exist and that agents are able to identify
a “high-quality” neighbor. To be a high-quality neighbor means that two conditions are
satisfied: the neighbor must be a well-informed agent and she must also be a “low-distortion”
one. An agent is considered low distortion when being observed does not significantly alter
the informativeness of that agent’s action. The idea of controlling the distortion introduced
1. Smith and Sorensen [2000] show that learning occurs when preferences are homogeneous and the entire
history is observed. Acemoglu et al. [2011] and Smith and Sorensen [2008], respectively, show that unbounded
beliefs lead to learning in a large class of networks and sampling regimes. Mossel et al. [2012] show that
unbounded beliefs lead to learning in a setting with repeated interactions.
2. In Smith and Sorensen [2008], this condition is captured by the concept of “over-sampling” the past. In
Acemoglu et al. [2011], the corresponding constraint is given by “non-expanding observations.”
3. Banerjee [1992], Bikhchandani et al. [1992], and Smith and Sorensen [2000] show that this happens when
the entire history is observed. Celen and Kariv [2004] show that the same result is obtained when only the
last action is observed. Smith and Sorensen [2008] and Acemoglu et al. [2011] show that bounded beliefs
preclude learning in a wide range of networks.
4. Most notably, the example of Theorem 4 from Acemoglu et al. [2011].
by an agent’s observation is one of the key technical breakthroughs of our paper. We also
show that in special cases, this pair of conditions is not only sufficient, but also necessary
for asymptotic learning of the state of the world.
References
D. Acemoglu, M. Dahleh, I. Lobel, and A. Ozdaglar. Bayesian learning in social networks. Review of Economic Studies, 78:1201–1236, 2011.
A. Banerjee. A simple model of herd behavior. The Quarterly Journal of Economics, 107:
797–817, 1992.
S. Bikhchandani, D. Hirshleifer, and I. Welch. A theory of fads, fashion, custom, and cultural
change as information cascades. The Journal of Political Economy, 100:992–1026, 1992.
B. Celen and S. Kariv. Observational learning under imperfect information. Games and
Economic Behavior, 47(1):72–86, 2004.
E. Mossel, A. Sly, and O. Tamuz. From agreement to asymptotic learning. Working paper,
2012.
L. Smith and P. Sorensen. Pathological outcomes of observational learning. Econometrica, 68:371–398, 2000.
L. Smith and P. Sorensen. Rational social learning with random sampling. Unpublished manuscript, 2008.