
Discovery and Diffusion of Knowledge in an Endogenous Social Network
Myong-Hun Chang and Joe Harrington
Social Accumulation of Knowledge
► Essential elements of the process
 Generation of Ideas by Individuals
 Adoption of Ideas by Individuals
 Diffusion of Ideas through Social Networks
[Diagram: the social network diffusion process]
► Generation of Knowledge
 Research: the process of searching for better ways of doing things (finding a better solution to a problem)
► Decentralized Research → Parallel Search
► How is this search carried out by an individual agent?
 Innovation (individual learning)
►Production of “new” ideas
 Imitation (social learning)
►Adoption of “existing” ideas of others (→ diffusion)
► Underlying social system: a population of autonomous agents (= problem-solvers), each endowed with a goal unknown to her ex ante
 Goal: the “optimal” solution to the given problem
 Goals can differ: diversity in solutions due to diversity in local environments
 Goals can change: local environments may be subject to inter-temporal fluctuations → problems may change
Decision-making by individual agents
► Should we model them as hyper-rational, with perfect foresight, etc.?
 No. The decision environment is too complex.
 Full rationality is too demanding.
► Agents are boundedly rational and engage in myopic search for the unknown goal (a local optimum).
 Adaptive and capable of learning from past experiences
 Search alone (innovation) or search by learning from others (imitation)
Research Questions
► Individual learning versus social learning via the network
 How do individuals choose between the two learning mechanisms?
► The network as an outcome of interactive choices among individuals as to whom to observe and whom to ignore
 What are the determinants of its emergent structure?
► Performance at the individual and community level
 How does the reliability of the communication technology affect performance?
 How does the innovativeness of the population affect performance?
The Model
► Social system: L individuals
► Each individual engages in an operation with H separate tasks
► For each task, there is a fixed number of possible methods that can be used to perform the task.
 A given method is a string of d bits – 0’s and 1’s
 2^d possible methods per task
► An individual i in period t is characterized by a binary vector of Hd dimensions: zi(t)
► Example: H = 4 and d = 4
 z1 = 1101 0100 1111 1001
 z2 = 0101 1101 0011 1000
► Distance between two vectors
 Hamming distance D(z1, z2): the number of positions at which the corresponding bits differ
 D(z1, z2) = 6 for the above vectors
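The Hamming distance for the example vectors can be checked with a short sketch (function and variable names here are illustrative, not from the paper):

```python
def hamming(z1: str, z2: str) -> int:
    """D(z1, z2): number of bit positions where the two vectors differ."""
    assert len(z1) == len(z2)
    return sum(a != b for a, b in zip(z1, z2))

# The H = 4, d = 4 example from the slide (spaces mark task boundaries)
z1 = "1101 0100 1111 1001".replace(" ", "")
z2 = "0101 1101 0011 1000".replace(" ", "")
print(hamming(z1, z2))  # 6
```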
► There exists a goal vector (binary) of Hd dimensions for each agent in t: gi(t)
 The unique optimal solution to the problem agent i is facing
► Inter-agent diversity
 It is possible that gi(t) ≠ gj(t) for i ≠ j.
► Inter-temporal variability
 It is possible that gi(t) ≠ gi(t’) for t ≠ t’.
 Individuals are uninformed about gi(t) ex ante, but engage in “search” to get as close to it as possible.
[Figure: four agents’ method vectors z1(t), …, z4(t) and their goals g1(t), …, g4(t), with the distances D(z1(t), g1(t)), …, D(z4(t), g4(t))]
► Period-t performance of agent i
 How close is agent i to his current optimum?
 πi(t) = Hd – D(zi(t), gi(t))
► Period-t performance of the social system
 Sum of all agents’ performances
► Decision-Making Sequence (in t) for an Agent
 With probability qi(t), agent i chooses to innovate: an idea is generated with probability μiin; with probability 1 – μiin the agent is idle.
 With probability 1 – qi(t), agent i chooses to imitate: a target j ≠ i (j = 1, …, L) is observed with probability pij(t); the connection succeeds with probability μiim; with probability 1 – μiim the agent is idle.
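The decision sequence above can be sketched in a few lines (function and variable names are mine; this is a minimal sketch of the branching, not the paper’s code):

```python
import random

def decide(q, mu_in, mu_im, p):
    """One period's decision for agent i.

    q      : prob. of choosing to innovate (vs. imitate)
    mu_in  : prob. an innovation attempt generates an idea
    mu_im  : prob. a connection to the observed agent succeeds
    p      : dict mapping j -> p_ij, the prob. of observing agent j
    Returns ('innovate', None), ('imitate', j), or ('idle', None).
    """
    if random.random() < q:               # branch: choose to innovate
        if random.random() < mu_in:
            return ("innovate", None)
        return ("idle", None)             # no idea generated this period
    # branch: choose to imitate -- pick a target j with probability p[j]
    j = random.choices(list(p), weights=list(p.values()))[0]
    if random.random() < mu_im:
        return ("imitate", j)
    return ("idle", None)                 # failed to connect

# Degenerate parameter values pin down each branch:
print(decide(1.0, 1.0, 1.0, {2: 1.0}))   # ('innovate', None)
print(decide(0.0, 1.0, 1.0, {2: 1.0}))   # ('imitate', 2)
```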
► If the agent fails to generate an idea or fails to connect to the network, zi(t+1) = zi(t).
► Otherwise, there exists an idea, zi’(t), proposed under innovation or imitation; zi’(t) is adopted iff it gets i closer to gi(t).
 Innovation: a randomly chosen method for a randomly chosen task
z1 = 1101 0100 1111 1001
z1’ = 1101 0100 1011 1001
 Imitation: the method used by another agent for a randomly chosen task
z1 = 1101 0100 1111 1001
z2 = 0101 1101 0011 1000
z1’ = 1101 0100 0011 1001
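The two proposal rules and the closer-to-goal adoption test can be sketched as follows (helper names are illustrative; a proposal replaces one task’s d-bit method):

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def propose_innovation(z, H, d):
    """Randomly chosen method (a fresh d-bit string) for a randomly chosen task."""
    k = random.randrange(H)
    method = "".join(random.choice("01") for _ in range(d))
    return z[:k * d] + method + z[(k + 1) * d:]

def propose_imitation(z, z_other, H, d):
    """Copy the observed agent's method for a randomly chosen task."""
    k = random.randrange(H)
    return z[:k * d] + z_other[k * d:(k + 1) * d] + z[(k + 1) * d:]

def adopt(z, z_new, g):
    """Adopt the proposal iff it strictly reduces the distance to the goal g."""
    return z_new if hamming(z_new, g) < hamming(z, g) else z

# Adoption test on tiny vectors: '01' is closer to the goal '11' than '00' is
print(adopt("00", "01", "11"))  # '01'
print(adopt("01", "00", "11"))  # '01'  (proposal rejected)
```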
Evolving Choices
► Exogenous probabilities: (μiin, μiim)
► Endogenous probabilities: (qi(t), {pij(t)}j≠i)
► (qi(t), {pij(t)}j≠i) evolve via reinforcement learning
 A positive outcome realized from a course of action reinforces the likelihood of that same action being chosen again [Experience-Weighted Attraction Learning – Camerer & Ho, Econometrica, 1999]
Evolving qi(t)
► Attractions for the available courses of action
 Biin(t): i’s attraction measure for innovation
 Biim(t): i’s attraction measure for imitation
► Evolution of the attractions
 Biin(t+1) = Biin(t) + 1 if innovation was successful in t; Biin(t+1) = Biin(t) otherwise
 Biim(t+1) = Biim(t) + 1 if imitation was successful in t; Biim(t+1) = Biim(t) otherwise
► Choice probabilities
 qi(t) = Biin(t) / [Biin(t) + Biim(t)]
 1 – qi(t) = Biim(t) / [Biin(t) + Biim(t)]
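One period of this attraction update and the resulting choice probability can be sketched as (function name is mine):

```python
def update_q(B_in, B_im, action, success):
    """Reinforce the attraction of the action taken ('in' or 'im') if it
    led to an adopted idea, then recompute q = B_in / (B_in + B_im)."""
    if success:
        if action == "in":
            B_in += 1
        else:
            B_im += 1
    return B_in, B_im, B_in / (B_in + B_im)

# Starting from B_in(0) = B_im(0) = 1 (so q = .5), a successful innovation
# tilts the choice toward innovating again; a failure leaves q unchanged:
print(update_q(1, 1, "in", True))   # (2, 1, 0.666...)
print(update_q(1, 1, "im", False))  # (1, 1, 0.5)
```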
Evolving pij(t)
► pij(t): the probability with which i observes j in t
► ∑j≠i pij(t) = 1 for all i
► Aij(t): agent i’s attraction to another agent j
► Evolution of the attractions
 Aij(t+1) = Aij(t) + 1 if i successfully imitated j in t; Aij(t+1) = Aij(t) otherwise, for all j ≠ i and for all i.
► pij(t) = Aij(t) / ∑k≠i Aik(t), for all j ≠ i and for all i.
► The social network of agent i: {pij(t)}j≠i
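The observation probabilities are just the normalized attractions; a minimal sketch (the data layout, a per-agent dict of attractions, is my own choice):

```python
def observation_probs(A_i):
    """p_ij = A_ij / sum over k != i of A_ik, given agent i's attraction
    row A_i as a dict {j: A_ij} that already excludes i itself."""
    total = sum(A_i.values())
    return {j: a / total for j, a in A_i.items()}

# With A_ij(0) = 1 for all j != i, every agent starts at p_ij = 1/(L-1);
# one successful imitation of agent 2 then tilts i toward agent 2:
A_i = {1: 1, 2: 2, 3: 1}
print(observation_probs(A_i))  # {1: 0.25, 2: 0.5, 3: 0.25}
```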
Agents with Uniform Learning Capabilities
► μiin = μin for all i (innovativeness)
► μiim = μim for all i (reliability of the network)
► Improved productivity in innovation: a rise in μin
► Improved communication: a rise in μim
Initial Conditions at t = 0
► All agents have equal probabilities of innovation vs. imitation: qi(0) = 1 – qi(0) = .5 for all i
► All agents have equal probabilities of observing all other agents in the population: pij(0) = 1/(L–1) for all j ≠ i, for all i
Goals, Groups, and Networks
► Similarity or diversity in objectives drives the formation of networks
► Partition the population into a fixed number of groups
 Agents belonging to the same group tend to have more similar goals – i.e., they work on similar problems – than those belonging to different groups.
 Two sources for social learning
► Another agent in the same group
► Another agent in a different group
► The efficacy of social learning depends on the tightness of goals within a given group relative to the tightness of goals between different groups.
How are the goals specified?
Goal Diversity
► Intra-group diversity: κ
► Inter-group diversity: X
Shift Dynamics for the Goals
► gi(t) stays the same with probability σ and changes with probability (1 – σ).
► If gi(t+1) is different from gi(t), it is chosen from the set of points that lie both within Hamming distance ρ of gi(t) and within Hamming distance κ of the original group seed vector gk.
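A sketch of this shift rule; rejection sampling keeps the draw inside both Hamming balls, though the way candidates are drawn here is only approximately uniform over the feasible set, and all names are illustrative:

```python
import random

def shift_goal(g, g_seed, sigma, rho, kappa):
    """With prob. sigma keep g; otherwise draw a new goal within Hamming
    distance rho of g AND within kappa of the group seed g_seed.
    Assumes the intersection is non-empty (e.g. kappa >= len(g))."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    if random.random() < sigma:
        return g
    n = len(g)
    while True:  # rejection sampling against the seed constraint
        flips = random.sample(range(n), random.randint(1, rho))
        cand = "".join(("1" if b == "0" else "0") if i in flips else b
                       for i, b in enumerate(g))
        if dist(cand, g_seed) <= kappa:
            return cand

random.seed(1)
g = shift_goal("0000", "0000", sigma=0.0, rho=2, kappa=4)
print(0 < sum(map(int, g)) <= 2)  # True: between 1 and 2 bits were flipped
```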
Parameters – {σ, ρ, κ, X}
► σ: The greater is σ, the more stable is an agent’s goal vector.
► ρ: The greater is ρ, the more variable is the change in an agent’s goal vector.
► κ: The higher is κ, the lower is the intra-group goal congruence.
► X: The higher is X, the greater is the inter-group goal diversity.
Specs for Computational Experiments
► 20 agents
► 4 groups
► 24 tasks, each with 4 bits
 16 methods for each task
 The search space contains over 7.9 × 10^28 possibilities
► Biin(0) = Biim(0) = 1 for all i
► Aij(0) = 1 for all i and all j ≠ i
► Horizon: 20,000 periods
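The search-space count follows directly from the spec: 24 tasks with 16 (= 2^4) methods each give 2^96 configurations, which matches the 7.9 × 10^28 figure:

```python
H, d = 24, 4
methods_per_task = 2 ** d           # 16
size = methods_per_task ** H        # = 2 ** (H * d) = 2 ** 96
print(f"{size:.1e}")                # 7.9e+28
```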
Baseline Parameter Values
► μim = μin = 0.5
► σ = 0.75
► ρ = 4
► κ = 16
► X = 16
Performance Time Series (Baseline)
Steady-State Values of Endogenous Variables
► Average over the 5,000 periods from t = 15,001 to t = 20,000
► Let’s look at the learning patterns: pij(t)
 Do agents choose their learning partners randomly?
 Or do they learn from a small subset of other agents?
 Do agents go to those who also come to them?
► MUTUAL LEARNING? Are pij(t) and pji(t) correlated?
[Figure: pij(t) plotted against pji(t)]
Mutuality in Learning
► The correlation is positive for all parameter values
► The correlation increases in X
 The greater the inter-group diversity in goals → the stronger the mutuality in learning
► The correlation decreases in κ
 The lower the intra-group diversity → the stronger the mutuality in learning
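This mutuality statistic can be computed as the Pearson correlation between pij and pji over ordered pairs; a sketch under that assumption (the paper’s exact statistic may differ):

```python
def mutuality(p):
    """Pearson correlation of (p_ij, p_ji) over ordered pairs i != j,
    where p is a dict of dicts: p[i][j] = prob. that i observes j."""
    xs, ys = [], []
    for i in p:
        for j, pij in p[i].items():
            if j != i and i in p.get(j, {}):
                xs.append(pij)
                ys.append(p[j][i])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfectly symmetric network (p_ij = p_ji for every pair) gives 1:
p = {0: {1: 0.2, 2: 0.8}, 1: {0: 0.2, 2: 0.5}, 2: {0: 0.8, 1: 0.5}}
print(round(mutuality(p), 6))  # 1.0
```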
Okay, but who is learning from whom?
What drives the tendency for mutual learning?
Can we be more specific? How about tracking the evolution of the imitation probabilities?
 Very complex: 20 agents, each with probabilities of observing 19 other agents in each period (380 probabilities per period over a horizon of 20,000 periods).
Visualization of the pij(t)s
[Figure: 20 × 20 grid of pij(t), with i (the observer) on one axis and j (the observed) on the other]
► Four groups
 Group 1 = {1, 2, 3, 4, 5}
 Group 2 = {6, 7, 8, 9, 10}
 Group 3 = {11, 12, 13, 14, 15}
 Group 4 = {16, 17, 18, 19, 20}
► At t = 0, everyone observes everyone else with equal probability.
Steady-state pij(t)s averaged over 20 replications
► Learning is mutual and more active among agents sharing similar goals.
► Intra-group mutual learning is more intensive when the groups are more segregated and isolated from one another.
What about the aggregate performance of the social system? How is it affected by the various parameters?
Comparative Dynamics
► How is performance influenced by
 the reliability of the communication technology supporting the network – μim?
 the productivity of agents engaging in innovation – μin?
► How is the above relationship affected by features of the environment?
 Turbulence in the task environments – σ and ρ
 Inter-agent goal diversity – X and κ
► Focus on the steady state (t = 15,000 to t = 20,000) – the long-run properties of the system
σ = 0.75; ρ = 4; κ = 16; X = 16
The rate of innovation declines in μim
σ = 0.75; ρ = 4; κ = 16; X = 16
Performance of the social system rises in μim
► Is this universal?
► Does the steady-state performance always rise in the reliability of the communication technology?
► Try:
 Lower μin to .25
►Agents are less productive at innovation
 Lower κ to 4
►The degree of intra-group goal congruence is higher
μin = .25; σ = 0.75; ρ = 4; κ = 4; X = 16
► An improvement in communication technology can lead to a deterioration in performance.
► Is this a general property? Do we observe this for different parameter configurations?
 YES
Property 1
► When the reliability of the network is sufficiently low, steady-state performance is increasing in reliability.
► When the reliability of the network is sufficiently high AND
 the task environment is sufficiently volatile (σ low, ρ high),
 goal diversity among groups is sufficiently great (X high), and
 intra-group goal diversity is sufficiently low (κ low),
performance decreases in reliability.
More reliable network (higher μim)
→ Imitation substituted for innovation
→ Faster diffusion of ideas
→ Formation of a more structured network
→ Agents in a network (and in a group) become more alike (homogenization of the networks)
→ LACK OF DIVERSITY IN THE SYSTEM!
► More stable environment
 Lack of diversity is not a problem
 Faster social learning is more critical (speeding up convergence to local optima)
► Volatile environment
 Agents must continually modify their methods to solve new problems.
 As the network becomes more reliable, imitation crowds out innovation, and the ensuing lack of diversity makes it less likely that there will be useful ideas that serve the new environment.
► What about the role of inter-group diversity (X)?
 Low X → the optima of agents from different groups overlap to a greater extent → social learning is more global (agents observe agents in different groups more frequently) → inter-agent diversity survives over time → superior adaptability to a changing environment
Summary
► A simple improvement in the reliability of the network may harm long-run performance.
► Enhanced communication technology can induce too much imitation → loss of the diversity needed to respond to future environments
► An individual’s capacity to carry on independent innovation is crucial in supplying the fuel for the effective operation of the social network.
Individual Choices and Social Outcomes
► At the individual level, innovation and imitation are substitutes.
 Agents can choose to allocate effort to discovering new ideas or to observing the ideas of others.
► At the social level, innovation and imitation are complements.
 Innovation provides a pool of ideas that imitation can then spread.
► Reliability of communication can reduce overall performance by generating excessive uniformity.
What if agents have heterogeneous learning capabilities?
Questions
► What if agents are heterogeneous in their capacities for innovating and imitating?
 Some agents are more creative and more productive in generating ideas
 Some agents are more sociable or more capable of understanding the ideas of others
► To what extent does this heterogeneity lead to more order?
 Do agents form links with those who are most innovative or those who are most imitative?
► Underlying social system: a population of autonomous agents with heterogeneous skills and capabilities
 Some more skilled as innovators
 Some more skilled as imitators
► Example: the Scientific Revolution
 Inventors and innovators: Copernicus, Kepler, Galileo, …, Newton (sources of original ideas)
 Imitators/connectors: Mersenne, Hartlib, …, Oldenburg (hubs of the scientific correspondence networks)
Research Objectives
► Observe the network architecture that evolves in the search process
 Who learns from whom?
 What parameters affect the network architecture, and how?
► What is the socially optimal combination of innovators and imitators?
The Model
► Social system: L individuals
► There exists a common goal vector (binary) of Hd dimensions for the population in t: g(t)
► g(t) may fluctuate over time
 Each period, g(t) stays the same with probability σ and changes with probability 1 – σ.
 If it changes, the new goal vector is an i.i.d. selection from the set of points that lie within Hamming distance ρ of the old goal vector.
[Figure: agents z1(t), …, z4(t) searching for the common goal g(t), with performance measured by D(zi(t), g(t))]
Agent Heterogeneity
► Baseline case: L = 50
► Three types of agents
 Type N – Super-Innovators: (μiin, μiim) = (1, 0)
 Type M – Super-Imitators: (μiin, μiim) = (0, 1)
 Type R – Regular Agents: (μiin, μiim) = (.25, .25)
► Assume |R| = 40
► |N| + |M| = 10
Initial Conditions at t = 0
► All agents start with the same methods vector: z1(0) = z2(0) = … = zL(0)
► All agents have equal probabilities of innovation vs. imitation: qi(0) = 1 – qi(0) = .5 for all i
► All agents have equal probabilities of observing all other agents in the population: pij(0) = 1/(L–1) for all j ≠ i, for all i
Steady-State Measures
► T = 15,000
► Steady-state performance of agent i
 Average of πi(t) over the last 5,000 periods
► Steady-state imitation probabilities
 Average of {pij(t)}j≠i over the last 5,000 periods
► 100 replications
► All measures are averages over the 100 replications
► How do these measures respond to changes in the values of:
 |N|:|M| – the composition of the super-types in the population
 σ – the frequency of goal change
 ρ – the magnitude of goal change
pij(t): the probability of i observing j
► Let the agents be distributed as follows:
 N = {1, 2, 3, 4}
 M = {5, 6, 7, 8, 9, 10}
 R = {11, 12, 13, …, 48, 49, 50}
[Figures: snapshots of pij(t) at t = 1, 501, 1001, 3001, 6001, 10001, 14901]
Steady-state pij with σ = .8, ρ = 1, and |N|:|M| = 4:6 (average over 100 replications)
Network Structure
Chain learning: Innovators generate ideas → Imitators learn from Innovators → Regular Agents learn from Imitators.
Group-Mean Learning Probabilities
► fgg’ = the probability with which an average agent in group g learns from an average agent in group g’
► Network structure dictated by (fMN, fMM, fMR) and (fRN, fRM, fRR)
 fNN fNM fNR
 fMN fMM fMR
 fRN fRM fRR
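Computing fgg’ from the full probability matrix can be sketched by averaging pij over ordered pairs drawn from the two groups (the paper’s exact averaging convention may differ; all names are mine):

```python
def group_mean_probs(p, groups):
    """f[(g, g2)]: mean of p_ij over agents i in group g, j in group g2, j != i.
    p is a dict of dicts p[i][j]; groups maps a label to a set of agent ids."""
    f = {}
    for g, members in groups.items():
        for g2, members2 in groups.items():
            vals = [p[i].get(j, 0.0)
                    for i in members for j in members2 if j != i]
            f[(g, g2)] = sum(vals) / len(vals) if vals else 0.0
    return f

# Toy example: one Innovator (1) and two Imitators (2, 3) who both watch 1
p = {1: {2: 0.5, 3: 0.5}, 2: {1: 0.9, 3: 0.1}, 3: {1: 0.9, 2: 0.1}}
f = group_mean_probs(p, {"N": {1}, "M": {2, 3}})
print(f[("M", "N")], f[("M", "M")])  # 0.9 0.1
```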
Property 1
► Chain Learning
 fMN > fMM >> fMR
 fRM > fRN >> fRR
► A Regular Agent learns through Imitators rather than directly from Innovators.
 Imitators act as the integrators of knowledge
 It is more productive for a Regular Agent to observe one Imitator than multiple Innovators
Property 2
► |N| plentiful relative to |M|
 An increase in environmental volatility (lower σ and higher ρ) induces Imitators and Regular Agents to connect more with Innovators rather than with Imitators.
► More stability → Imitators as the central hub
► |N| scarce relative to |M|
 An increase in volatility (lower σ and higher ρ) induces Regular Agents to connect more with Innovators but induces Imitators to connect less with Innovators.
► Regular Agents engage in more innovation (a useful source of ideas for Imitators).
► More stability → Chain learning
fMN – fMM as a function of the |N|:|M| ratio for varying values of σ
fRM – fRN as a function of the |N|:|M| ratio for varying values of σ
Flow of Knowledge – The Baseline Case
N → M → R
Connected Innovators and Innovative Imitators
► Assume:
 (μiin, μiim) = (.75, .25) for all i in N
 (μiin, μiim) = (.25, .75) for all i in M
 (μiin, μiim) = (.25, .25) for all i in R
► Do Innovators learn from each other?
► What does the flow of knowledge look like?
Flow of Knowledge
N↔MR
Socially Optimal Mix of Agent Types
► When is social performance at its maximum?
► Optimal mix of super-types
 All innovators? (10 Newtons?)
 All imitators? (10 copycats?)
 A mixture?
Aggregate Performance as a Function of Super-Type Composition
► A heterogeneous mixture is optimal!
► 5 Innovators and 5 Imitators are preferable to 10 Innovators!