Network Games with Incomplete Information∗

Joan de Martí†
Yves Zenou‡
February 9, 2015
Abstract
We consider a network game with strategic complementarities where the individual
reward or the strength of interactions is only partially known by the agents. Players
receive different correlated signals and they make inferences about other players’ information. We demonstrate that there exists a unique Bayesian-Nash equilibrium. We
characterize the equilibrium by disentangling the information effects from the network
effects and show that the equilibrium effort of each agent is a weighted combination of
different Katz-Bonacich centralities where the decay factors are the eigenvalues of the
information matrix while the weights are its eigenvectors. We then study the impact
of incomplete information on a network policy whose aim is to target the most relevant
agents in the network (key players). Compared to the complete information case, we
show that the optimal targeting may be very different.
Keywords: social networks, strategic complementarities, Bayesian games, key player
policy.
JEL Classification: C72, D82, D85.
∗ We would like to thank Itay Fainmesser, Theodoros Rapanos, Marc Sommer and Junjie Zhou for very helpful comments that helped improve the paper. Yves Zenou gratefully acknowledges the financial support from the French National Research Agency grant ANR-13-JSH1-0009-01.
† Universitat Pompeu Fabra and Barcelona GSE. E-mail: [email protected].
‡ Corresponding author: Stockholm University, IFN, Sweden, and CEPR. E-mail: [email protected].
1 Introduction
Social networks are important in numerous facets of our lives. For example, the decision of an
agent to buy a new product, attend a meeting, commit a crime, or find a job is often influenced
by the choices of his or her friends and acquaintances. The emerging empirical evidence on
these issues motivates the theoretical study of network effects. For example, job offers can
be obtained from direct and indirect acquaintances through word-of-mouth communication.
Also, risk-sharing devices and cooperation usually rely on family and friendship ties. Spread
of diseases, such as AIDS infection, also strongly depends on the geometry of social contacts.
If the web of connections is dense, we can expect higher infection rates.
Network analysis is a growing field within economics¹ because it can analyze the situations described above and provides interesting predictions in terms of equilibrium behavior. A recent branch of the network literature has focused on how network structure influences individual behaviors. This is modeled by what are sometimes referred to as “games on networks” or “network games”.² The theory of “games on networks” considers a game with n agents (that can be individuals, firms, regions, countries, etc.) who are embedded in a network. Agents choose actions (e.g., buying products, choosing levels of education, engaging in
criminal activities, investing in R&D, etc.) to maximize their payoffs, given how they expect
others in their network to behave. Thus, agents implicitly take into account interdependencies generated by the social network structure. An important paper in this literature is
that of Ballester et al. (2006). They compute the Nash equilibrium of a network game with
strategic complementarities when agents choose their efforts simultaneously. In their setup,
restricted to linear-quadratic utility functions, they establish that, for any possible network,
the peer effects game has a unique Nash equilibrium where each agent's effort is proportional
to her Katz-Bonacich centrality measure. This is a measure introduced by Katz (1953) and
Bonacich (1987), which counts all paths starting from an agent but gives a smaller value to
connections that are farther away.
While settings with a fixed network are widely applicable, there are also many applications where players choose actions without fully knowing with whom they will interact. For
example, learning a language, investing in education, investing in a software program, and so
forth. These can be better modeled using the machinery of incomplete information games.
This is what we do in this paper.
To be more precise, we consider a model similar to that of Ballester et al. (2006) but
¹ For overviews of the network literature, see Goyal (2007), Jackson (2008, 2011), Ioannides (2012), Jackson et al. (2014) and Zenou (2015a).
² For a recent overview of this literature, see Jackson and Zenou (2015).
where the individual reward or the strength of interactions is partially known. In other
words, this is a model where the state of the world (i.e. the marginal return of effort or the
synergy parameter) is common to all agents but only partially known by them. We assume
that there is no communication between the players and that the network does not affect
the possible channels of communication between them.
We start with a simple model with imperfect information on the marginal return of effort
and where there are two states of the world. All individuals share a common prior and
each individual receives a private signal, which is partially informative. Using the same
condition as in the perfect information case, we show that there exists a unique Bayesian-Nash equilibrium. We can also characterize the Nash equilibrium of this game for each agent
and for each signal received by disentangling the network effects from the information effects
by showing that each effort is a weighted combination of two Katz-Bonacich centralities
where the decay factors are the eigenvalues of the information matrix times the synergy
parameter while the weights involve conditional probabilities, which include beliefs about
the states of the world given the signals received by all agents. We then extend our model to
any finite number of states of the world and signals. We demonstrate that there also exists a
unique Bayesian-Nash equilibrium and give a complete characterization of equilibrium efforts
as a function of weighted Katz-Bonacich centralities and information aspects. We also derive
similar results for the case when the strength of interactions is partially known.
We then study a policy analyzed by Ballester et al. (2006, 2010). The aim is to solve the
planner’s problem that consists in finding and getting rid of the key player, i.e., the agent
who, once removed, leads to the highest reduction in aggregate activity.³ If the planner has
incomplete information about the marginal return of effort (or the strength of interactions),
then we show that the key player may be different from the one proposed in the perfect information case. This difference is determined by a ratio that captures all the (imperfect)
information that the agents have, including the priors of the agents and the planner and the
posteriors of the agents.
The paper unfolds as follows. In the next section, we relate our paper to the network
literature with incomplete information. In Section 3, we characterize the equilibrium in the
model with perfect information and show under which condition there exists a unique Nash
equilibrium. Section 4 deals with a simple model with only two states of the world and
two signals when the marginal return of effort is partially known. In Section 5, we analyze
the general model when there is a finite number of states of the world and signals for the
case when the marginal return of effort is unknown. In Section 6, we discuss the case when
³ See Zhou and Chen (2015), who introduce the concept of key leader, which is related to that of key player.
the information matrix is not diagonalizable. Section 7 analyzes the key player policy when
information is complete and incomplete. Finally, Section 8 concludes. Throughout the paper,
we will use the same two examples to illustrate all our results. The main assumptions of
the model and their implications are stated in Appendix A.1. In Appendix A.2, we analyze
the general model when there is a finite number of states of the world and signals for the
case when the strength of interactions is unknown. In Appendix A.3, we provide some
results for the Kronecker product used in the paper. Appendix A.4 deals with the case when the
information matrix is not diagonalizable and where we resort to the Jordan decomposition.
The proofs of all lemmas and propositions in the main text can be found in Appendix A.5.
2 Related literature
Our paper is a contribution to the literature on “games on networks” or “network games”
(Jackson and Zenou, 2015). We consider a game with strategic complements where an
increase in the actions of other players leads a given player’s higher actions to have relatively
higher payoffs compared to that player’s lower actions. In this framework, we consider a
game with imperfect information on either the marginal payoff of effort or the strength
of interaction. There is a relatively small literature which looks at the issue of imperfect
information in this class of games.
Galeotti et al. (2010) and Jackson and Yariv (2007) are related to our paper, but they study a very different dimension of uncertainty, i.e., uncertainty about the network structure. They show that an incomplete information setting can actually simplify the analysis of games on networks. In particular, results can be derived showing how agents' actions vary with their degree.⁴
There is also an interesting literature on learning in networks. Bala and Goyal (1998)
were among the first to study this issue and show that each agent in a connected network
will obtain the same long-run utility and that, if the network is large enough and there are
enough agents who are optimistic about each action spread throughout the network, then the
probability that the society will converge to the best overall action can be made arbitrarily
close to 1. More recently, Acemoglu et al. (2011) study a model where the state of the
world is unknown and affects the action and the utility function of each agent. Each agent
forms beliefs about this state from a private signal and from her observation of the actions
of other agents. As in our model, agents can update their beliefs in a Bayesian way. They
show that when private beliefs are unbounded (meaning that the implied likelihood ratios
⁴ See Jackson and Yariv (2011) for an overview of this literature.
are unbounded), there will be asymptotic learning as long as there is some minimal amount
of “expansion in observations”.⁵
Compared to the literature on incomplete information in networks, our paper is the first to consider a model with a common unknown state of the world (i.e. the marginal return of effort or the synergy parameter), which is partially known by the agents and where there is neither communication nor learning.⁶ We first show that there exists a unique Bayesian-Nash equilibrium. We are also able to completely characterize this unique equilibrium. This characterization is such that each equilibrium effort is a combination of different Katz-Bonacich centralities, where the decay factors are the corresponding eigenvalues of the information matrix while the weights are the elements of matrices that have eigenvectors as columns. We are able to do so because we can diagonalize both the adjacency matrix of the network, which leads to the Katz-Bonacich centralities, and the information matrix. We believe that this is one of our main results and has not been done before.
3 The complete information case

3.1 The model
The network Let I := {1, …, n} denote the set of players, where n > 1, connected by a network g. We keep track of social connections in this network by its symmetric adjacency matrix G = [g_ij], where g_ij = g_ji = 1 if i and j are linked to each other, and g_ij = 0 otherwise. We also set g_ii = 0. The reference group of individual i is the set of i's neighbors, given by N_i = {j ≠ i | g_ij = 1}. The cardinality of the set N_i is d_i = Σ_{j=1}^n g_ij, which is known as the degree of i in graph theory.
Payoffs Each agent i takes an action x_i ∈ [0, +∞) that maximizes the following quadratic utility function:

  u_i(x_i, x_{−i}; G) = α x_i − (1/2) x_i² + λ Σ_{j=1}^n g_ij x_i x_j        (1)

where α is the marginal return of effort and λ is the strength of strategic interactions (synergy parameter).
⁵ For overviews on these issues, see Jackson (2008, 2011) and Goyal (2011).
⁶ Bergemann and Morris (2013) propose an interesting paper on these issues but without an explicit network analysis. Blume et al. (2015) develop a network model with incomplete information but mainly focus on identification issues.
The first two terms of the utility function correspond to a standard cost-benefit analysis without the influence of others. In other words, if individual i were isolated (not connected in a network), she would choose the optimal action x_i* = α, independent of what the other agents choose. The last term in (1) reflects the network effects, i.e. the impact of the aggregate effort of i's neighbors on i's utility. As agents may have different locations in the network and their friends may choose different effort levels, the term λ Σ_{j=1}^n g_ij x_i x_j is heterogeneous in i. The coefficient λ captures the local-aggregate endogenous peer effect. More precisely, bilateral influences for individuals i, j (i ≠ j) are captured by the following cross derivatives:

  ∂²u_i(x_i, x_{−i}; G) / ∂x_i ∂x_j = λ g_ij        (2)

As we assume λ > 0, if i and j are linked, the cross derivative is positive and reflects strategic complementarity in efforts. That is, if j increases her effort, then the utility of i will be higher if i also increases her effort. Furthermore, the utility of i increases with the number of friends.

In equilibrium, each agent maximizes her utility (1). From the first-order condition, we obtain the following best-reply function for individual i:

  x_i* = α + λ Σ_{j=1}^n g_ij x_j*        (3)
The Katz-Bonacich network centrality measure Let G^k be the kth power of G, with coefficients g_ij^[k], where k is some nonnegative integer. The matrix G^k keeps track of the indirect connections in the network: g_ij^[k] ≥ 0 measures the number of walks of length k ≥ 1 in g from i to j.⁷ In particular, G⁰ = I_n, where I_n is the n × n identity matrix.

Denote by μ_max(G) the largest eigenvalue of G.

Definition 1 Consider a network g with adjacency n-square matrix G and a scalar λ ≥ 0 such that λ < 1/μ_max(G).
(i) Given a vector u ∈ R₊ⁿ, the vector of u-weighted Katz-Bonacich centralities of parameter λ in g is:

  b_u(λ, G) := Σ_{k=0}^{+∞} λ^k G^k u = (I − λG)⁻¹ u        (4)

⁷ A walk of length k from i to j is a sequence ⟨i_0, …, i_k⟩ of players such that i_0 = i, i_k = j, i_p ≠ i_{p+1}, and g_{i_p i_{p+1}} > 0, for all 0 ≤ p ≤ k − 1, that is, players i_p and i_{p+1} are directly linked in g. In fact, g_ij^[k] accounts for the total weight of all walks of length k from i to j. When the network is un-weighted, that is, G is a (0,1)-matrix, g_ij^[k] is simply the number of paths of length k from i to j.
(ii) If u = 1_n, where 1_n is the n-dimensional vector of ones, then the unweighted Katz-Bonacich centrality of parameter λ in g is:⁸

  b(λ, G) := Σ_{k=0}^{+∞} λ^k G^k 1_n = (I − λG)⁻¹ 1_n        (5)

If we consider the unweighted Katz-Bonacich centrality of node i (defined by (5)), i.e. b_i(λ, G), it counts the total number of walks in g starting from i, discounted by distance. By definition, b_i(λ, G) ≥ 1, with equality when λ = 0. The u-weighted Katz-Bonacich centrality of node i (defined by (4)), i.e. b_{u,i}(λ, G), has a similar interpretation with the additional fact that the walks have to be weighted by the vector u.
We have a first result due to Ballester et al. (2006).

Proposition 1 If α > 0 and 0 ≤ λ < 1/μ_max(G), then the network game with payoffs (1) has a unique interior Nash equilibrium in pure strategies given by

  x_i* = α b_i(λ, G)        (6)

The equilibrium Katz-Bonacich centrality measure b(λ, G) is thus the relevant network characteristic that shapes equilibrium behavior. This measure of centrality reflects both the direct and the indirect network links stemming from each agent.
To understand why the above characterization in terms of the largest eigenvalue μ_max(G) works, and to connect the analysis to what follows below, we now provide a characterization of the solution using the fact that the adjacency matrix G is diagonalizable. The system that characterizes the equilibrium of the game (6) is:

  x* = α (I − λG)⁻¹ 1_n

To solve this system we are going to diagonalize G, which is assumed to be symmetric and thus diagonalizable. We have that G = C D_G C⁻¹, where D_G = diag(μ_1, …, μ_n) is the n × n diagonal matrix whose entries μ_1, μ_2, …, μ_n are the eigenvalues of the matrix G. The Neumann series Σ_{k≥0} λ^k (D_G)^k converges if and only if λ < 1/μ_max(G). If it converges, then Σ_{k=0}^{+∞} λ^k (D_G)^k = (I − λD_G)⁻¹. In such a case, we have that

  (I − λG)⁻¹ = Σ_{k=0}^{+∞} λ^k G^k = C [ Σ_{k=0}^{+∞} λ^k (D_G)^k ] C⁻¹

Observe that the “if and only if” condition is due to the fact that the diagonal entries of Σ_{k≥0} λ^k (D_G)^k are power series with rates equal to λ μ_i(G), and all these power series converge if and only if λ μ_max(G) < 1, which is equivalent to the condition written in Proposition 1. These terms are obviously very easy to compute. Indeed, they are equal to 1/(1 − λ μ_i(G)) for each i ∈ {1, …, n}. The off-diagonal elements of Σ_{k=0}^{+∞} λ^k (D_G)^k are all equal to 0.

⁸ To avoid cumbersome notation, when u = 1_n, the unweighted Katz-Bonacich centrality vector is denoted by b(λ, G) and not b_{1_n}(λ, G), and the individual one by b_i(λ, G) and not b_{1_n,i}(λ, G). For any other weighted Katz-Bonacich centralities, we will use the notations b_u(λ, G) and b_{u,i}(λ, G).
3.2 Example 1
Consider the network g in Figure 1 with three players, where agent 1 holds a central position whereas agents 2 and 3 are peripheral.
Figure 1: Star network
The adjacency matrix for this network is the following:⁹

        ⎡ 0 1 1 ⎤
  G^s = ⎢ 1 0 0 ⎥
        ⎣ 1 0 0 ⎦
Assume that u = 1₃. We can now compute the players' centrality measures. We obtain:¹⁰

  b_1(λ, G^s) = Σ_{k=0}^{+∞} (λ^{2k} 2^k + λ^{2k+1} 2^{k+1}) = (1 + 2λ)/(1 − 2λ²)

  b_2(λ, G^s) = b_3(λ, G^s) = Σ_{k=0}^{+∞} (λ^{2k} 2^k + λ^{2k+1} 2^k) = (1 + λ)/(1 − 2λ²)

⁹ The superscript s refers to the star network.
¹⁰ Note that these centrality measures are only well-defined when λ < 1/√2 (condition on the largest eigenvalue).
According to intuition, player 1 has the highest centrality measure. All centrality measures b_i increase with λ, and so does the ratio b_1/b_2 of agent 1's centrality with respect to any other agent, as the contribution of indirect walks to centrality increases with λ. We obtain the following efforts at equilibrium (Proposition 1):

  x_1* = α (1 + 2λ)/(1 − 2λ²)        (7)

and

  x_2* = x_3* = α (1 + λ)/(1 − 2λ²)        (8)

As expected, the effort exerted by player 1, the most central player, is the highest one.
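The closed forms above can be checked in a few lines; the following sketch (ours, with an arbitrary admissible λ) verifies that the direct inverse reproduces the centralities in (7)-(8).

```python
import numpy as np

# Star network of Example 1: agent 1 is the center.
G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
lam = 0.3                               # must satisfy lam < 1/sqrt(2)

# b(lam, G^s) = (I - lam*G)^(-1) 1_3.
b = np.linalg.solve(np.eye(3) - lam * G, np.ones(3))

# Closed forms from the text.
b1 = (1 + 2 * lam) / (1 - 2 * lam ** 2)
b2 = (1 + lam) / (1 - 2 * lam ** 2)

assert np.allclose(b, [b1, b2, b2])     # center is most central
```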
4 The incomplete information case: A simple model when α is unknown
We develop a simple model with common values and private information where there are
only two states of the world and two signals.
4.1 The model
Assume that the marginal return of effort α in the payoff function (1) is common to all agents but only partially known by the agents. Agents know, however, the exact value of the synergy parameter λ.¹¹

Information We assume that there are two states of the world, so that the parameter α can take only two values: α_l < α_h. All individuals share a common prior:

  P({α = α_h}) = p ∈ (0, 1)

Each individual i receives a private signal s_i ∈ {l, h} such that

  P({s_i = l} | {α = α_l}) = P({s_i = h} | {α = α_h}) = q ≥ 1/2

where {s_i = l} and {s_i = h} denote, respectively, the event that agent i has received a signal l and h. Assume that there is no communication between the players and that the network does not affect the possible channels of communication between them. Denote by

  P({s_i = l}) = P({s_i = l} | {α = α_l}) P({α = α_l}) + P({s_i = l} | {α = α_h}) P({α = α_h})

and

  P({s_i = h}) = P({s_i = h} | {α = α_l}) P({α = α_l}) + P({s_i = h} | {α = α_h}) P({α = α_h})

¹¹ We consider the case of unknown λ in Appendix A.2.
Then, if agent i receives the signal s_i = l, using Bayes' rule, we have:

  P({α = α_l} | {s_i = l}) = P({s_i = l} | {α = α_l}) P({α = α_l}) / P({s_i = l}) = q(1 − p) / [q(1 − p) + (1 − q)p]        (9)

and

  P({α = α_h} | {s_i = l}) = 1 − P({α = α_l} | {s_i = l}) = (1 − q)p / [q(1 − p) + (1 − q)p]        (10)

Similarly, if agent i receives the signal h, then using Bayes' rule, we have:

  P({α = α_l} | {s_i = h}) = P({s_i = h} | {α = α_l}) P({α = α_l}) / P({s_i = h}) = (1 − q)(1 − p) / [(1 − q)(1 − p) + qp]        (11)

and

  P({α = α_h} | {s_i = h}) = 1 − P({α = α_l} | {s_i = h}) = qp / [(1 − q)(1 − p) + qp]        (12)
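The posterior formulas (9)-(12) are plain Bayes' rule and can be sketched with exact rational arithmetic; the prior and precision below are illustrative, not values from the paper.

```python
from fractions import Fraction

# Illustrative prior p = P({alpha = alpha_h}) and signal precision q.
p, q = Fraction(1, 3), Fraction(3, 4)

# Posteriors after signal l, equations (9)-(10).
post_l_low = q * (1 - p) / (q * (1 - p) + (1 - q) * p)
post_l_high = 1 - post_l_low

# Posteriors after signal h, equations (11)-(12).
post_h_low = (1 - q) * (1 - p) / ((1 - q) * (1 - p) + q * p)
post_h_high = 1 - post_h_low

# A signal moves the posterior toward the matching state:
# l raises the probability of the low state above its prior 1 - p,
# h raises the probability of the high state above its prior p.
assert post_l_low > 1 - p and post_h_high > p
```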
The Bayesian Game Given that there is incomplete information about the state of the world α and about others' information, this is a Bayesian game. Agent i has to choose an action x_i(s_i) ≥ 0 for each signal s_i ∈ {l, h}. The expected utility of agent i can be written as:

  E[u_i | s_i] = E[α | s_i] x_i(s_i) − (1/2) [x_i(s_i)]² + λ x_i(s_i) Σ_{j=1}^n g_ij E[x_j | s_i]
4.2 Equilibrium Analysis
The first-order conditions are given by

  ∂E[u_i | s_i]/∂x_i = E[α | s_i] − x_i*(s_i) + λ Σ_{j=1}^n g_ij E[x_j* | s_i] = 0,   ∀i ∈ I

Hence, the best reply of agent i is given by

  x_i*(s_i) = E[α | s_i] + λ Σ_{j=1}^n g_ij E[x_j* | s_i]        (13)
Using (9) and (10), we easily obtain that

  α̂_l := E[α | {s_i = l}] = P({α = α_l} | {s_i = l}) α_l + P({α = α_h} | {s_i = l}) α_h        (14)
       = [q(1 − p) α_l + (1 − q)p α_h] / [q(1 − p) + (1 − q)p]

Similarly, using (11) and (12), we get

  α̂_h := E[α | {s_i = h}] = P({α = α_l} | {s_i = h}) α_l + P({α = α_h} | {s_i = h}) α_h        (15)
       = [(1 − q)(1 − p) α_l + qp α_h] / [(1 − q)(1 − p) + qp]
Denote by x_i := x_i({s_i = l}) := x_i(l) the action taken by agent i when receiving signal l, and x̄_i := x_i({s_i = h}) := x_i(h) the action taken by agent i when receiving signal h. Using (13), for all i ∈ I, the equilibrium actions are given by:

  x_i* = α̂_l + λ Σ_{j=1}^n g_ij E[x_j* | {s_i = l}]
       = α̂_l + λ Σ_{j=1}^n g_ij [P({s_j = l} | {s_i = l}) x_j* + P({s_j = h} | {s_i = l}) x̄_j*]

and

  x̄_i* = α̂_h + λ Σ_{j=1}^n g_ij E[x_j* | {s_i = h}]
       = α̂_h + λ Σ_{j=1}^n g_ij [P({s_j = l} | {s_i = h}) x_j* + P({s_j = h} | {s_i = h}) x̄_j*]
Let us adopt the following notation: γ_ll := P({s_j = l} | {s_i = l}), 1 − γ_ll = P({s_j = h} | {s_i = l}), γ_hh := P({s_j = h} | {s_i = h}), and 1 − γ_hh = P({s_j = l} | {s_i = h}). Then, the optimal actions can be written as:

  x_i* = α̂_l + λ Σ_{j=1}^n g_ij [γ_ll x_j* + (1 − γ_ll) x̄_j*]        (16)

and

  x̄_i* = α̂_h + λ Σ_{j=1}^n g_ij [(1 − γ_hh) x_j* + γ_hh x̄_j*]        (17)

We can easily calculate γ_ll and γ_hh as follows:
Lemma 1 We have:

  γ_ll = [(1 − p)q² + p(1 − q)²] / [q(1 − p) + (1 − q)p]        (18)

and

  γ_hh = [(1 − p)(1 − q)² + pq²] / [qp + (1 − q)(1 − p)]        (19)
Let us introduce the following notation: x := (x_1, …, x_n)^T and x̄ := (x̄_1, …, x̄_n)^T are n-dimensional vectors,

  x* := ⎛ x ⎞        and        α̂ := ⎛ α̂_l 1_n ⎞
        ⎝ x̄ ⎠                        ⎝ α̂_h 1_n ⎠

are 2n-dimensional vectors, and

  Ω := ⎛ γ_ll G          (1 − γ_ll) G ⎞
       ⎝ (1 − γ_hh) G    γ_hh G       ⎠

is a 2n × 2n matrix. Then the 2n equations of the best-reply functions (16) and (17) can be written in matrix form as follows:

  x* = α̂ + λ Ω x*

If I_{2n} − λΩ is invertible, then we obtain

  ⎛ x* ⎞                     ⎛ α̂_l 1_n ⎞
  ⎝ x̄* ⎠ = [I_{2n} − λΩ]⁻¹  ⎝ α̂_h 1_n ⎠        (20)

or, equivalently,

  x* = [I_{2n} − λΩ]⁻¹ α̂        (21)
We can rewrite the equilibrium conditions (20) as follows:

  ⎛ x* ⎞                          ⎛ α̂_l 1_n ⎞
  ⎝ x̄* ⎠ = [I_{2n} − λ Γ ⊗ G]⁻¹  ⎝ α̂_h 1_n ⎠        (22)

where

  Γ := ⎛ γ_ll        1 − γ_ll ⎞
       ⎝ 1 − γ_hh    γ_hh     ⎠

is a stochastic matrix and Γ ⊗ G is the Kronecker product of Γ and G (see Appendix A.3 and, in particular, Definition 4 of the Kronecker product). Γ is called the information matrix since it keeps track of all the information received by the agents about the states of the world, while G, the adjacency matrix, is the “network” matrix since it keeps track of the position of each individual in the network. Our main result in this section can be stated as follows:
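The passage from (20) to (22) is simply the observation that the block matrix Ω coincides with the Kronecker product Γ ⊗ G. A numerical sketch (ours; the star network, the conditional probabilities and the conditional expectations of α are all illustrative):

```python
import numpy as np

# Star network and illustrative conditional signal probabilities.
G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
gll, ghh = 0.7, 0.6
Gamma = np.array([[gll, 1 - gll],
                  [1 - ghh, ghh]])             # the information matrix

# Omega assembled block by block, as in the text ...
Omega = np.block([[gll * G, (1 - gll) * G],
                  [(1 - ghh) * G, ghh * G]])

# ... equals the Kronecker product of Gamma and G.
assert np.allclose(Omega, np.kron(Gamma, G))

# Solving (22) with illustrative conditional expectations of alpha.
lam, al_hat, ah_hat = 0.3, 1.0, 2.0
rhs = np.concatenate([al_hat * np.ones(3), ah_hat * np.ones(3)])
x = np.linalg.solve(np.eye(6) - lam * np.kron(Gamma, G), rhs)
# First three entries: actions after signal l; last three: after signal h.
```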
Proposition 2 Consider the network game with payoffs (1) and unknown parameter α that can take only two values: 0 < α_l < α_h. Then, if 0 ≤ λ < 1/μ_max(G), there exists a unique interior Bayesian-Nash equilibrium in pure strategies given by

  x* = α̂ b(λ, G) − [(1 − γ_ll)/(2 − γ_ll − γ_hh)] (α̂_h − α̂_l) b((γ_ll + γ_hh − 1)λ, G)        (23)

  x̄* = α̂ b(λ, G) + [(1 − γ_hh)/(2 − γ_ll − γ_hh)] (α̂_h − α̂_l) b((γ_ll + γ_hh − 1)λ, G)        (24)

where

  α̂ ≡ [(1 − γ_hh) α̂_l + (1 − γ_ll) α̂_h] / (2 − γ_ll − γ_hh)        (25)

γ_ll and γ_hh are given by (18) and (19), and α̂_l and α̂_h by (14) and (15).
The following comments are in order. First, the condition for existence and uniqueness of a Bayesian-Nash equilibrium (i.e. 0 ≤ λ < 1/μ_max(G)) is exactly the same as the condition for the complete information case (see Proposition 1). This is due to the fact that the information matrix Γ is a stochastic matrix and its largest eigenvalue is thus equal to 1.

Second, we characterize the Nash equilibrium of this game for each agent and for each signal received by disentangling the network effects (captured by G) from the information effects (captured by Γ). We are able to do so because G is symmetric and Γ is of order 2 (i.e., it is a 2 × 2 matrix), and thus both are diagonalizable. We show that each effort is a combination of two Katz-Bonacich centralities where the decay factors are the eigenvalues of the information matrix Γ times the synergy parameter λ, while the weights are the conditional probabilities, which include beliefs about the states of the world given the signals received by all agents. To understand this result, observe that the diagonalization of G leads to the Katz-Bonacich centrality while the diagonalization of Γ leads to a matrix A with eigenvectors as columns. The different eigenvalues of Γ determine the number of different Katz-Bonacich centrality vectors (two here) and the discount (or decay) factor in each of them (1 × λ for the first Katz-Bonacich centrality and (γ_ll + γ_hh − 1) × λ for the second Katz-Bonacich centrality, where 1 and γ_ll + γ_hh − 1 are the two eigenvalues of Γ), and A and A⁻¹ characterize the weights (i.e. α̂ and [(1 − γ_ll)/(2 − γ_ll − γ_hh)] (α̂_h − α̂_l)) of the different Katz-Bonacich centrality vectors in equilibrium strategies.

Third, observe that γ_ll and γ_hh, which are measures of the informativeness of private signals, enter both in Γ, and therefore in the Kronecker product Γ ⊗ G, and in the vector α̂. So, when γ_ll is close to 1 and γ_hh is close to 1, which means that the signals are very informative, the gap between both eigenvalues¹² (which is a measure of the entanglement of actions in both states) tends to 0. More generally, we should expect this to also be true in the case of m different possible states of the world (we will show it formally in Section 5.3 below), bearing resemblance to the analysis in Golub and Jackson (2010, 2012), who show that the second largest eigenvalue measures the speed of convergence of the DeGroot naive learning process, which in turn relates to the speed of convergence of the Markov process. In our case, if the powers of Γ stabilize very fast, we can approximate very well equilibrium actions in different states with equilibrium actions in the complete information game. Finally, note that if, for example, α̂_h − α̂_l → 0, meaning that both levels of α (i.e. states of the world) are very similar, then x* → α̂_l b(λ, G) and x̄* → α̂_h b(λ, G). In other words, we end up with an equilibrium similar to the one obtained in the perfect information case.

¹² The largest eigenvalue of Γ is always 1 while the other eigenvalue is γ_ll + γ_hh − 1, so the gap between these two eigenvalues is 2 − γ_ll − γ_hh.
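The closed forms (23)-(25) of Proposition 2 can be checked against a direct solve of (22). The sketch below is ours, with illustrative numbers on the star network; it confirms that the spectral decomposition of Γ delivers exactly the two decay factors λ and (γ_ll + γ_hh − 1)λ in the equilibrium strategies.

```python
import numpy as np

G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])                   # star network
n = 3
lam, gll, ghh = 0.3, 0.7, 0.6                  # illustrative parameters
al_hat, ah_hat = 1.0, 2.0                      # illustrative alpha_hat_l < alpha_hat_h

def b(decay, u):
    """u-weighted Katz-Bonacich vector (I - decay*G)^(-1) u."""
    return np.linalg.solve(np.eye(n) - decay * G, u)

# Direct solve of the 2n-dimensional system (22).
Gamma = np.array([[gll, 1 - gll], [1 - ghh, ghh]])
rhs = np.concatenate([al_hat * np.ones(n), ah_hat * np.ones(n)])
sol = np.linalg.solve(np.eye(2 * n) - lam * np.kron(Gamma, G), rhs)
x_low, x_high = sol[:n], sol[n:]

# Closed forms (23)-(25).
a_bar = ((1 - ghh) * al_hat + (1 - gll) * ah_hat) / (2 - gll - ghh)
second = b((gll + ghh - 1) * lam, np.ones(n)) * (ah_hat - al_hat)
x_low_cf = a_bar * b(lam, np.ones(n)) - (1 - gll) / (2 - gll - ghh) * second
x_high_cf = a_bar * b(lam, np.ones(n)) + (1 - ghh) / (2 - gll - ghh) * second

assert np.allclose(x_low, x_low_cf) and np.allclose(x_high, x_high_cf)
```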
5 The incomplete information case: A general model with a finite number of states and types

5.1 The model
The model of Section 4, with unknown α, two states of the world, and two possible signals, provides a good understanding of how the model works. Let us now consider a more general model with a finite number of states of the world and signals. We study a family of Bayesian games that share similar features and where there is incomplete information on either α or λ. Hence we analyze Bayesian games with common values and private information (the level of direct reward of own activity, denoted by α, and the level of pairwise strategic complementarities, denoted by λ).

As above, let I := {1, …, n} denote the set of players, where n > 1. For all i ∈ I, let s_i denote player i's signal, where s_i : Ω → S ⊂ R is a random variable defined on some probability space (Ω, A, P). Assume that S is finite with t := |S| > 1. Assume without loss of generality that S = {1, …, t}.¹³ Let (s_1, …, s_n)^T denote the random n-vector of the players' signals.

If s_1, …, s_n have the same distribution, then s_1, …, s_n are called identically distributed. Similarly, for all 2 ≤ k ≤ n, for all {i_1, …, i_k} ⊂ I, and for all {j_1, …, j_k} ⊂ I, if (s_{i_1}, …, s_{i_k})^T and (s_{j_1}, …, s_{j_k})^T have the same (multivariate) distribution, then (s_{i_1}, …, s_{i_k})^T and (s_{j_1}, …, s_{j_k})^T are called identically distributed.

¹³ This assumption is crucial for the definition of the players' information matrix (see Definition 3 and Remark 1).
A permutation σ of I is a bijection σ : I → I. Any permutation σ of I can be uniquely represented by a non-singular n × n matrix P_σ, the so-called permutation matrix of σ.

Definition 2 The (multivariate) distribution of (s_1, …, s_n)^T, or equivalently, the joint distribution of s_1, …, s_n, is called permutation invariant if, for all permutations σ of I, (s_{σ(1)}, …, s_{σ(n)})^T and (s_1, …, s_n)^T are identically distributed.

If the distribution of (s_1, …, s_n)^T is permutation invariant, permuting the components of (s_1, …, s_n)^T does not change its distribution. For example, if n = 3 and the (trivariate) distribution of (s_1, s_2, s_3)^T is permutation invariant, then (s_1, s_2, s_3)^T, (s_1, s_3, s_2)^T, (s_2, s_1, s_3)^T, (s_2, s_3, s_1)^T, (s_3, s_1, s_2)^T, and (s_3, s_2, s_1)^T are identically distributed.
From now on, we assume that the two following assumptions hold throughout the paper:

Assumption 1: For all i ∈ I and for all r ∈ S, P({s_i = r}) > 0.

Assumption 1 ensures that conditional probabilities of the form P({s_j = r'} | {s_i = r}) are well-defined.

Assumption 2: The distribution of (s_1, …, s_n)^T is permutation invariant.

In Appendix A.1, Sections A.1.1 and A.1.2, we derive some results showing the importance of each of these two assumptions. We show that the information matrix Γ is well-defined if Assumptions 1 and 3a (defined in Appendix A.1) are satisfied. A sufficient condition for Assumption 3a to hold true is that the distribution of the players' signals is permutation invariant (Assumption 2). Observe that, in Proposition 5 in Appendix A.1, we show that Assumptions 1 and 2 guarantee that the eigenvalues of the matrix Γ are all real. Observe also that Assumption 2 does not imply that Γ is symmetric. It just says that the identity of the player does not matter when calculating conditional probabilities. Below we give an example where Assumption 2 is satisfied and the matrix Γ is not symmetric.
Let us now go back to the model and let φ ∈ {α, λ} be the unknown common value. This parameter can take m different values (i.e. states of the world), φ ∈ Θ = {φ_1, …, φ_m}. Agents can be of t different types, which we denote by S = {1, …, t}. These types can be interpreted as private informative signals of the value of φ. Given a type r ∈ S and a state φ_k ∈ Θ, we denote p_rk := P({s_i = r} | {φ = φ_k}), r = 1, …, t, k = 1, …, m. The informational structure is therefore governed by the t × m matrix P = (p_rk), r ∈ S, k ∈ Θ, i.e.

        ⎛ p_11  ⋯  p_1m ⎞
  P  =  ⎜  ⋮    ⋱   ⋮   ⎟
 (t×m)  ⎝ p_t1  ⋯  p_tm ⎠
Next, we define the notion of the players' information matrix.

Definition 3 The players' information matrix, denoted by Γ = (γ_rr')_{(r,r') ∈ S²}, is a square matrix of order t = |S| that is given by

  ∀(r, r') ∈ S²,   γ_rr' = P({s_j = r'} | {s_i = r}) = P({s_j = r'} ∩ {s_i = r}) / P({s_i = r})        (26)

where (i, j) ∈ I² with i ≠ j is arbitrary.
Building on this definition, we can derive the information matrix $\Gamma = (\gamma_{ss'})_{(s,s') \in S^2}$, where $\gamma_{ss'}$ is defined by

$$\begin{aligned}
\gamma_{ss'} = P(\{s_j = s'\} \mid \{s_i = s\}) &= \sum_{\ell=1}^{m} P(\{s_j = s'\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s\}) \\
&= \sum_{\ell=1}^{m} \frac{P(\{s_i = s\} \mid \{s_j = s'\} \cap \{\theta = \theta_\ell\})\, P(\{s_j = s'\} \cap \{\theta = \theta_\ell\})}{P(\{s_i = s\})} \\
&= \sum_{\ell=1}^{m} \frac{P(\{s_i = s\} \mid \{s_j = s'\} \cap \{\theta = \theta_\ell\})\, P(\{s_j = s'\} \mid \{\theta = \theta_\ell\})\, P(\theta_\ell)}{P(\{s_i = s\})}
\end{aligned}$$

i.e. $\gamma_{ss'}$ is the conditional probability of the event $\{s_j = s'\}$ (that is, an agent $j$ receives the signal $s'$) given the event $\{s_i = s\}$ (that is, another agent $i$ receives the signal $s$). We obtain the following $K \times K$ matrix:

$$\underset{(K \times K)}{\Gamma} = \begin{pmatrix} \gamma_{11} & \cdots & \gamma_{1K} \\ \vdots & \ddots & \vdots \\ \gamma_{K1} & \cdots & \gamma_{KK} \end{pmatrix}$$
Agents know their own type but not the type of other agents. The strategy of each agent $i$ is the function $\sigma_i : S \longrightarrow [0, \infty)$ and the utility of each agent $i$ is given by (1).
Let us now give an example where the distribution of $(s_1, \dots, s_n)^T$ is permutation invariant (Assumption 2) but the matrix $\Gamma$ is not symmetric. It readily follows from Assumption 2 that $P(\{s_i = s'\} \cap \{s_j = s\}) = P(\{s_i = s\} \cap \{s_j = s'\})$. This implies that the probability mass function $P(\{s_i = s\} \cap \{s_j = s'\})$ of the (joint) distribution of $(s_i, s_j)$ can be represented by a symmetric matrix, as shown in the example below with three states of the world $a$, $b$ and $c$ and three signals:

              s_j = a    s_j = b    s_j = c    P(s_i = ·)
  s_i = a      0.10       0.10       0.05       0.25
  s_i = b      0.10       0.10       0.15       0.35
  s_i = c      0.05       0.15       0.20       0.40
  P(s_j = ·)   0.25       0.35       0.40       1
Notice that the marginal distributions for each $s_i$ are the same. Assumption 2 therefore implies that $P(\{s_i = s\}) = P(\{s_j = s\})$. Indeed,

$$P(\{s_i = s\}) = \sum_{s'} P(\{s_i = s\} \cap \{s_j = s'\}) = \sum_{s'} P(\{s_j = s\} \cap \{s_i = s'\}) = P(\{s_j = s\})$$
Observe, however, that this does not imply that the matrix of conditional probabilities $\Gamma$, or the information matrix, will be symmetric. Indeed, using Definition 3, it is straightforward to derive $\Gamma$ for the above example:

$$\gamma_{aa} = P(\{s_j = a\} \mid \{s_i = a\}) = \frac{P(\{s_j = a\} \cap \{s_i = a\})}{P(\{s_i = a\})} = \frac{0.10}{0.25} = 0.4$$
$$\gamma_{ab} = P(\{s_j = b\} \mid \{s_i = a\}) = \frac{P(\{s_j = b\} \cap \{s_i = a\})}{P(\{s_i = a\})} = \frac{0.10}{0.25} = 0.4$$
$$\gamma_{ba} = P(\{s_j = a\} \mid \{s_i = b\}) = \frac{P(\{s_j = a\} \cap \{s_i = b\})}{P(\{s_i = b\})} = \frac{0.10}{0.35} = 0.286$$

It can thus be seen that $\gamma_{ab} \neq \gamma_{ba}$. Hence the matrix $\Gamma$ will be non-symmetric in general. In our example, it is given by:

$$\Gamma = \begin{pmatrix} \gamma_{aa} & \gamma_{ab} & \gamma_{ac} \\ \gamma_{ba} & \gamma_{bb} & \gamma_{bc} \\ \gamma_{ca} & \gamma_{cb} & \gamma_{cc} \end{pmatrix} = \begin{pmatrix} 0.400 & 0.400 & 0.200 \\ 0.286 & 0.286 & 0.429 \\ 0.125 & 0.375 & 0.500 \end{pmatrix}$$

Observe that Assumptions 1 and 2 (see Proposition 5) guarantee that the eigenvalues of the matrix $\Gamma$ are all real. In the present example, it can be shown that the eigenvalues of $\Gamma$ are equal to $\{1, 0.286, -0.1\} \subset \mathbb{R}$.
5.2 Examples

To illustrate our information structure, consider the following example where the number of states and types is the same, i.e., $m = K$.

Example 1 Consider the case when $m = K = 2$. Denote state 1 and signal 1 by $L$, i.e., $\theta_1 = \theta_L$, and state 2 and signal 2 by $H$, i.e., $\theta_2 = \theta_H$. The prior is denoted by $p = P(\theta = \theta_H)$ and the private signal is such that $p_{LL} = p_{HH} = q \geq 1/2$. It follows that the $2 \times 2$ matrix $P$ is given by:

$$P = \begin{pmatrix} q & 1-q \\ 1-q & q \end{pmatrix}$$

We can then easily calculate the $2 \times 2$ matrix $\Gamma$ as follows:

$$\Gamma = \begin{pmatrix} \gamma_{LL} & \gamma_{LH} \\ \gamma_{HL} & \gamma_{HH} \end{pmatrix}$$

where $\gamma_{LL}$, $\gamma_{LH} := 1 - \gamma_{LL}$, $\gamma_{HL} := 1 - \gamma_{HH}$ and $\gamma_{HH}$ are given in Lemma 1.
Example 2 Let us now consider a case where there are $m = K$ states of the world (i.e., there are as many signals as possible $\theta$'s) but where the information structure is as follows. The priors are such that, for all $\ell \in \{1, \dots, m\}$, $P(\theta_\ell) = 1/m$. Furthermore, assume that if agent $i$ observes the signal $s^k$, she assigns the probability $q > 1/m$ of being in state $\theta_k$ and probability $(1-q)/(m-1)$ of being in each other state. In other words,

$$p_{k\ell} = P(\{\theta = \theta_\ell\} \mid \{s_i = s^k\}) = \begin{cases} q & \text{if } \ell = k \\ (1-q)/(m-1) & \text{if } \ell \neq k \end{cases}$$

where $q > 1/m$. In this case, the $m \times m$ matrix $P$ is given by

$$P = \begin{pmatrix} q & \frac{1-q}{m-1} & \cdots & \frac{1-q}{m-1} \\ \frac{1-q}{m-1} & \ddots & & \vdots \\ \vdots & & \ddots & \frac{1-q}{m-1} \\ \frac{1-q}{m-1} & \cdots & \frac{1-q}{m-1} & q \end{pmatrix} = \frac{1-q}{m-1}\, \mathbf{1}\mathbf{1}^T + \left(q - \frac{1-q}{m-1}\right) I \tag{27}$$

where $\mathbf{1}$ is an $m$-dimensional vector of ones and $I$ is the $m$-dimensional identity matrix. It is easily verified that $P$ is symmetric. Let us now determine the $m \times m$ information matrix $\Gamma$. First, let us compute the values of $P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s^k\})$ for given $k$, $k'$, $\ell$. We have that

$$\begin{aligned}
P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s^k\}) &= \frac{P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \cap \{s_i = s^k\})}{P(\{s_i = s^k\})} \\
&= \frac{P(\{s_i = s^k\} \cap \{s_j = s^{k'}\} \mid \{\theta = \theta_\ell\})\, P(\{\theta = \theta_\ell\})}{P(\{s_i = s^k\})}
\end{aligned}$$
Given the assumptions on the informational structure, we have that $P(\theta_\ell) = 1/m$ and

$$P(\{s_i = s^k\}) = \sum_{\ell=1}^{m} P(\{s_i = s^k\} \mid \{\theta = \theta_\ell\})\, P(\{\theta = \theta_\ell\}) = q \times \frac{1}{m} + (m-1) \times \frac{1-q}{m-1} \times \frac{1}{m} = \frac{1}{m}$$

Thus the signal $s_i$ of player $i$ has the discrete uniform distribution on $S = \{s^1, \dots, s^m\}$ (Assumption 3). As a result,

$$P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s^k\}) = P(\{s_i = s^k\} \cap \{\theta = \theta_\ell\} \mid \{s_j = s^{k'}\}) = P(\{s_i = s^k\} \cap \{s_j = s^{k'}\} \mid \{\theta = \theta_\ell\})$$
It is then easy to show that:

$$P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s^k\}) = \begin{cases} \left(\frac{1-q}{m-1}\right)^2 & \text{if } \ell \neq k \text{ and } \ell \neq k' \\ q\, \frac{1-q}{m-1} & \text{if either } (\ell \neq k \text{ and } \ell = k') \text{ or } (\ell = k \text{ and } \ell \neq k') \\ q^2 & \text{if } \ell = k = k' \end{cases}$$

We have:

$$\gamma_{kk'} = P(\{s_j = s^{k'}\} \mid \{s_i = s^k\}) = \sum_{\ell=1}^{m} P(\{s_j = s^{k'}\} \cap \{\theta = \theta_\ell\} \mid \{s_i = s^k\})$$

Hence, if $k = k'$, we obtain:

$$\gamma_{kk} = q^2 + (m-1)\left(\frac{1-q}{m-1}\right)^2 = q^2 + \frac{(1-q)^2}{m-1}$$

while, if $k \neq k'$, we get:

$$\gamma_{kk'} = 2q\, \frac{1-q}{m-1} + (m-2)\left(\frac{1-q}{m-1}\right)^2 = \frac{(1-q)(qm + m - 2)}{(m-1)^2}$$
The information matrix is thus given by:

$$\Gamma = \begin{pmatrix} q^2 + \frac{(1-q)^2}{m-1} & \frac{(1-q)(qm+m-2)}{(m-1)^2} & \cdots & \frac{(1-q)(qm+m-2)}{(m-1)^2} \\ \frac{(1-q)(qm+m-2)}{(m-1)^2} & q^2 + \frac{(1-q)^2}{m-1} & & \vdots \\ \vdots & & \ddots & \frac{(1-q)(qm+m-2)}{(m-1)^2} \\ \frac{(1-q)(qm+m-2)}{(m-1)^2} & \cdots & \frac{(1-q)(qm+m-2)}{(m-1)^2} & q^2 + \frac{(1-q)^2}{m-1} \end{pmatrix}$$

$$= \left[q^2 + \frac{(1-q)^2}{m-1}\right] I + \frac{(1-q)(qm+m-2)}{(m-1)^2}\left(\mathbf{1}\mathbf{1}^T - I\right) \tag{28}$$

Evidently, $\Gamma$ is symmetric.
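As a quick sanity check on (28), the following sketch builds $\Gamma$ for the parameter values used again below ($q = 0.6$, $m = 3$) and confirms that it is symmetric, row-stochastic, and has eigenvalues 1 and (with multiplicity two) 0.16:

```python
import numpy as np

q, m = 0.6, 3                    # the parameter values used in Example 2 of Section 5.3.2
J = np.ones((m, m))              # the matrix of ones, i.e. 1 1^T
I = np.eye(m)

diag = q**2 + (1 - q)**2 / (m - 1)            # diagonal entry of (28): 0.44
off  = (1 - q) * (q*m + m - 2) / (m - 1)**2   # off-diagonal entry of (28): 0.28
Gamma = diag * I + off * (J - I)              # formula (28)

print(Gamma)                                  # symmetric, rows sum to 1
print(np.round(np.linalg.eigvalsh(Gamma), 3))
```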
5.3 The model with unknown $\theta$

Let us now solve the model with unknown $\theta$ when there is a finite number of states of the world ($m$) and signals ($K$).

5.3.1 Equilibrium

Assume that the $K \times K$ information matrix $\Gamma$ is diagonalizable. Then,

$$\Gamma = A\, D_\Gamma\, A^{-1}$$

where

$$D_\Gamma = \begin{pmatrix} \lambda_1(\Gamma) & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_K(\Gamma) \end{pmatrix}$$

and $\lambda_1(\Gamma), \dots, \lambda_K(\Gamma)$ are the eigenvalues of $\Gamma$, with $\lambda_{\max}(\Gamma) := \lambda_1(\Gamma) \geq \lambda_2(\Gamma) \geq \dots \geq \lambda_K(\Gamma)$. In this formulation, $A$ is a $K \times K$ matrix whose $\ell$th column is the eigenvector corresponding to the $\ell$th eigenvalue. Let us adopt the following notation:

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1K} \\ \vdots & \ddots & \vdots \\ a_{K1} & \cdots & a_{KK} \end{pmatrix} \quad \text{and} \quad A^{-1} = \begin{pmatrix} a_{11}^{(-1)} & \cdots & a_{1K}^{(-1)} \\ \vdots & \ddots & \vdots \\ a_{K1}^{(-1)} & \cdots & a_{KK}^{(-1)} \end{pmatrix}$$

where $a_{k\ell}^{(-1)}$ is the $(k, \ell)$ cell of the matrix $A^{-1}$.
The utility function of individual $i$ receiving signal $s^k$ can be written as:

$$E[u_i \mid \{s_i = s^k\}] = E[\theta \mid \{s_i = s^k\}]\, \sigma_i(s^k) - \frac{1}{2}[\sigma_i(s^k)]^2 + \lambda\, \sigma_i(s^k) \sum_{j=1}^{n} g_{ij}\, E[x_j \mid \{s_i = s^k\}]$$

The first order conditions are given by

$$0 = E[\theta \mid \{s_i = s^k\}] - \sigma_i^*(s^k) + \lambda \sum_{j=1}^{n} g_{ij}\, E[x_j^* \mid \{s_i = s^k\}]$$

so that

$$\begin{aligned}
\sigma_i^*(s^k) &= E[\theta \mid \{s_i = s^k\}] + \lambda \sum_{j=1}^{n} \sum_{k'=1}^{K} g_{ij}\, P[\{s_j = s^{k'}\} \mid \{s_i = s^k\}]\, \sigma_j^*(s^{k'}) \\
&= E[\theta \mid \{s_i = s^k\}] + \lambda \sum_{j=1}^{n} \sum_{k'=1}^{K} g_{ij}\, \gamma_{kk'}\, \sigma_j^*(s^{k'})
\end{aligned}$$

Define

$$\forall k \in \{1, \dots, K\}, \quad \hat\theta_k := E[\theta \mid \{s_i = s^k\}] = \sum_{\ell=1}^{m} \theta_\ell\, P(\{\theta = \theta_\ell\} \mid \{s_i = s^k\})$$

We have the following result.
Theorem 1 Consider the case when the marginal return of effort $\theta$ is unknown. Assume that $\Gamma$ is diagonalizable. Let $\lambda_1(\Gamma) \geq \lambda_2(\Gamma) \geq \dots \geq \lambda_K(\Gamma)$ be the eigenvalues of the information matrix $\Gamma$. Then, if $\{\hat\theta_k\}_{k=1}^{K} \subset \mathbb{R}_{++}$ and $0 < \lambda < 1/\mu_{\max}(G)$, there exists a unique Bayesian-Nash equilibrium. In that case, if the signal received is $s_i = s^k$, then the equilibrium efforts are given by:

$$\mathbf{x}^*(\{s_i = s^k\}) = \hat\theta_1 \sum_{\ell=1}^{K} a_{k\ell}\, a_{\ell 1}^{(-1)}\, \mathbf{b}(\lambda_\ell(\Gamma)\lambda, G) + \dots + \hat\theta_K \sum_{\ell=1}^{K} a_{k\ell}\, a_{\ell K}^{(-1)}\, \mathbf{b}(\lambda_\ell(\Gamma)\lambda, G) \tag{29}$$

for $k = 1, \dots, K$.
Theorem 1 generalizes Proposition 2 when there are $m$ states of the world, $\theta_\ell \in \Theta = \{\theta_1, \dots, \theta_m\}$, and $K$ different signals or types, $S = \{s^1, \dots, s^K\}$. Interestingly, the condition for existence and uniqueness of a Bayesian-Nash equilibrium (i.e. $0 < \lambda < 1/\mu_{\max}(G)$) is still the same because $\Gamma$ is still a stochastic matrix whose largest eigenvalue is 1. The proof of this theorem is relatively similar to that of Proposition 2, where we diagonalize the two matrices $G$ and $\Gamma$ to obtain a clean characterization of the equilibrium conditions. The characterization obtained in Theorem 1 is such that each equilibrium effort (or action) is a combination of $K$ different Katz-Bonacich centralities, where the decay factors are the corresponding eigenvalues of the information matrix $\Gamma$ multiplied by the synergy parameter $\lambda$, while the weights are the elements of $A$ and $A^{-1}$. This is because the diagonalization of $G$ leads to the Katz-Bonacich centralities, while the diagonalization of $\Gamma$ leads to a matrix $A$ with eigenvectors as columns. This implies that the number of distinct eigenvalues of $\Gamma$ determines the number of different Katz-Bonacich centrality vectors and the decay factor in each of them, while the elements of $A$ and $A^{-1}$ characterize the weights of the different Katz-Bonacich vectors in equilibrium strategies.
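The characterization can also be checked against a direct solution of the first-order conditions: stacking the actions by signal, the best replies read $\mathbf{x} = \hat{\boldsymbol\theta} \otimes \mathbf{1} + \lambda (\Gamma \otimes G)\mathbf{x}$, a linear system in $Kn$ unknowns. A minimal numerical sketch, using the rounded two-state values from our running example ($\gamma_{LL} = 0.703$, $\gamma_{HH} = 0.776$, $\hat\theta_L = 0.326$, $\hat\theta_H = 0.737$, $\lambda = 0.2$) on a three-agent star with agent 1 at the center:

```python
import numpy as np

# Star network with agent 1 at the center.
G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
Gamma = np.array([[0.703, 0.297],     # information matrix (rows: own signal L, H)
                  [0.224, 0.776]])
theta_hat = np.array([0.326, 0.737])  # E[theta | s_i = L], E[theta | s_i = H]
lam = 0.2

# Best replies x_k = theta_hat_k 1 + lam G sum_k' gamma_kk' x_k' stack, via the
# Kronecker product, into (I - lam Gamma ⊗ G) x = theta_hat ⊗ 1.
n, K = G.shape[0], Gamma.shape[0]
x = np.linalg.solve(np.eye(K * n) - lam * np.kron(Gamma, G),
                    np.kron(theta_hat, np.ones(n)))
print(np.round(x[:n], 3), np.round(x[n:], 3))   # low actions, then high actions
```

The direct solve reproduces (up to rounding of the inputs) the same efforts as the weighted-Katz-Bonacich formula (29).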
Observe that, in Theorem 1, we assume that $\Gamma$ is diagonalizable. The case of a nondiagonalizable $\Gamma$ is nongeneric (Meyer, 2001). However, this does not mean that such matrices cannot occur in practice. We consider the case of a nondiagonalizable $\Gamma$ in Section 6 below.
In the next section, we give some examples showing that this characterization is easy to implement.
5.3.2 Examples

Example 1 Consider the example of the previous section where $\theta$ takes only two values, $\theta_L$ and $\theta_H$, with $\theta_L < \theta_H$ (i.e. $m = K = 2$). In this case, we have:

$$\Gamma = \begin{pmatrix} \gamma_{LL} & 1 - \gamma_{LL} \\ 1 - \gamma_{HH} & \gamma_{HH} \end{pmatrix}$$

where $\lambda_1(\Gamma) = \lambda_{\max}(\Gamma) = 1$ and $\lambda_2(\Gamma) = \gamma_{LL} + \gamma_{HH} - 1$, and

$$A = \begin{pmatrix} 1 & 1 - \gamma_{LL} \\ 1 & -(1 - \gamma_{HH}) \end{pmatrix}, \quad A^{-1} = \frac{1}{2 - \gamma_{LL} - \gamma_{HH}} \begin{pmatrix} 1 - \gamma_{HH} & 1 - \gamma_{LL} \\ 1 & -1 \end{pmatrix}$$
Let us calculate the equilibrium conditions given in (29), where the state $L$ corresponds to 1 and the state $H$ corresponds to 2 ($\theta_1 = \theta_L$, $\theta_2 = \theta_H$ and $m = K = 2$). Using (29), if agent $i$ receives the signal $s_i = L$, we easily obtain:

$$x_i^*(L) := x_{iL}^* = \hat\theta_L \left[ a_{L1}\, a_{11}^{(-1)}\, b_i(\lambda_1(\Gamma)\lambda, G) + a_{L2}\, a_{21}^{(-1)}\, b_i(\lambda_2(\Gamma)\lambda, G) \right] + \hat\theta_H \left[ a_{L1}\, a_{12}^{(-1)}\, b_i(\lambda_1(\Gamma)\lambda, G) + a_{L2}\, a_{22}^{(-1)}\, b_i(\lambda_2(\Gamma)\lambda, G) \right]$$

Since $a_{L1} = a_{H1} = 1$, $a_{L2} = 1 - \gamma_{LL}$, $a_{H2} = -(1 - \gamma_{HH})$, $a_{11}^{(-1)} = \frac{1 - \gamma_{HH}}{2 - \gamma_{LL} - \gamma_{HH}}$, $a_{12}^{(-1)} = \frac{1 - \gamma_{LL}}{2 - \gamma_{LL} - \gamma_{HH}}$, $a_{21}^{(-1)} = \frac{1}{2 - \gamma_{LL} - \gamma_{HH}}$, $a_{22}^{(-1)} = -\frac{1}{2 - \gamma_{LL} - \gamma_{HH}}$, $\lambda_1(\Gamma) = 1$ and $\lambda_2(\Gamma) = \gamma_{LL} + \gamma_{HH} - 1$, it is easily verified that $x_i^*(L)$ is exactly equal to (23). Using (29), with the same methodology, it is easy to show that $x_i^*(H) := x_{iH}^*$ is equal to (24).
If we assume, for example, that $p = 0.6$ and $q = 0.85$, then, using Lemma 1, we have $\gamma_{LL} = 0.703$ and $\gamma_{HH} = 0.776$, and thus $\lambda_2(\Gamma) = 0.48$. Assume also that $\lambda = 0.2$, which means that $\lambda_1(\Gamma)\lambda = 0.2$ and $\lambda_2(\Gamma)\lambda = 0.096$. Then

$$\begin{aligned}
\mathbf{x}_L^* &= \bar{\hat\theta}\, \mathbf{b}(0.2, G) - 0.57\, (\hat\theta_H - \hat\theta_L)\, \mathbf{b}(0.096, G) \\
\mathbf{x}_H^* &= \bar{\hat\theta}\, \mathbf{b}(0.2, G) + 0.43\, (\hat\theta_H - \hat\theta_L)\, \mathbf{b}(0.096, G)
\end{aligned} \tag{30}$$

If we further assume that $\theta_L = 0.2$ and $\theta_H = 0.8$, then $\hat\theta_L = 0.326$, $\hat\theta_H = 0.737$ and $\bar{\hat\theta} = 0.56$. Thus,

$$\begin{aligned}
\mathbf{x}_L^* &= 0.56\, \mathbf{b}(0.2, G) - 0.234\, \mathbf{b}(0.096, G) \\
\mathbf{x}_H^* &= 0.56\, \mathbf{b}(0.2, G) + 0.177\, \mathbf{b}(0.096, G)
\end{aligned} \tag{31}$$
To understand the importance of networks, consider the star network of Section 3.2 (Figure 1). Then, the condition $\lambda < 1/\mu_{\max}(G)$ can be written as $0.2 < 1/\sqrt{2} = 0.707$, which is always satisfied. As a result, the Katz-Bonacich centralities are given by:

$$\begin{pmatrix} b_1(0.2, G) \\ b_2(0.2, G) \\ b_3(0.2, G) \end{pmatrix} = \begin{pmatrix} 1.522 \\ 1.304 \\ 1.304 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} b_1(0.096, G) \\ b_2(0.096, G) \\ b_3(0.096, G) \end{pmatrix} = \begin{pmatrix} 1.214 \\ 1.117 \\ 1.117 \end{pmatrix}$$

In that case, there exists a unique Bayesian-Nash equilibrium, which is given by:

$$\begin{pmatrix} x_{1L}^* = 0.568 \\ x_{2L}^* = 0.469 \\ x_{3L}^* = 0.469 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} x_{1H}^* = 1.067 \\ x_{2H}^* = 0.928 \\ x_{3H}^* = 0.928 \end{pmatrix}$$
Consider now the complete network with the same three agents so that:14

$$G^c = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

The condition $\lambda < 1/\mu_{\max}(G^c)$ can now be written as $0.2 < 1/2 = 0.5$, which is satisfied. Then,

$$\begin{pmatrix} b_1(0.2, G^c) \\ b_2(0.2, G^c) \\ b_3(0.2, G^c) \end{pmatrix} = \begin{pmatrix} 1.667 \\ 1.667 \\ 1.667 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} b_1(0.096, G^c) \\ b_2(0.096, G^c) \\ b_3(0.096, G^c) \end{pmatrix} = \begin{pmatrix} 1.238 \\ 1.238 \\ 1.238 \end{pmatrix}$$

14 The superscript $c$ refers to the complete network.

In that case,

$$\begin{pmatrix} x_{1L}^{c*} = 0.643 \\ x_{2L}^{c*} = 0.643 \\ x_{3L}^{c*} = 0.643 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} x_{1H}^{c*} = 1.152 \\ x_{2H}^{c*} = 1.152 \\ x_{3H}^{c*} = 1.152 \end{pmatrix}$$
Not surprisingly, the efforts are always higher in the complete network compared to the star network due to more strategic complementarities. However, the high-to-low-effort ratio is lower in the complete network, and less central agents have a higher ratio. Indeed,

$$\frac{x_{1H}^*}{x_{1L}^*} = 1.880, \quad \frac{x_{1H}^{c*}}{x_{1L}^{c*}} = 1.791$$

and

$$\frac{x_{2H}^*}{x_{2L}^*} = \frac{x_{3H}^*}{x_{3L}^*} = 1.980, \quad \frac{x_{2H}^{c*}}{x_{2L}^{c*}} = \frac{x_{3H}^{c*}}{x_{3L}^{c*}} = 1.791$$
Finally, we can compare our results with the complete information case where optimal efforts are given by (6). Assume as above that $\lambda = 0.2$ and $\theta = (\theta_L + \theta_H)/2 = 0.5$. In that case, for the star network, $x_1^* = 0.761$ and $x_2^* = x_3^* = 0.652$, while, for the complete network, we have $x_1^{c*} = x_2^{c*} = x_3^{c*} = 0.833$. If we now take the average value of effort in the incomplete information case (i.e. $(x_{iL}^* + x_{iH}^*)/2$), we obtain:

$$\begin{pmatrix} \bar{x}_1 = 0.817 \\ \bar{x}_2 = 0.698 \\ \bar{x}_3 = 0.698 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \bar{x}_1^c = 0.898 \\ \bar{x}_2^c = 0.898 \\ \bar{x}_3^c = 0.898 \end{pmatrix}$$

It is interesting to see that, on average, individuals exert more effort under incomplete information.
Example 2 Let us now consider the model of Section 5.2 with $m = K$ and where the $m \times m$ information matrix $\Gamma$ is given by (28). Assume that $q = 0.6$ and $m = 3$. This means that $\theta$ can take three values $\theta_a$, $\theta_b$, $\theta_c$ and that each agent receives a signal, which is either equal to $a$, $b$ or $c$. In that case,

$$\Gamma = \begin{pmatrix} 0.44 & 0.28 & 0.28 \\ 0.28 & 0.44 & 0.28 \\ 0.28 & 0.28 & 0.44 \end{pmatrix} \tag{32}$$

This matrix $\Gamma$ has two distinct eigenvalues: $\lambda_1(\Gamma) = 1$ and $\lambda_2(\Gamma) = 0.16$. We can thus diagonalize $\Gamma$ as follows:

$$\Gamma = A \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0.16 & 0 \\ 0 & 0 & 0.16 \end{pmatrix} A^{-1}$$

where

$$A = \begin{pmatrix} 0.577 & -0.765 & 0.286 \\ 0.577 & 0.630 & 0.520 \\ 0.577 & 0.135 & -0.805 \end{pmatrix}, \quad A^{-1} = \begin{pmatrix} 0.577 & 0.577 & 0.577 \\ -0.765 & 0.630 & 0.135 \\ 0.286 & 0.520 & -0.805 \end{pmatrix} \tag{33}$$

Assume as before that $\lambda = 0.2$, which means that $\lambda_1(\Gamma)\lambda = 0.2$ and $\lambda_2(\Gamma)\lambda = 0.032$. Therefore, applying Theorem 1, if agent $i$ receives the signal $s_i = a$, then her equilibrium effort is equal to:

$$\begin{aligned}
\mathbf{x}^*(a) = \ & \hat\theta_a \left[ 0.333\, \mathbf{b}(0.2, G) + 0.667\, \mathbf{b}(0.032, G) \right] \\
& + \hat\theta_b \left[ 0.333\, \mathbf{b}(0.2, G) - 0.333\, \mathbf{b}(0.032, G) \right] \\
& + \hat\theta_c \left[ 0.333\, \mathbf{b}(0.2, G) - 0.333\, \mathbf{b}(0.032, G) \right]
\end{aligned}$$

Similar calculations can be done when agent $i$ receives the signals $s_i = b$ and $s_i = c$.
In Appendix A.2, we derive the same results for the case when $\lambda$ is unknown. In particular, Theorem 4 gives the conditions for existence and uniqueness of a Bayesian-Nash equilibrium and its characterization. We also provide some examples showing how this characterization can be implemented.
6 Diagonalizable versus nondiagonalizable information matrix $\Gamma$

In Theorem 1, we assumed that the information matrix $\Gamma$ was diagonalizable, which is generically true. First, let us give a sufficient condition on the primitives of the model (i.e. on the joint distribution of the signals) that guarantees that $\Gamma$ is symmetric and thus diagonalizable. We have the following assumption:

Assumption 3: The signal $s_i$ of each player $i$ has the discrete uniform distribution on $S$.

In Appendix A.1, Section A.1.3, we show in Proposition 6 that, if Assumption 3 holds, then the information matrix $\Gamma$ is symmetric, and therefore diagonalizable. In some sense, Assumption 3 is relatively similar to what is assumed in the literature with linear-quadratic utility functions and a continuum of signals (and states of the world), where the signals are assumed to follow a Normal distribution (see e.g. Calvó-Armengol and de Martí, 2009, and Bergemann and Morris, 2013).15 Note that there are other, potentially less restrictive conditions that would ensure the diagonalizability of $\Gamma$. For example, it is sufficient to assume that $\Gamma$ is strictly sign-regular. Then, it can be shown that all its eigenvalues will be real, distinct, and thus simple, and the corresponding eigenbasis will consist of real vectors (see Ando, 1987, Theorem 6.2). The advantage of Assumption 3 is that it provides a sufficient condition in terms of more primitive assumptions on the joint distribution of the signals.

15 Indeed, the uniform distribution defined on a finite set is the maximum entropy distribution among all discrete distributions supported on this set. Similarly, the Normal distribution $N(\mu, \sigma^2)$ has maximum entropy among all real-valued distributions with specified mean $\mu$ and standard deviation $\sigma$.
Second, when $\Gamma$ is nondiagonalizable, we can still characterize our unique Bayesian-Nash equilibrium using the Jordan decomposition. We have the following result, whose proof is given in Appendix A.4:

Theorem 2 Consider the case when the marginal return of effort $\theta$ is unknown. Let $\lambda_1(\Gamma) \geq \lambda_2(\Gamma) \geq \dots \geq \lambda_K(\Gamma)$ be the eigenvalues of the information matrix $\Gamma$. Assume that the Jordan form of $\Gamma$ is made up of $p$ Jordan blocks $J(\hat\lambda_t)$, where $\hat\lambda_t$ is the eigenvalue associated with the $t$-th Jordan block of $\Gamma$, with $\hat\lambda_1(\Gamma) \geq \hat\lambda_2(\Gamma) \geq \dots \geq \hat\lambda_p(\Gamma)$. Let $m_t$ be the dimension of the $t$-th Jordan block of $\Gamma$, define $M_t := \sum_{r=1}^{t} m_r$, and let $\mathbf{u}_k(\hat\lambda_t) := (I - \lambda\hat\lambda_t G)^{-k}\, \lambda^k G^k\, \mathbf{1}$. Then, if $\{\hat\theta_k\}_{k=1}^{K} \subset \mathbb{R}_{++}$ and $0 < \lambda < 1/\mu_1(G)$, there exists a unique Bayesian-Nash equilibrium. In that case, if the signal received is $s_i = s^k$, then the equilibrium efforts are given by:

$$\mathbf{x}^*(\{s_i = s^k\}) = \hat\theta_1 \sum_{t=1}^{p} \sum_{\ell=M_{t-1}+1}^{M_{t-1}+m_t} \sum_{\ell'=M_{t-1}+1}^{M_{t-1}+m_t} a_{k\ell}\, a_{\ell' 1}^{(-1)}\, \mathbf{b}_{\mathbf{u}_{\ell'-\ell}}(\hat\lambda_t \lambda, G) + \dots + \hat\theta_K \sum_{t=1}^{p} \sum_{\ell=M_{t-1}+1}^{M_{t-1}+m_t} \sum_{\ell'=M_{t-1}+1}^{M_{t-1}+m_t} a_{k\ell}\, a_{\ell' K}^{(-1)}\, \mathbf{b}_{\mathbf{u}_{\ell'-\ell}}(\hat\lambda_t \lambda, G)$$

for $k = 1, \dots, K$, or, more compactly,

$$\mathbf{x}^*(\{s_i = s^k\}) = \sum_{k'=1}^{K} \hat\theta_{k'} \sum_{t=1}^{p} \sum_{\ell=M_{t-1}+1}^{M_{t-1}+m_t} \sum_{\ell'=M_{t-1}+1}^{M_{t-1}+m_t} a_{k\ell}\, a_{\ell' k'}^{(-1)}\, \mathbf{b}_{\mathbf{u}_{\ell'-\ell}}(\hat\lambda_t \lambda, G) \tag{34}$$

where $\mathbf{b}_{\mathbf{u}_{\ell'-\ell}}(\hat\lambda_t \lambda, G)$ denotes the $\mathbf{u}_{\ell'-\ell}(\hat\lambda_t)$-weighted Katz-Bonacich centrality.

We can see that the structure of the equilibrium characterization (34) is similar to that of Theorem 1, given by (29). It contains, however, additional terms, which are weighted Katz-Bonacich centralities $\mathbf{b}_{\mathbf{u}}(\hat\lambda_t \lambda, G)$, and it is more complicated to calculate. The main advantage of this result is that it does not hinge on the diagonalizability of the information matrix $\Gamma$. Observe that the number and the weights of the Katz-Bonacich centralities given in (34) depend on the deficiency of the information matrix $\Gamma$. This implies that, when $\Gamma$ is diagonalizable, so that its eigenvalues are either simple or semi-simple, the equilibrium characterization of efforts given by (34) collapses to (29), which is given by Theorem 1.
7 Key-player policies

We would now like to derive some policy implications of our model. In the context of the model above with complete information, Ballester et al. (2006, 2010) have studied the "key player" policy, which consists of finding the key player, i.e. the individual who, once removed from the network, generates the highest possible reduction in aggregate activity. This is particularly relevant for crime (Liu et al., 2012; Lindquist and Zenou, 2014) but also for financial networks (Denbee et al., 2014), R&D networks (König et al., 2014), and development economics (Banerjee et al., 2013).16 In Section 7.1, we will first expose this policy in the context of complete information. Then, in Section 7.2, we will analyze this policy when information is incomplete.

16 For an overview of the literature on key players in networks, see Zenou (2015b).

7.1 Perfect information: Key players

7.1.1 Model

Consider the utility function defined by (1). At the Nash equilibrium, each individual $i$ will provide effort given by (6), i.e. $x_i^* = \theta\, b_i(\lambda, G)$ (see Proposition 1). Denote by

$$X^*(G) = \sum_{i=1}^{n} x_i^* = \theta \sum_{i=1}^{n} b_i(\lambda, G)$$

the total sum of efforts at the Nash equilibrium. The planner's objective is to generate the highest possible reduction in aggregate effort level by picking the appropriate individual. Formally, the planner's problem is the following:

$$\max_{i \in \{1, \dots, n\}} \left\{X^*(G) - X^*(G^{-i})\right\}$$

where $G^{-i}$ is the $(n-1) \times (n-1)$ adjacency matrix corresponding to the network $g^{-i}$ when individual $i$ has been removed. From Ballester et al. (2006, 2010), we now define a new network centrality measure, called the intercentrality of agent $i$ and denoted by $c_i(\lambda, G)$, that will solve this program:

Proposition 3 Assume $\theta > 0$ and $0 < \lambda < 1/\mu_1(G)$. Then, under complete information, the key player in network $g$ is the agent17 that has the highest intercentrality measure $c_i(\lambda, G)$, which is defined by:

$$c_i(\lambda, G) := \frac{[b_i(\lambda, G)]^2}{m_{ii}(\lambda, G)} \tag{35}$$

where $m_{ii}(\lambda, G)$ is the cell corresponding to the $i$th row and the $i$th column of the matrix $(I - \lambda G)^{-1}$ and thus keeps track of the paths that start and finish at $i$ (cycles).

This proposition says that the key player $i^*$ who solves $\max_{i \in \{1, \dots, n\}} \{X^*(G) - X^*(G^{-i})\}$ is the agent who has the highest intercentrality $c_i(\lambda, G)$ in $g$, that is, for all $i = 1, \dots, n$, $c_{i^*}(\lambda, G) \geq c_i(\lambda, G)$. As a result,

$$i^* \in \arg\max_{i \in \{1, \dots, n\}} \left\{X^*(G) - X^*(G^{-i})\right\} \tag{36}$$

7.1.2 Example 3
Consider the bridge network $g^b$ in Figure 2 with eleven agents.

[Figure 2 about here]

Figure 2: Bridge network
We distinguish three different types of equivalent actors in this network, which are the following:

  Type   Players
  1      1
  2      2, 6, 7, and 11
  3      3, 4, 5, 8, 9, and 10

Table 1 computes, for agents of types 1, 2 and 3, the value of the Katz-Bonacich centrality measures $b_i(\lambda, G^b)$ (which is equal to the effort $x_i^*$ when $\theta = 1$) and the intercentrality measures $c_i(\lambda, G^b)$ for different values of $\lambda$. In each column, a variable with a star identifies the highest value.18

17 The key player might not be unique.

  Type   b_i(0.096, G^b)   c_i(0.096, G^b)   b_i(0.2, G^b)   c_i(0.2, G^b)
  1      1.679             2.726             7.143           35.714*
  2      1.793*            3.016*            7.381*          28.601
  3      1.646             2.570             6.191           21.949

Table 1: Katz-Bonacich versus intercentrality measures in a bridge network

First note that type-2 agents always display the highest Katz-Bonacich centrality measure. These agents have the highest number of direct connections. Besides, they are directly connected to the bridge agent 1, which gives them access to a very wide and diversified span of indirect connections. For low values of $\lambda$, the direct effect on effort reduction prevails, and type-2 agents are the key players, i.e., those with the highest intercentrality measure $c_i(\lambda, G^b)$. When $\lambda$ is higher, though, the most active agents are no longer the key players. Now indirect effects matter a lot, and eliminating agent 1 has the highest joint direct and indirect effect on aggregate effort reduction.
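The intercentrality measure (35) is straightforward to compute for any network. A minimal sketch, checked on the three-agent star used earlier (removing the center disconnects the network, so the center should come out as the key player):

```python
import numpy as np

def intercentrality(lam, G):
    """c_i(lam, G) = b_i(lam, G)^2 / m_ii(lam, G), as in (35)."""
    n = G.shape[0]
    M = np.linalg.inv(np.eye(n) - lam * G)   # M = (I - lam G)^(-1)
    b = M @ np.ones(n)                       # Katz-Bonacich centralities
    return b**2 / np.diag(M)                 # m_ii = diagonal of M (cycles at i)

# Three-agent star with agent 1 at the center.
G_star = np.array([[0., 1., 1.],
                   [1., 0., 0.],
                   [1., 0., 0.]])
c = intercentrality(0.2, G_star)
print(np.round(c, 3))
print(int(np.argmax(c)) + 1)   # the key player: agent 1 (the center)
```

The same function applied to the adjacency matrix of the bridge network $G^b$ would reproduce the intercentrality column of Table 1.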
7.2 Incomplete information on $\theta$: Key players

7.2.1 Model with two states of the world

As in Section 4, we assume that there are two states of the world,19 so that the parameter $\theta$ can take two values: $\theta_L < \theta_H$. We also assume that the planner has a prior (which is unknown to the agents) for the event $\{\theta = \theta_H\}$, which is given by:

$$P(\{\theta = \theta_H\}) = p^o \in (0, 1)$$

The prior of the planner may be different from the one shared by the agents, which is $P(\{\theta = \theta_H\}) = p \in (0, 1)$, because the planner may have superior information. The planner needs to solve the key player problem, which is the difference in aggregate activity according to her prior, i.e. $\max_{i \in \{1, \dots, n\}} \Delta_i$, where

$$\Delta_i = \underbrace{\left[p^o X_H^*(G) + (1 - p^o) X_L^*(G)\right]}_{\text{Total activity before the removal of } i} - \underbrace{\left[p^o X_H^*(G^{-i}) + (1 - p^o) X_L^*(G^{-i})\right]}_{\text{Total activity after the removal of } i}$$

where $x_{jH}^*$ are the Bayesian-Nash equilibrium high actions defined by (24) while $x_{jL}^*$ are the Bayesian-Nash equilibrium low actions defined by (23), and $X_H^*(G) = \sum_{j=1}^{n} x_{jH}^*$, $X_H^*(G^{-i}) = \sum_{j=1, j \neq i}^{n} x_{jH}^*$, $X_L^*(G) = \sum_{j=1}^{n} x_{jL}^*$ and $X_L^*(G^{-i}) = \sum_{j=1, j \neq i}^{n} x_{jL}^*$. Indeed, if the planner believes that the state of the world is $\theta_H$, which occurs with probability $p^o$, then she believes that all agents will play the high actions while, if it is $\theta_L$, then she thinks that the low actions will be played.

18 We can compute the highest possible value for $\lambda$ compatible with our definition of centrality measures. It is equal to $1/\mu_{\max}(G^b) = 1/4.404 = 0.227$.
19 The general case with $m$ states of the world is considered below.
Using the values defined in (24) and (23), we obtain:

$$\Delta_i = \bar{\hat\theta} \left[ \sum_{j=1}^{n} b_j(\lambda, G) - \sum_{j=1}^{n-1} b_j(\lambda, G^{-i}) \right] + \chi \left[ \sum_{j=1}^{n} b_j((\gamma_{LL} + \gamma_{HH} - 1)\lambda, G) - \sum_{j=1}^{n-1} b_j((\gamma_{LL} + \gamma_{HH} - 1)\lambda, G^{-i}) \right]$$

where

$$\chi := (\hat\theta_H - \hat\theta_L) \left[ p^o\, \frac{(1 - \gamma_{HH})}{(2 - \gamma_{LL} - \gamma_{HH})} - (1 - p^o)\, \frac{(1 - \gamma_{LL})}{(2 - \gamma_{LL} - \gamma_{HH})} \right] \tag{37}$$

and $\bar{\hat\theta}$ is defined in (25), $\gamma_{LL}$ and $\gamma_{HH}$ are given by (18) and (19), and $\hat\theta_L$ and $\hat\theta_H$ by (14) and (15). Using the definition of intercentrality given in (36), this is equivalent to

$$\Delta_i = \bar{\hat\theta}\, c_i(\lambda, G) + \chi\, c_i((\gamma_{LL} + \gamma_{HH} - 1)\lambda, G)$$

We have the following proposition:

Proposition 4 Assume $\{\hat\theta_k\}_{k=1}^{2} \subset \mathbb{R}_{++}$ and $0 < \lambda < 1/\mu_1(G)$. Then, under incomplete information on $\theta$ and with two states of the world, the key player is given by:

$$\arg\max_{i \in \{1, \dots, n\}} \left\{ \Delta_i(\lambda, G, \Gamma) \equiv \bar{\hat\theta}\, c_i(\lambda, G) + \chi\, c_i((\gamma_{LL} + \gamma_{HH} - 1)\lambda, G) \right\}$$

If $\chi = 0$, which happens when $\hat\theta_H - \hat\theta_L \to 0$, meaning that both levels of $\theta$ (i.e. states of the world) are very similar, the optimal targeting is equivalent to the complete information case. If $\chi \neq 0$, then the optimal targeting may change. This is because the ranking derived from the intercentrality measure is not stable over $\lambda$.
7.2.2 A general model with a finite number of states of the world

It is relatively easy to extend the model when there is a finite number of states of the world and the uncertainty is on $\theta$. In that case, the planner has expectations $p_\ell^o$ when the state of the world is $\theta_\ell$, with $\sum_{\ell=1}^{m} p_\ell^o = 1$. For simplicity and to avoid too cumbersome notation, assume that when the planner believes that the state is $\theta_\ell$ (with probability $p_\ell^o$), then each agent $i$ receives the signal $s_i(\{\theta = \theta_\ell\})$ so that $k = \ell$. Using (29), the expected aggregate activity when the network is $G$ is given by:

$$\begin{aligned}
\sum_{\ell=1}^{m} p_\ell^o\, X_\ell^* &= \sum_{\ell=1}^{m} p_\ell^o \sum_{i=1}^{n} x_i^*(s(\ell), G) \\
&= \sum_{\ell=1}^{m} p_\ell^o \sum_{i=1}^{n} \left( \hat\theta_1 \sum_{t=1}^{K} a_{\ell t}\, a_{t1}^{(-1)}\, b_i(\lambda_t(\Gamma)\lambda, G) + \dots + \hat\theta_K \sum_{t=1}^{K} a_{\ell t}\, a_{tK}^{(-1)}\, b_i(\lambda_t(\Gamma)\lambda, G) \right) \\
&= \sum_{i=1}^{n} \sum_{\ell=1}^{m} \sum_{t=1}^{K} \sum_{k'=1}^{K} p_\ell^o\, \omega_{\ell t k'}\, b_i(\lambda_t(\Gamma)\lambda, G)
\end{aligned}$$

where $\omega_{\ell t k'} := \hat\theta_{k'}\, a_{\ell t}\, a_{t k'}^{(-1)}$. The expected aggregate activity when the network is $G^{-i}$ can be written in exactly the same way by replacing $G$ with $G^{-i}$. The key player is thus defined as:

$$\begin{aligned}
& \arg\max_{i \in I} \left\{ \sum_{\ell=1}^{m} p_\ell^o \sum_{j=1}^{n} x_j^*(s(\ell), G) - \sum_{\ell=1}^{m} p_\ell^o \sum_{j=1, j \neq i}^{n} x_j^*(s(\ell), G^{-i}) \right\} \\
&= \arg\max_{i \in I} \left\{ \sum_{j=1}^{n} \sum_{\ell=1}^{m} \sum_{t=1}^{K} \sum_{k'=1}^{K} p_\ell^o\, \omega_{\ell t k'}\, b_j(\lambda_t(\Gamma)\lambda, G) - \sum_{j=1, j \neq i}^{n} \sum_{\ell=1}^{m} \sum_{t=1}^{K} \sum_{k'=1}^{K} p_\ell^o\, \omega_{\ell t k'}\, b_j\left(\lambda_t(\Gamma)\lambda, G^{-i}\right) \right\}
\end{aligned}$$

Given the definition of intercentrality in (36), we have the following result:

Theorem 3 Assume that $\{\hat\theta_k\}_{k=1}^{K} \subset \mathbb{R}_{++}$ and $0 < \lambda < 1/\mu_1(G)$. Then, under incomplete information with $m$ states of the world, the key player $i^* \in I$ is the agent that solves

$$\arg\max_{i \in I} \left\{ \Delta_i(\lambda, G, \Gamma) \equiv \sum_{\ell=1}^{m} \sum_{t=1}^{K} \sum_{k'=1}^{K} p_\ell^o\, \omega_{\ell t k'}\, c_i(\lambda_t(\Gamma)\lambda, G) \right\}$$

It is straightforward to determine the key player when the uncertainty is on $\lambda$, and the results are very similar to the case when $\theta$ is unknown. We will now provide an example showing how the key player is determined when there is incomplete information on $\theta$.
7.2.3 Examples

Example 1 Let us go back to Example 1 (Section 5.3.2). Let us first calculate the Bayesian-Nash equilibrium when $\theta$ can take two values $\theta_L < \theta_H$, using Proposition 2. Assume as above that $p = 0.6$ and $q = 0.85$, so that $\gamma_{LL} = 0.703$ and $\gamma_{HH} = 0.776$. Assume also that $\lambda = 0.2$. We have seen (see (30)) that

$$\begin{aligned}
\mathbf{x}_L^* &= \bar{\hat\theta}\, \mathbf{b}(0.2, G) - 0.57\, (\hat\theta_H - \hat\theta_L)\, \mathbf{b}(0.096, G) \\
\mathbf{x}_H^* &= \bar{\hat\theta}\, \mathbf{b}(0.2, G) + 0.43\, (\hat\theta_H - \hat\theta_L)\, \mathbf{b}(0.096, G)
\end{aligned}$$

Let us now calculate the key player for the bridge network $g^b$ with eleven agents displayed in Figure 2 (Example 3 in Section 7.1.2). The key player is the agent $i$ that maximizes

$$\Delta_i(\lambda, G^b, \Gamma) \equiv \bar{\hat\theta}\, c_i(0.2, G^b) + \chi\, c_i(0.096, G^b)$$

Using Table 1, we obtain:

  Type   b_i(0.096, G^b)   c_i(0.096, G^b)   b_i(0.2, G^b)   c_i(0.2, G^b)   θ̄̂ c_i(0.2) + χ c_i(0.096)
  1      1.679             2.726             7.143           35.714*         35.714 θ̄̂ + 2.726 χ
  2      1.793*            3.016*            7.381*          28.601          28.601 θ̄̂ + 3.016 χ
  3      1.646             2.570             6.191           21.949          21.949 θ̄̂ + 2.570 χ

Table 2: Intercentrality measures in a bridge network with imperfect information

Hence, the key player depends on the ratio $\chi / \bar{\hat\theta}$, which is given by:

$$\frac{\chi}{\bar{\hat\theta}} = \frac{(\hat\theta_H - \hat\theta_L)\left[ p^o (1 - \gamma_{HH}) - (1 - p^o)(1 - \gamma_{LL}) \right]}{(1 - \gamma_{HH})\, \hat\theta_L + (1 - \gamma_{LL})\, \hat\theta_H}$$

This ratio captures all the (incomplete) information that the agents have: the priors $p$ and $p^o$ of the agents and the planner, and the posteriors of the agents displayed in the elements of the matrices $P$ and $\Gamma$. If the ratio $\chi / \bar{\hat\theta}$ is small, type-1 agents are more likely to be the key players, as in the perfect information case. However, if $\chi / \bar{\hat\theta}$ is high, then type-2 agents will be the key players, which gives the opposite prediction compared to the perfect information case. It is readily verified that $\chi / \bar{\hat\theta}$ increases with $p^o$, $\gamma_{LL}$, and $\hat\theta_H$ and decreases with $\gamma_{HH}$ and $\hat\theta_L$. To understand this result, two effects need to be considered: (i) there is a common effect when $|\chi|$ is large (e.g. high variance of $\hat\theta$); (ii) there is an idiosyncratic effect when $\gamma_{LL} + \gamma_{HH} - 1 \ll 1$, and then the change in ranking is more likely to occur.

To illustrate this result, assume, as before, that $\theta_L = 0.2$ and $\theta_H = 0.8$, so that $\hat\theta_L = 0.326$, $\hat\theta_H = 0.737$ and $\bar{\hat\theta} = 0.56$. Also assume that $p = 0.6$ and $q = 0.85$, so that $\gamma_{LL} = 0.703$ and $\gamma_{HH} = 0.776$. Finally, assume $p^o = 0.8 > p$. Then, using (37), we have $\chi = 0.0945$, which means that $\chi / \bar{\hat\theta} = 0.169$. It is easily verified that, when $\theta$ is (partially) unknown, the key player is individual 1, since she is the one who has the highest $\bar{\hat\theta}\, c_i(0.2, G^b) + \chi\, c_i(0.096, G^b)$, while, when there is perfect information on $\theta$, the key player is individual 2 if $\lambda$ is low enough (for example, if $\lambda = 0.096$; see Table 2) and individual 1 if $\lambda$ is high enough (for example, if $\lambda = 0.2$; see Table 2). If we now change the parameters of the model so that $\chi / \bar{\hat\theta}$ becomes larger (by assuming higher values of $p^o$, $\gamma_{LL}$, and $\hat\theta_H$ and lower values of $\gamma_{HH}$ and $\hat\theta_L$), then the key player in the imperfect information case becomes individual 2.
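The numbers $\chi = 0.0945$ and $\chi / \bar{\hat\theta} = 0.169$ can be recovered from (37) directly. A minimal sketch, taking the rounded posteriors and conditional expectations from the text as given (and writing $\bar{\hat\theta}$ in the form implied by (25), an assumption of this sketch):

```python
# Rounded values from the example: agents' posteriors and conditional
# expectations, and the planner's prior p_o = 0.8.
g_LL, g_HH = 0.703, 0.776
th_L, th_H = 0.326, 0.737
p_o = 0.8

denom = 2 - g_LL - g_HH
# theta-bar-hat as in (25): the stationary-weighted average of the two posteriors.
th_bar = ((1 - g_HH) * th_L + (1 - g_LL) * th_H) / denom
# chi as in (37).
chi = (th_H - th_L) * (p_o * (1 - g_HH) - (1 - p_o) * (1 - g_LL)) / denom

print(round(th_bar, 2), round(chi, 4), round(chi / th_bar, 3))
```

This reproduces $\bar{\hat\theta} = 0.56$, $\chi = 0.0945$ and $\chi / \bar{\hat\theta} = 0.169$, the values used in the key-player comparison above.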
Example 2 Let us now consider the model of Section 5.2 with  =  and where the
 ×  information matrix Γ is given by (28). Assume that  = 06 and  = 3. This means
that  can take three values,  ,  ,  , and that each agent  receives of the different three
signals: ,  or . State 1 corresponds to , state 2 to  and state 3 to . In that case,
Γ is given by (32) and A and A−1 by (33). Since the stochastic matrix Γ has two distinct
eigenvalues: 1 (Γ) = 1 and 2 (Γ) = 016. Assume as before that  = 02, which means that
1 (Γ)  = 02 and 2 (Γ)  = 0032. Then, we easily obtain:
Type
1
2
3
 (0032 G )
1151
1184∗
1148
 (0032 G )
1319
1394∗
1312
 (02 G )
7143
7381∗
6191
 (02 G )
35714∗
28601
21949
Table 3: Katz-Bonacich versus intercentrality measures in a bridge network
We obtain exactly the same result as in Example 1, i.e., the key player changes depending
on the value of  so that, when  is small, the key player is individual 2 while, it is individual
1 when  is higher. The intercentrality measure in the imperfect information case is now
33
given by:
3
11 X
3 X
3
X
X
¢
¡
∆  G  Γ =

   ( (Γ)  G)

=
=1
3
X
=1


=1 =1  =1
3
11 X
3 X
X
=1 =1  =1
(−1)

b     ( (Γ)  G)
¢
¡
The calculation of ∆_i(λ, G, σ, Γ) is much more complicated, but it is still a weighted average of intercentrality measures, and the weights are the different parameters related to the information structure: the a_{lq} and a^{(-1)}_{qm} are elements of A and A⁻¹, the α̂_m = Σ_{r=1}^{3} P(α_r | {s_i = m}) α_r are conditional expectations of the α_r's, and the σ_l are the priors of the planner. As for Example 1, the same tradeoff arises between the α_r's and the α̂_m's.
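The building blocks of these weighted measures can be sketched numerically. The snippet below (a minimal sketch: the 5-node adjacency matrix is an illustrative stand-in, not the paper's bridge network) computes Katz-Bonacich centralities b(λ, G) = (I − λG)⁻¹1 and Ballester-Calvó-Armengol-Zenou intercentralities d_i = b_i²/m_ii at the two decay factors λ_q(Γ)λ of Example 2:

```python
import numpy as np

def katz_bonacich(decay, G):
    """b(decay, G) = (I - decay*G)^(-1) 1 (requires decay * lam_max(G) < 1)."""
    n = G.shape[0]
    M = np.linalg.inv(np.eye(n) - decay * G)
    return M @ np.ones(n), M

def intercentrality(decay, G):
    """Ballester-Calvo-Armengol-Zenou intercentrality d_i = b_i^2 / m_ii."""
    b, M = katz_bonacich(decay, G)
    return b ** 2 / np.diag(M)

# Illustrative 5-node network (an assumption, not the paper's bridge network):
G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Decay factors lam_q(Gamma)*lam as in Example 2: 1*0.2 = 0.2 and 0.16*0.2 = 0.032.
for decay in (0.032, 0.2):
    d = intercentrality(decay, G)
    # The key player maximizes intercentrality and may change with the decay.
    print(decay, "key player:", np.argmax(d) + 1)
```

The key-player ranking can indeed flip between the small and the large decay factor, which is the mechanism at work in Table 3.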
8 Conclusion
We analyze a family of tractable network games with incomplete information on relevant payoff parameters. We show under which conditions there exists a unique Bayesian-Nash equilibrium. We are also able to explicitly characterize this Bayesian-Nash equilibrium by showing how it depends in a very precise way on both the network geometry (Katz-Bonacich centrality) and the informational structure. Finally, we prove that incomplete information can distort the network policy implications with respect to the complete information benchmark. Indeed, the targeting of individuals in a network (key-player policy) may be very different when information on the payoff structure is only partially known.
We believe that, in many situations, information is incomplete and thus our model can shed light on these issues. For example, in criminal networks (Ballester et al., 2010; Calvó-Armengol and Zenou, 2004; Liu et al., 2013), delinquents do not always know the proceeds from crime (unknown α in our model) or the synergy they obtain from other delinquents (unknown λ). In R&D networks (Goyal and Moraga-Gonzalez, 2001; König et al., 2014), the marginal reduction in production costs due to R&D collaboration is only partially known. In education (Calvó-Armengol et al., 2009), all students know each other's connections in the classroom, i.e. the network structure, but they may not be completely aware of the benefits of studying. In financial networks (Acemoglu et al., 2015; Cohen-Cole et al., 2011; Denbee et al., 2014; Elliott et al., 2014), this is even more true: banks do not know exactly the risk taken by giving loans to other banks. From an empirical perspective, it would be interesting to bring the model to the data. If we know the network (as, for example, in the AddHealth data; see e.g. Calvó-Armengol et al., 2009), we can estimate the quality of information of a group and how it impacts the outcomes of agents. We leave this for future research.
References
[1] Acemoglu, D., Dahleh, M.A., Lobel, I. and A. Ozdaglar (2011), “Bayesian learning in
social networks,” Review of Economic Studies 78, 1201-1236.
[2] Acemoglu, D., Ozdaglar, A. and A. Tahbaz-Salehi (2015), “Systemic risk and stability
in financial networks,” American Economic Review 105, 564-608.
[3] Ando, T. (1987), “Totally positive matrices,” Linear Algebra and its Applications 90,
165-219.
[4] Bala, V. and S. Goyal (1998), “Learning from neighbors,” Review of Economic Studies
65, 595-621.
[5] Ballester, C., Calvó-Armengol, A. and Y. Zenou (2006), “Who’s who in networks. Wanted: The key player,” Econometrica 74, 1403-1417.
[6] Ballester, C., Calvó-Armengol, A. and Y. Zenou (2010), “Delinquent networks,” Journal
of the European Economic Association 8, 34-61.
[7] Banerjee, A., Chandrasekhar, A.G., Duflo, E. and M.O. Jackson (2013), “The diffusion
of microfinance,” Science 341, 6144.
[8] Blume, L.E., Brock, W.A., Durlauf, S.N. and R. Jayaraman (2015), “Linear social
interactions models,” Journal of Political Economy, forthcoming.
[9] Bonacich P. (1987), “Power and centrality: A family of measures,” American Journal
of Sociology 92, 1170-1182.
[10] Becker, G. (1968), “Crime and punishment: An economic approach,” Journal of Political
Economy 76, 169-217.
[11] Bergemann, D. and S. Morris (2013), “Robust predictions in games with incomplete
information,” Econometrica 81, 1251-1308.
[12] Calvó-Armengol, A. and J. de Martí (2009), “Information gathering in organizations:
Equilibrium, welfare and optimal network structure,” Journal of the European Economic
Association 7, 116-161.
[13] Calvó-Armengol, A., E. Patacchini, and Y. Zenou (2009), “Peer effects and social networks in education,” Review of Economic Studies 76, 1239-1267.
[14] Calvó-Armengol, A. and Y. Zenou (2004), “Social networks and crime decisions. The role
of social structure in facilitating delinquent behavior,” International Economic Review
45, 939-958.
[15] Cohen-Cole, E., Patacchini, E., and Y. Zenou (2011), “Systemic risk and network formation in the interbank market.” CEPR Discussion Paper No. 8332.
[16] Denbee, E., Julliard, C., Li, Y. and K. Yuan (2014), “Network risk and key players: A
structural analysis of interbank liquidity,” Unpublished manuscript, London School of
Economics and Political Science.
[17] Elliott, M.L., Golub, B. and M.O. Jackson (2014), “Financial networks and contagion,”
American Economic Review 104, 3115-3153.
[18] Galeotti A., Goyal S., Jackson M.O., Vega-Redondo F. and L. Yariv (2010), “Network
games,” Review of Economic Studies 77, 218-244.
[19] Golub, B. and M.O. Jackson (2010), “Naïve learning in social networks: Convergence,
influence, and the wisdom of crowds,” American Economic Journal: Microeconomics 2,
112-149.
[20] Golub, B. and M.O. Jackson (2012), “How homophily affects the speed of learning and
best-response dynamics,” Quarterly Journal of Economics 127, 1287-1338.
[21] Goyal, S. (2007), Connections: An Introduction to the Economics of Networks, Princeton: Princeton University Press.
[22] Goyal, S. (2011), “Learning in networks,” In: J. Benhabib, A. Bisin and M.O. Jackson
(Eds.), Handbook of Social Economics Volume 1A, Amsterdam: Elsevier Science, pp.
679-727.
[23] Goyal, S. and J.L. Moraga-Gonzalez (2001), “R&D networks,” RAND Journal of Economics 32, 686-707.
[24] Ioannides, Y.M. (2012), From Neighborhoods to Nations: The Economics of Social Interactions, Princeton: Princeton University Press.
[25] Jackson M.O. (2008), Social and Economic Networks, Princeton, NJ: Princeton University Press.
[26] Jackson, M.O. (2011), “An overview of social networks and economic applications,” In:
J. Benhabib, A. Bisin and M.O. Jackson (Eds.), Handbook of Social Economics Volume
1A, Amsterdam: Elsevier Science, pp. 511-579.
[27] Jackson M.O. and L. Yariv (2007), “The diffusion of behavior and equilibrium structure
properties on social networks,” American Economic Review Papers and Proceedings 97,
92-98.
[28] Jackson, M.O. and L. Yariv (2011), “Diffusion, Strategic Interaction, and Social Structure,” In: J. Benhabib, A. Bisin and M.O. Jackson (Eds.), Handbook of Social Economics
Volume 1A, Amsterdam: Elsevier Science, 645 - 678.
[29] Jackson, M.O. and Y. Zenou (2015), “Games on networks”, In: P. Young and S. Zamir
(Eds.), Handbook of Game Theory, Vol. 4, Amsterdam: Elsevier Publisher, pp. 91-157.
[30] Katz, L. (1953), “A new status index derived from sociometric analysis,” Psychometrika 18, 39-43.
[31] König, M.D., Liu, X. and Y. Zenou (2014), “R&D networks: Theory, empirics and
policy implications,” CEPR Discussion Paper No. 9872.
[32] Lindquist, M.J. and Y. Zenou (2014), “Key players in co-offending networks,” CEPR Discussion Paper No. 9889.
[33] Meyer, C.D. (2001), Matrix Analysis and Applied Linear Algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics.
[34] Liu, X., Patacchini, E., Zenou, Y. and L-F. Lee (2012), “Criminal networks: Who is
the key player?” CEPR Discussion Paper No. 8772.
[35] Sadun, L. (2007), Applied Linear Algebra: The Decoupling Principle, Second Edition, Providence, RI: American Mathematical Society.
[36] Zenou, Y. (2015a), “Networks in economics,” In: J.D. Wright (Ed.), International Encyclopedia of Social and Behavioral Sciences, 2nd Edition, Amsterdam: Elsevier Publisher,
forthcoming.
[37] Zenou, Y. (2015b), “Key players,” In: Y. Bramoullé, B.W. Rogers and A. Galeotti
(Eds.), Oxford Handbook on the Economics of Networks, Oxford: Oxford University
Press, forthcoming.
[38] Zhou, J. and Y.-J. Chen (2015), “Key leaders in social networks,” Journal of Economic
Theory, forthcoming.
A Appendix

A.1 Main assumptions of the model and their implications

A.1.1 Some useful results
Lemma 2 If the distribution of (s₁, …, s_n)ᵀ is permutation invariant, then s₁, …, s_n are identically distributed.

Proof. Assume that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. Let (i, j) ∈ I² with i ≠ j, and let π be a permutation of I with π(i) = j. Let B_i ⊂ R be a Borel set, for example, B_i = {x} for some x ∈ S, and for all k ∈ I \ {i}, let B_k = R. We find

P({s_i ∈ B_i}) = P( ∩_{k=1}^{n} {s_k ∈ B_k} ) = P( ∩_{k=1}^{n} {s_{π(k)} ∈ B_k} ) = P({s_j ∈ B_i}).

The first equality follows from the fact that

{s_i ∈ B_i} = {s_i ∈ B_i} ∩ Ω = {s_i ∈ B_i} ∩ ∩_{k=1, k≠i}^{n} {s_k ∈ B_k} = ∩_{k=1}^{n} {s_k ∈ B_k}.

The second equality follows from the assumption that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. The third equality follows from the fact that

∩_{k=1}^{n} {s_{π(k)} ∈ B_k} = {s_{π(i)} ∈ B_i} ∩ ∩_{k=1, k≠i}^{n} {s_{π(k)} ∈ B_k} = {s_{π(i)} ∈ B_i} ∩ Ω = {s_{π(i)} ∈ B_i} = {s_j ∈ B_i}.

We conclude that s₁, …, s_n are identically distributed.
Lemma 3 If the distribution of (s₁, …, s_n)ᵀ is permutation invariant, then for all (i, j) ∈ I² with i ≠ j and for all (k, l) ∈ I² with k ≠ l, (s_i, s_j)ᵀ and (s_k, s_l)ᵀ are identically distributed.

Proof. Assume that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. Let (i, j) ∈ I² with i ≠ j, and let (k, l) ∈ I² with k ≠ l. Let π be a permutation of I with π(i) = k and π(j) = l. Let B_i ⊂ R and B_j ⊂ R be two Borel sets, for example, B_i = {x} for some x ∈ S and B_j = {y} for some y ∈ S, and for all m ∈ I \ {i, j}, let B_m = R. We find

P( (s_i, s_j)ᵀ ∈ B_i × B_j ) = P({s_i ∈ B_i} ∩ {s_j ∈ B_j})
  = P( ∩_{m=1}^{n} {s_m ∈ B_m} )
  = P( ∩_{m=1}^{n} {s_{π(m)} ∈ B_m} )
  = P({s_{π(i)} ∈ B_i} ∩ {s_{π(j)} ∈ B_j})
  = P({s_k ∈ B_i} ∩ {s_l ∈ B_j})
  = P( (s_k, s_l)ᵀ ∈ B_i × B_j ).

The third equality follows from the assumption that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. The other equalities are obvious. We conclude that (s_i, s_j)ᵀ and (s_k, s_l)ᵀ are identically distributed.
A.1.2 Main results using Assumptions 1 and 2
We introduce the following assumption concerning the distribution of the players’ signals.

Assumption 3a: For all (i, j) ∈ I² with i ≠ j, for all (k, l) ∈ I² with k ≠ l, and for all (x, y) ∈ S²,

P({s_j = y}) P({s_k = x} ∩ {s_l = y}) = P({s_l = y}) P({s_i = x} ∩ {s_j = y}).

Suppose Assumption 1 (defined in the text) is satisfied. Then, Assumption 3a states that for all pairs of signal values (x, y) ∈ S², the conditional probability P({s_i = x} | {s_j = y}) is (functionally) independent of (i, j) ∈ I² or, equivalently, P({s_i = x} | {s_j = y}) is only a function of (x, y) but not of (i, j), where i ≠ j.
The following lemma gives a sufficient condition for Assumption 3a to be satisfied.

Lemma 4 If the distribution of (s₁, …, s_n)ᵀ is permutation invariant (Assumption 2), then Assumption 3a is satisfied.

Proof. Assume that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. Let (i, j) ∈ I² with i ≠ j, (k, l) ∈ I² with k ≠ l, and (x, y) ∈ S². We find

P({s_j = y}) P({s_k = x} ∩ {s_l = y}) = P({s_l = y}) P({s_i = x} ∩ {s_j = y})

because P({s_j = y}) = P({s_l = y}) according to Lemma 2 and P({s_k = x} ∩ {s_l = y}) = P({s_i = x} ∩ {s_j = y}) according to Lemma 3.
Remark 1 Note that S = {1, …, s}. If S ≠ {1, …, s}, then we could not directly use S as an index set to define the components of Γ. Clearly, Γ can still be defined in a reasonable way if S ≠ {1, …, s}. To see this, suppose S ≠ {1, …, s}. There exists a unique order isomorphism h : S → {1, …, s}.²⁰ Using h, we can restate Definition 3 as follows: Suppose Assumptions 1 and 3a are satisfied. The players’ information matrix, denoted by Γ = (γ_{xy})_{(x,y)∈{1,…,s}²}, is a square matrix of order s = |S| that is given by

∀(x, y) ∈ {1, …, s}²:  γ_{xy} = P({s_i = h⁻¹(x)} | {s_j = h⁻¹(y)}) = P({s_i = h⁻¹(x)} ∩ {s_j = h⁻¹(y)}) / P({s_j = h⁻¹(y)}),

where (i, j) ∈ I² with i ≠ j is arbitrary.
We conclude this Appendix with a statement about the spectrum of Γ.
Proposition 5 Suppose Assumptions 1 and 2 are satisfied. Then the eigenvalues of Γ are
real.
Proof. Suppose Assumption 1 is satisfied and assume that the distribution of (s₁, …, s_n)ᵀ is permutation invariant. Let (i, j) ∈ I² with i ≠ j. Let Λ = (λ_{xy})_{(x,y)∈S²} be the diagonal matrix of order s given by

∀x ∈ S:  λ_{xx} = P({s_j = x}).

Note that, according to Assumption 1, Λ is positive definite (and therefore non-singular). We write Λ⁻¹ = (λ^{(-1)}_{xy})_{(x,y)∈S²}. Let Σ = (σ_{xy})_{(x,y)∈S²} be the square matrix of order s given by

∀(x, y) ∈ S²:  σ_{xy} = P({s_i = x} ∩ {s_j = y}).

Note that Σ is symmetric because the distribution of (s₁, …, s_n)ᵀ is permutation invariant. We have Σ Λ⁻¹ = Γ. Indeed, for all (x, y) ∈ S²,

Σ_{z=1}^{s} σ_{xz} λ^{(-1)}_{zy} = σ_{xy} λ^{(-1)}_{yy} = P({s_i = x} ∩ {s_j = y}) / P({s_j = y}) = P({s_i = x} | {s_j = y}) = γ_{xy}.

Since Λ is symmetric and positive definite, it has a unique square root Λ^{1/2}, which is symmetric and positive definite (and therefore non-singular). Let Λ^{-1/2} denote the inverse of Λ^{1/2}. We have

Λ^{-1/2} Γ Λ^{1/2} = Λ^{-1/2} (Σ Λ⁻¹) Λ^{1/2} = Λ^{-1/2} Σ Λ^{-1/2},

that is, Γ is similar to the symmetric matrix Λ^{-1/2} Σ Λ^{-1/2}. Note that the spectrum of Λ^{-1/2} Σ Λ^{-1/2} is real because it is symmetric. We conclude that the spectrum of Γ is real because similar matrices have the same spectrum.

²⁰ An order isomorphism is an order-preserving bijection.
A.1.3 Main result using Assumption 3
Proposition 6 If the distribution of (s₁, …, s_n)ᵀ is permutation invariant and s₁ has the discrete uniform distribution on S, then Γᵀ = Γ, that is, Γ is symmetric.

Proof. Assume that the distribution of (s₁, …, s_n)ᵀ is permutation invariant and s₁ has the discrete uniform distribution on S. It follows that s₁, …, s_n are identically distributed with the discrete uniform distribution on S (Lemma 2). Let (x, y) ∈ S². We need to show that γ_{xy} = γ_{yx}. We find

γ_{xy} = P({s_i = x} ∩ {s_j = y}) / P({s_j = y}) = P({s_i = y} ∩ {s_j = x}) / P({s_j = x}) = γ_{yx}.

The first and the third equality are according to (26). The second equality follows from the assumption that the distribution of (s₁, …, s_n)ᵀ is permutation invariant and s₁ has the discrete uniform distribution on S. Indeed, P({s_i = x} ∩ {s_j = y}) = P({s_i = y} ∩ {s_j = x}) because (s_i, s_j)ᵀ and (s_j, s_i)ᵀ are identically distributed (Lemma 3) and P({s_j = y}) = P({s_j = x}) because s_j has the discrete uniform distribution on S.
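Propositions 5 and 6 are easy to check numerically. In the sketch below (the joint pmfs are toy assumptions, not from the paper), the information matrix is built as γ_xy = P(s_i = x | s_j = y) from a symmetric joint pmf Σ:

```python
import numpy as np

# Symmetric joint pmf Sigma (permutation invariance) with non-uniform marginals.
Sigma = np.array([[0.20, 0.05, 0.05],
                  [0.05, 0.10, 0.15],
                  [0.05, 0.15, 0.20]])
marg = Sigma.sum(axis=0)                 # marginal distribution of s_j
Gamma = Sigma @ np.diag(1.0 / marg)      # gamma_xy = P(s_i = x | s_j = y)

# Proposition 5: Gamma is similar to diag(marg)^(-1/2) Sigma diag(marg)^(-1/2),
# which is symmetric, so the spectrum of Gamma is real.
S_sym = np.diag(marg ** -0.5) @ Sigma @ np.diag(marg ** -0.5)
print(np.allclose(np.sort(np.linalg.eigvals(Gamma).real),
                  np.sort(np.linalg.eigvalsh(S_sym))))       # True

# Proposition 6: with a uniform marginal, Gamma itself is symmetric.
Sigma_u = np.array([[0.200000, 0.080000, 0.053333],
                    [0.080000, 0.180000, 0.073333],
                    [0.053333, 0.073333, 0.206667]])
Gamma_u = Sigma_u @ np.diag(1.0 / Sigma_u.sum(axis=0))
print(np.allclose(Gamma_u, Gamma_u.T))                       # True
```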
A.2 The model with unknown λ

Assume that the uncertainty is on the synergy parameter λ, which is an unknown common value for all agents. As in the case of unknown α, there are s different states of the world so that λ can take s different values: λ ∈ {λ₁, …, λ_s}. There are s different values for a signal, so that agents can be of s different types, which we denote by S = {1, …, s}.
A.2.1 Equilibrium
When agent i receives the signal s_i = l, individual i computes the following conditional expected utility:

E[u_i | {s_i = l}] = α E[x_i | {s_i = l}] − (1/2) E[x_i² | {s_i = l}] + Σ_{j=1}^{n} g_{ij} E[λ x_i x_j | {s_i = l}]
  = α x_i(l) − (1/2) x_i(l)² + x_i(l) Σ_{j=1}^{n} g_{ij} E[λ x_j | {s_i = l}].

The first-order conditions are given by:

∀l = 1, …, s:  ∂E[u_i | {s_i = l}] / ∂x_i(l) = α − x_i*(l) + Σ_{j=1}^{n} g_{ij} E[λ x_j* | {s_i = l}] = 0.

When agent i receives the signal s_i = l, for each possible j, we have that

E[λ x_j | {s_i = l}] = Σ_{r=1}^{s} Σ_{m=1}^{s} λ_r x_j(m) P({λ = λ_r} ∩ {s_j = m} | {s_i = l})
  = Σ_{m=1}^{s} ( Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) λ_r ) x_j(m)
  = λ_max Σ_{m=1}^{s} ( Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) (λ_r / λ_max) ) x_j(m),

where λ_max := max{λ₁, …, λ_s}. We define an s × s matrix Γ̃ with entries (γ̃_{lm})_{(l,m)∈{1,…,s}²}, where γ̃_{lm} is defined by (individual i receives signal l while individual j receives signal m):

γ̃_{lm} = Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) (λ_r / λ_max).    (38)

Observe that, in the case of incomplete information on λ instead of α, for all r ∈ {1, …, s}, λ_r / λ_max ≤ 1. Therefore, Γ̃ is non-stochastic because

Σ_{m=1}^{s} γ̃_{lm} = Σ_{m=1}^{s} Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) (λ_r / λ_max)
  ≤ Σ_{m=1}^{s} Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) = 1.

Altogether, this means that we can write the first-order conditions as follows:

α − x_i*(l) + λ_max Σ_{j=1}^{n} g_{ij} Σ_{m=1}^{s} γ̃_{lm} x_j*(m) = 0.

Therefore the system of best replies is now given by:

( x(1); …; x(s) ) = ( I_{ns} − λ_max Γ̃ ⊗ G )⁻¹ ( α 1_n; …; α 1_n ),

where Γ̃ captures the information structure and G the network.
To characterize the equilibrium, we can use the same techniques as for the case when α was unknown. We have the following result.
Theorem 4 Consider the case when the strength of interactions λ is unknown. Assume that Γ̃ is diagonalizable. Let λ₁(Γ̃) ≥ · · · ≥ λ_s(Γ̃) be the eigenvalues of the information matrix Γ̃ and λ₁(G) ≥ · · · ≥ λ_n(G) be the eigenvalues of the adjacency matrix G, where λ_max(Γ̃) = max_q {|λ_q(Γ̃)|} and λ_max(G) = max_k {|λ_k(G)|}. Then, there exists a unique Bayesian-Nash equilibrium if α > 0, λ_min > 0 and

λ_max < 1 / ( λ_max(Γ̃) λ_max(G) ).    (39)

If the signal received is s_i = l, the equilibrium efforts are given by:

x*(l) = α Σ_{q=1}^{s} Σ_{m=1}^{s} a_{lq} a^{(-1)}_{qm} b(λ_q(Γ̃) λ_max, G),    (40)

for l = 1, …, s.
Proof: See Appendix A.5.
The results are relatively similar to the case when α was unknown. One of the main differences with Theorem 1 is that the condition (39) is weaker, since it imposes a larger upper bound on λ_max compared to λ_max < 1/λ_max(G), because Γ̃ is not stochastic and its largest eigenvalue λ_max(Γ̃) is not 1. We have the following remark that shows, however, that the condition λ_max < 1/λ_max(G) is still a sufficient condition.
Remark 2 A sufficient condition for existence and uniqueness of a Bayesian-Nash equilibrium when λ is unknown is that λ_max < 1/λ_max(G).
Proof: See Appendix A.5.
Theorem 4 gives a complete characterization of equilibrium efforts as a function of weighted Katz-Bonacich centralities when λ is unknown. In the next section, we show how to calculate these equilibrium efforts for some specific networks and specific information structures.
A.2.2 Examples
Example 1 Consider first a model which is equivalent to that of Section 4, where state a corresponds to λ₁ and state b corresponds to λ₂ (λ_a = λ₁, λ_b = λ₂ and n = s = 2). Here, λ takes two values, λ_a and λ_b, with λ_a < λ_b. In that case, we have:

Γ = ( p_a      1 − p_a
      1 − p_b  p_b )
and
Γ̃ = (1/λ_max) ( p_a λ_a        (1 − p_a) λ_a
               (1 − p_b) λ_b   p_b λ_b ).

The two eigenvalues can be calculated but have cumbersome values. To simplify, assume as in Section 5.3.2 that p = 0.6 and that q = 0.85, so that p_a = 0.703 and p_b = 0.776. Also, assume that λ_a = 0.2 and λ_b = λ_max = 0.4. Then

Γ̃ = ( 0.352  0.149
      0.224  0.776 )

and the two eigenvalues are λ₁(Γ̃) = 0.844 and λ₂(Γ̃) = 0.284. Then:

A = ( 0.290  0.910        A⁻¹ = ( 0.418  0.918
      0.957  −0.414 ),            0.966  −0.293 ).
We can now use Theorem 4 and state that, if λ_max(G) < 1/(0.4 × 0.844) = 2.962, then, if individual i receives the signal s_i = a, she provides a unique effort x_i*({s_i = a}) := x*_a given by:

x*_a = α [ a_{a1} a^{(-1)}_{1a} b(0.338, G) + a_{a2} a^{(-1)}_{2a} b(0.114, G) + a_{a1} a^{(-1)}_{1b} b(0.338, G) + a_{a2} a^{(-1)}_{2b} b(0.114, G) ]
     = α [ 0.121 b(0.338, G) + 0.879 b(0.114, G) + 0.266 b(0.338, G) − 0.267 b(0.114, G) ].

If individual i receives the signal s_i = b, she provides a unique effort x_i*({s_i = b}) := x*_b, which is given by:

x*_b = α [ a_{b1} a^{(-1)}_{1a} b(0.338, G) + a_{b2} a^{(-1)}_{2a} b(0.114, G) + a_{b1} a^{(-1)}_{1b} b(0.338, G) + a_{b2} a^{(-1)}_{2b} b(0.114, G) ]
     = α [ 0.4 b(0.338, G) − 0.4 b(0.114, G) + 0.879 b(0.338, G) + 0.121 b(0.114, G) ].
Let us consider the star network of Figure 1 (Section 3.2); then λ_max(G) = √2 < 2.962. We have

b₁(0.338, G) = 2.172, b₂(0.338, G) = b₃(0.338, G) = 1.734, and b₁(0.114, G) = 1.261, b₂(0.114, G) = b₃(0.114, G) = 1.144.

In that case, if α = 0.5, we have x*₁a = 0.806 and x*₁b = 1.213 for the central individual 1, and x*₂a = x*₃a = 0.686 and x*₂b = x*₃b = 0.949 for the peripheral individuals 2 and 3.
If we now consider the complete network with three agents (where λ_max(G) = 2 < 2.962), we obtain

b_i(0.338, G) = 3.086 and b_i(0.114, G) = 1.295 for i = 1, 2, 3,

and thus x*_{ia} = 0.993 and x*_{ib} = 1.793 for every individual i = 1, 2, 3.
Example 2 Let us now consider the model of Section 5.2 with n = s and where the s × s matrix P is given by (27) and the s × s information matrix Γ by (28). We want to compute the matrix Γ̃ and then get a closed-form expression for the Bayesian-Nash equilibrium. We have seen that

P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) =
   p²                     if r = m = l,
   p(1 − p)/(s − 1)       if either (r = l and m ≠ l) or (r = m and m ≠ l),
   ((1 − p)/(s − 1))²     if r ≠ l and r ≠ m,

and that, if m = l, we obtain:

γ_{ll} = p² + (1 − p)²/(s − 1),

while, if m ≠ l, we get:

γ_{lm} = (1 − p) [ 2p(s − 1) + (s − 2)(1 − p) ] / (s − 1)².

It is easily verified that, if m = l, then

γ̃_{ll} = (1/λ_max) ( p² λ_l + ((1 − p)/(s − 1))² Σ_{r≠l} λ_r )
       = (1/λ_max) [ ( p² − ((1 − p)/(s − 1))² ) λ_l + ((1 − p)/(s − 1))² s λ̂ ],

where λ̂ is the expected value of λ with respect to the prior distribution, i.e. λ̂ = (1/s) Σ_{r=1}^{s} λ_r. If, on the contrary, m ≠ l, then

γ̃_{lm} = Σ_{r=1}^{s} P({λ = λ_r} ∩ {s_j = m} | {s_i = l}) (λ_r / λ_max)
       = (1/λ_max) [ ( p(1 − p)/(s − 1) ) (λ_l + λ_m) + ((1 − p)/(s − 1))² Σ_{r≠l,m} λ_r ]
       = (1/λ_max) ((1 − p)/(s − 1)) [ ( p − (1 − p)/(s − 1) ) (λ_l + λ_m) + ((1 − p)/(s − 1)) s λ̂ ].

Clearly, the matrix Γ̃ is symmetric and thus diagonalizable.
As above, assume that p = 0.6 and s = 3. This means that λ can take three values, λ_a = 0.2, λ_b = 0.3, λ_c = λ_max = 0.4, so that λ̂ = 0.3, and that each agent i receives one of three signals: a, b or c. In that case,

Γ̃ = ( 0.25  0.19  0.21
      0.19  0.33  0.23
      0.21  0.23  0.41 )

and the three eigenvalues are λ₁(Γ̃) = 0.76, λ₂(Γ̃) = 0.138 and λ₃(Γ̃) = 0.091. Then:

A = ( 0.485  0.166   0.859          A⁻¹ = ( 0.485  0.569   0.664
      0.569  0.686  −0.454 ),               0.166  0.686  −0.709
      0.664 −0.709  −0.238 )                0.859 −0.454  −0.238 ).

We can now use Theorem 4 and state that, if λ_max(G) < 1/(0.4 × 0.76) ≈ 3.289, then, if each individual i receives the signal s_i = a, she provides a unique effort given by:

x*({s_i = a}) = α [ a_{a1} a^{(-1)}_{1a} b(0.304, G) + a_{a2} a^{(-1)}_{2a} b(0.055, G) + a_{a3} a^{(-1)}_{3a} b(0.037, G)
                  + a_{a1} a^{(-1)}_{1b} b(0.304, G) + a_{a2} a^{(-1)}_{2b} b(0.055, G) + a_{a3} a^{(-1)}_{3b} b(0.037, G)
                  + a_{a1} a^{(-1)}_{1c} b(0.304, G) + a_{a2} a^{(-1)}_{2c} b(0.055, G) + a_{a3} a^{(-1)}_{3c} b(0.037, G) ]
              = α [ 0.235 b(0.304, G) + 0.028 b(0.055, G) + 0.737 b(0.037, G)
                  + 0.276 b(0.304, G) + 0.114 b(0.055, G) − 0.39 b(0.037, G)
                  + 0.322 b(0.304, G) − 0.118 b(0.055, G) − 0.204 b(0.037, G) ].

Similar calculations can be done when each agent i receives the signals s_i = b and s_i = c.
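The closed-form entries of Γ̃ derived above can be checked numerically. The sketch below rebuilds Γ̃ for p = 0.6, s = 3 and λ = (0.2, 0.3, 0.4) and recovers the matrix and the leading eigenvalue used in the example:

```python
import numpy as np

# Rebuild Gamma_tilde of Example 2 from the closed-form diagonal and
# off-diagonal expressions derived above.
p, s = 0.6, 3
lam = np.array([0.2, 0.3, 0.4])
lam_max, lam_bar = lam.max(), lam.mean()
q = (1 - p) / (s - 1)                # probability weight of a "wrong" state

Gt = np.empty((s, s))
for l in range(s):
    for m in range(s):
        if l == m:
            Gt[l, m] = ((p**2 - q**2) * lam[l] + q**2 * s * lam_bar) / lam_max
        else:
            Gt[l, m] = q * ((p - q) * (lam[l] + lam[m]) + q * s * lam_bar) / lam_max

print(np.round(Gt, 2))                        # matches the matrix displayed above
print(round(np.linalg.eigvalsh(Gt)[-1], 2))   # 0.76, the leading eigenvalue
```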
A.3 The Kronecker product

Definition 4 Let A be an m₁ × m₂ matrix and B an n₁ × n₂ matrix. The Kronecker product (or tensor product) of A and B is defined as the m₁n₁ × m₂n₂ matrix

A ⊗ B = ( a₁₁ B       …   a_{1m₂} B
          ⋮           ⋱   ⋮
          a_{m₁1} B   …   a_{m₁m₂} B ).

Here are some useful results:

Proposition 7 Let A, B, C, and D be matrices.

(i) If A and C are conformable and B and D are conformable, then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD  and  (A ⊗ B)ᵀ = Aᵀ ⊗ Bᵀ.

(ii) If A and B are nonsingular, then

(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹.

(iii) Let the m × m matrix A have eigenvalues μ₁, …, μ_m and let the n × n matrix B have eigenvalues ν₁, …, ν_n. Then the mn eigenvalues of A ⊗ B are

μ₁ν₁, …, μ₁ν_n, μ₂ν₁, …, μ₂ν_n, …, μ_mν₁, …, μ_mν_n.
A.4 Jordan decomposition

A.4.1 Some useful results on the Jordan canonical form

Diagonalizable matrices²¹ Consider a matrix Γ ∈ C^{s×s}, and denote its spectrum by σ(Γ). Let μ ∈ σ(Γ).

Definition 5

1. The algebraic multiplicity alg mult_Γ(μ) of an eigenvalue μ is the multiplicity of μ as a root of the characteristic polynomial of Γ.

2. The geometric multiplicity geo mult_Γ(μ) of an eigenvalue μ is the dimension of the subspace spanned by the eigenvectors corresponding to μ.

The above definition suggests that it is possible for a repeated eigenvalue to be associated with more than one independent eigenvector. This implies that geo mult_Γ(μ) = dim null(Γ − μI_s). Different eigenvalues, though, correspond to linearly independent (“different”) eigenvectors. In fact, it can be shown (see, for example, Sadun, 2007, Theorem 4.8) that

1 ≤ geo mult_Γ(μ) ≤ alg mult_Γ(μ).    (41)

This gives rise to the following definition.

²¹ For a more thorough technical discussion as well as some intuition behind matrix diagonalizability, see Meyer (2001), Chap. 7.8, and Sadun (2007), Chap. 4.5.
Definition 6

1. An eigenvalue μ is called simple if alg mult_Γ(μ) = 1, that is, μ is a non-repeated eigenvalue of Γ.

2. An eigenvalue is called semi-simple if alg mult_Γ(μ) = geo mult_Γ(μ). A matrix is called semi-simple if all its eigenvalues are semi-simple.

3. An eigenvalue μ that is not semi-simple is called defective or deficient of (order of) deficiency def_Γ(μ) := alg mult_Γ(μ) − geo mult_Γ(μ). A matrix with one or more defective eigenvalues is called defective or deficient.

A well-known result in linear algebra is the following (see e.g. Meyer, 2001):

Proposition 8 A matrix Γ is diagonalizable if and only if all its eigenvalues are semi-simple.

It follows directly from (41) that all simple eigenvalues are also semi-simple, but the converse, of course, does not hold in general. Hence

Corollary 1 If all the eigenvalues of the matrix Γ are simple, then Γ is diagonalizable.

As the above discussion implies, the converse is not true: there are diagonalizable matrices with repeated eigenvalues. The identity matrix I_s, for example, has only one eigenvalue, μ = 1, yet it is diagonalizable. In fact, any non-singular matrix can serve as its eigenbasis since alg mult_{I_s}(1) = geo mult_{I_s}(1) = s.

Although deficient matrices are not diagonalizable, they can still be written as a similarity transformation of a matrix with their eigenvalues on the diagonal. This matrix will no longer be diagonal, but it will be block diagonal and have a special form, as discussed in the following subsection.
Definition and motivation The Jordan canonical form, or simply Jordan form, of a matrix Γ ∈ C^{s×s} is an s × s block diagonal matrix similar to Γ, that is, a matrix J_Γ satisfying Γ = A J_Γ A⁻¹, that has the form

J_Γ = ( Ĵ₁  0   …   0
        0   ⋱   ⋱   ⋮
        ⋮   ⋱   ⋱   0
        0   …   0   Ĵ_T )

for some nonsingular matrix A. The square sub-matrices Ĵ_t, t ∈ {1, …, T}, are called Jordan segments. There exists a Jordan segment Ĵ_t of size m_t × m_t for each distinct eigenvalue μ_t of Γ. Hence, the number T of Jordan segments in the Jordan form of a matrix is equal to the number of distinct eigenvalues of that matrix, T = |σ(Γ)|. The size of Jordan segment Ĵ_t is equal to the algebraic multiplicity of the corresponding eigenvalue μ_t, that is, m_t = alg mult_Γ(μ_t), and its main diagonal consists of μ_t’s.

Each Jordan segment Ĵ_t is a block diagonal matrix itself, consisting of Jordan blocks J_h that are square matrices of the form

J_h(μ_t) = ( μ_t  1
                  μ_t  1
                       ⋱   1
             0             μ_t  1
                                μ_t )    (42)

The number of Jordan blocks in a Jordan segment Ĵ_t is equal to the geometric multiplicity of the corresponding eigenvalue μ_t,²² while the size of the largest Jordan block in segment Ĵ_t is equal to the index of μ_t. The structure of the Jordan form of a matrix, i.e. the number and the size of its Jordan segments and blocks, is uniquely determined by the elements of that matrix. This implies that the Jordan form of a matrix is unique up to the ordering of the Jordan segments and of the blocks within each segment.

The vectors comprising the basis (A, A⁻¹) of the similarity transformation are called the generalized eigenvectors of Γ, and are, in general, not unique. The set of generalized eigenvectors corresponding to a Jordan block J_h(μ_t) constitutes a Jordan chain. Notice that the eigenvector associated with μ_t will be a member of that Jordan chain, that is, eigenvectors are also generalized eigenvectors of a matrix. The converse is, however, not true.
Functions of non-diagonal Jordan forms Directly computing functions of nontrivial Jordan blocks can be cumbersome. Yet there exists an elegant expression for differentiable functions of Jordan blocks.

²² More specifically, it can be shown that the number of i-dimensional Jordan blocks in Ĵ_t is given by

ν_i(μ_t) = r_{i−1}(μ_t) − 2 r_i(μ_t) + r_{i+1}(μ_t),

where r_i(μ_t) = rank( (Γ − μ_t I_s)^i ).
Proposition 9 For an m × m Jordan block J(μ_t) associated with eigenvalue μ_t, and for a function f(·) such that f(μ_t), f′(μ_t), …, f^{(m−1)}(μ_t) exist, f(J(μ_t)) is defined as:

f(J(μ_t)) = ( f(μ_t)   f′(μ_t)   f″(μ_t)/2!   …   f^{(m−1)}(μ_t)/(m−1)!
              0        f(μ_t)    f′(μ_t)      ⋱   ⋮
              ⋮                  ⋱            ⋱   f″(μ_t)/2!
                                              f(μ_t)   f′(μ_t)
              0        …                               f(μ_t) )    (43)
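For f(x) = x^k, the entries of f(J(μ)) in (43) are the binomial terms C(k, j−i) μ^{k−(j−i)}. A quick numerical check (a sketch) against direct matrix powers:

```python
import numpy as np
from math import comb

def jordan_block_power(mu, m, k):
    """Entries of J(mu)^k from the f(x) = x^k case of the formula above:
    (J^k)[i, j] = C(k, j-i) * mu^(k-(j-i)) for j >= i, zero below the diagonal."""
    Jk = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            d = j - i
            if k >= d:
                Jk[i, j] = comb(k, d) * mu ** (k - d)
    return Jk

mu, m, k = 0.5, 3, 6
J = mu * np.eye(m) + np.diag(np.ones(m - 1), 1)   # a single m x m Jordan block
print(np.allclose(jordan_block_power(mu, m, k),
                  np.linalg.matrix_power(J, k)))   # True
```

This closed form for J^k is exactly what is needed below when expanding the Neumann series of a non-diagonalizable information matrix.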
A.4.2 Proof of Theorem 2

A more general approach Recall that in the n-player game, the equilibrium efforts of the agents as a function of the signal they receive are given by

( x*({s = 1}); …; x*({s = s}) ) = [ I_{ns} − λ(Γ ⊗ G) ]⁻¹ ( α̂₁ 1_n; …; α̂_s 1_n ).    (44)

Let us rewrite the proof of Theorem 1 using the Jordan decomposition of matrix Γ instead of its diagonal eigenvalue decomposition. We have:

Γ = A J_Γ A⁻¹,

where A is a non-singular s × s matrix and J_Γ is the Jordan form of matrix Γ. Recall that since G is a real symmetric matrix, it is diagonalizable with

G = C D_G C⁻¹.

Let λ_max(G) denote the spectral radius of matrix G, that is, λ_max(G) := max_{μ∈σ(G)} |μ|. Then, assuming that λ λ_max(Γ ⊗ G) = λ λ_max(Γ) λ_max(G) < 1, it follows that

[ I_{ns} − λ(Γ ⊗ G) ]⁻¹ = [ I_{ns} − λ (A ⊗ C)(J_Γ ⊗ D_G)(A⁻¹ ⊗ C⁻¹) ]⁻¹
  = Σ_{k=0}^{+∞} (A ⊗ C) λ^k (J_Γ ⊗ D_G)^k (A⁻¹ ⊗ C⁻¹)
  = (A ⊗ C) Σ_{k=0}^{+∞} ( λ^k J_Γ^k ⊗ D_G^k ) (A⁻¹ ⊗ C⁻¹).    (45)

The above expression differs from the respective expression in the paper in the term J_Γ^k. Recall that J_Γ is a block diagonal matrix, consisting of Jordan blocks and zero matrices. Thus

J_Γ^k = ( J₁^k  0     …    0
          0     J₂^k  ⋱    ⋮
          ⋮     ⋱     ⋱    0
          0     …     0    J_T^k ).

An additional complication stems from calculating the terms J_q^k. For Jordan blocks of diagonalizable matrices, or Jordan blocks of deficient matrices associated with semi-simple eigenvalues, it is easy to calculate these terms since J_q^k will be a degenerate 1 × 1 matrix given by:

J_q^k = [μ_q]^k = [μ_q^k].

If, however, Γ is not diagonalizable, its Jordan form will contain at least one non-diagonal Jordan block of the form given in (42). In that case, we need to resort to formula (43), letting f(x) = x^k:

J_q^k = ( μ^k   k μ^{k−1}   (k(k−1)/2) μ^{k−2}   …   (k(k−1)⋯(k−m_q+2)/(m_q−1)!) μ^{k−(m_q−1)}
          0     μ^k         k μ^{k−1}            ⋱   ⋮
          ⋮                 ⋱                    ⋱   (k(k−1)/2) μ^{k−2}
                                                 μ^k   k μ^{k−1}
          0     …                                      μ^k )

or more compactly

J_q^k = ( μ^k   C(k,1) μ^{k−1}   C(k,2) μ^{k−2}   …   C(k, m_q−1) μ^{k−(m_q−1)}
          0     μ^k              C(k,1) μ^{k−1}   ⋱   ⋮
          ⋮                      ⋱                ⋱   C(k,1) μ^{k−1}
          0     …                                     μ^k )    (46)

where C(k, d) denotes the binomial coefficient. It follows then that J_q^k, and thus J_Γ^k, will not be diagonal matrices as D_Γ^k is. As a result, the breakdown of the vector of equilibrium efforts x* into Katz-Bonacich measures is complicated and, to obtain the expression (29), we will first consider the case of a 3 × 3 information matrix Γ (3 states of the world and 3 signals).
The case of a 3 × 3 information matrix Γ In order to see how expression (29) changes
when the information matrix Γ is non-diagonalizable, we first start with an example. Suppose
that there are three possible values for the signal, leading to a 3 × 3 information matrix Γ.
Assume that Γ possesses a simple eigenvalue, 1 , and a defective double eigenvalue, 2 = 3 .
Hence Γ will not be diagonalizable. Yet, as discussed above, a diagonal Jordan decomposition
will still exist and given by:
$$\mathbf{J}_\Gamma = \begin{pmatrix} \mathbf{J}_{1\{1\times 1\}} & \mathbf{0}_{\{1\times 2\}} \\ \mathbf{0}_{\{2\times 1\}} & \mathbf{J}_{2\{2\times 2\}} \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$
b denotes
For notational convenience, it will useful to relabel the eigenvalues of Γ so that 
b1 := 1 = 2 and 
b2 := 3 .
the eigenvalue associated with the −th Jordan block, that is 
Thus,
⎛
Using (46), we have:
⎞
b1 0 0

⎜
b2 1 ⎟
JΓ = ⎝ 0 
⎠
b3
0 0 
JΓ
⎛
b

1
⎜
=⎜
⎝ 0
0
⎞
0
0
⎟

b−1 ⎟
b 

⎠
2
2

b
0

2
Then, by using standard matrix algebra, it is easy to show that
$$\sum_{k=0}^{+\infty}\phi^k\left(\mathbf{J}_\Gamma^k \otimes \mathbf{D}_G^k\right) = \begin{pmatrix} \sum_k \phi^k\hat{\lambda}_1^k\mathbf{D}_G^k & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \sum_k \phi^k\hat{\lambda}_2^k\mathbf{D}_G^k & \sum_k \phi^k k\hat{\lambda}_2^{k-1}\mathbf{D}_G^k \\ \mathbf{0} & \mathbf{0} & \sum_k \phi^k\hat{\lambda}_2^k\mathbf{D}_G^k \end{pmatrix} \tag{47}$$
Recall that if $\boldsymbol{\Gamma}$ is diagonalizable, then the matrix $\sum_{k=0}^{+\infty}\phi^k(\mathbf{D}_\Gamma^k \otimes \mathbf{D}_G^k)$ is diagonal, as shown in the proof of Theorem 1. Here, in our example, this is not the case, since the term $\sum_k \phi^k k\hat{\lambda}_2^{k-1}\mathbf{D}_G^k$ appears in the $(2,3)$ block of this matrix. This term is a source of potential concern since, apart from complicating the algebra, it is not straightforward to interpret it as some measure of Katz-Bonacich centrality. It turns out, however, that such an interpretation is possible. Indeed, taking into account (47), expression (45) can be written as:
[I  − (Γ ⊗ G)]−1 =
54
⎞
⎞ ⎛P   k
⎛
b

D
0
0

11 C 12 C 13 C
1 G

⎟
P  b k P
⎟⎜
⎜
 b−1 k ⎟
⎜
⎝21 C 22 C 23 C⎠ ⎝



D

D
0
2 G
2
G⎠


P  b k
31 C 32 C 33 C
0
0
  2 DG
⎞
⎛ (−1)
(−1)
(−1)
11 C−1 12 C−1 13 C−1
⎟
⎜
(−1)
(−1)
−1
× ⎝(−1)
22 C−1 23 C−1 ⎠
21 C
(−1)
(−1)
(−1)
31 C−1 32 C−1 33 C−1
⎛
⎞
P  b
P  b
P
P  b
 b−1




 C
 1 DG 12 C   2 DG 12 C   2 DG + 13 C   2 DG
⎜ 11 P
⎟
⎜
b DG  22 C P   
b DG  22 C P   
b−1 DG  + 23 C P   
b DG  ⎟
= ⎝21 C    
1
2
2
2
⎠



P  b
P  b
P
P  b
 b−1




31 C   1 DG 32 C   2 DG 32 C   2 DG + 33 C   2 DG
⎛ (−1)
⎞
(−1)
(−1)
11 C−1 12 C−1 13 C−1
⎜
⎟
(−1)
(−1)
−1
× ⎝(−1)
22 C−1 23 C−1 ⎠
21 C
(−1)
(−1)
(−1)
31 C−1 32 C−1 33 C−1
⎛
³
³
³
³
´
´
´
´
(−1)
(−1)
(−1)
(−1)
b
b
b
b
  M0 1  G + 12 21 M0 2  G + 12 31 M1 2  G + 13 31 M0 2  G   
⎜ 11 11
³
³
³
³
´
´
´
´
⎜
(−1)
b1  G + 22 (−1) M0 
b2  G + 22 (−1) M1 
b2  G + 23 (−1) M0 
b2  G   
= ⎜ 21 11 M0 
21
31
31
⎝
³
³
³
³
´
´
´
´
(−1)
(−1)
(−1)
b1  G + 32  M0 
b2  G + 32  M1 
b2  G + 33 (−1) M0 
b2  G   
31 11 M0 
21
31
31
´
´
´
´ ⎞
³
³
³
³
(−1)
b1  G + 12 (−1) M0 
b2  G + 12 (−1) M1 
b2  G + 13 (−1) M0 
b2  G
11 13 M0 
23
33
33
³
³
³
³
´
´
´
´ ⎟
(−1)
(−1)
(−1)
b1  G + 22  M0 
b2  G + 22  M1 
b2  G + 23 (−1) M0 
b2  G ⎟
21 13 M0 
⎟
23
33
33
³
³
³
³
´
´
´
´ ⎠
(−1)
(−1)
(−1)
(−1)
b1  G + 32  M0 
b2  G + 32  M1 
b2  G + 33  M0 
b2  G
31 13 M0 
23
33
33
⎛
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎜
⎝
⎞
´
³
(−1)
b  G ⎟
1 3 M− 
⎟
⎟
=1 =−1 +1 =−1 +1
=1 =−1 +1 =−1 +1
⎟
+
−1 +


−1
2
2


³
³
´
´ ⎟
X X
X
X
X
X
(−1)
(−1)
b  G   
b  G ⎟
2 1 M− 
2 3 M− 
⎟
⎟
=1 =−1 +1 =−1 +1
=1 =−1 +1 =−1 +1
⎟
−1 +
−1 +
2
2


³
³
´
´ ⎟
X
X
X
X
X
X
⎟
(−1)
(−1)
b  G   
b  G ⎠
3 1 M− 
3 3 M− 
2
X
−1 +
X

X
2
´
³
X
(−1)
b
1 1 M−   G   
=1 =−1 +1 =−1 +1
−1 +
X

X
=1 =−1 +1 =−1 +1
(48)
55
where
$$\mathbf{M}_0(\hat{\lambda}, \mathbf{G}) := \sum_{k=0}^{+\infty}\phi^k\hat{\lambda}^k\mathbf{G}^k = \boldsymbol{\Lambda}(\hat{\lambda}, \mathbf{G}), \qquad \mathbf{M}_1(\hat{\lambda}, \mathbf{G}) := \sum_{k=0}^{+\infty}k\phi^k\hat{\lambda}^{k-1}\mathbf{G}^k, \qquad m_q := \sum_{t=1}^{q}s_t$$
where $s_t$ denotes the size of the $t$-th Jordan block (with $m_0 := 0$). It can be observed that
$$\mathbf{M}_0(\hat{\lambda}, \mathbf{G})\mathbf{1}_n = \boldsymbol{\Lambda}(\hat{\lambda}, \mathbf{G})\mathbf{1}_n = \mathbf{b}(\phi\hat{\lambda}, \mathbf{G}) \tag{49}$$
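Equality (49) can be illustrated numerically; the small network and parameter values below are arbitrary choices of ours satisfying $\phi\hat{\lambda} < 1/\lambda_{\max}(\mathbf{G})$ so that the series converges:

```python
import numpy as np

# Illustrative 3-node star network and admissible parameters.
G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
phi, lam_hat = 0.2, 0.9          # phi * lam_hat * lam_max(G) < 1

n = G.shape[0]
ones = np.ones(n)

# M_0(lam_hat, G) 1 = sum_k phi^k lam_hat^k G^k 1 (truncated series).
series = sum((phi * lam_hat) ** k * np.linalg.matrix_power(G, k) @ ones
             for k in range(200))

# b(phi*lam_hat, G) = (I - phi*lam_hat*G)^{-1} 1 (closed form).
closed = np.linalg.solve(np.eye(n) - phi * lam_hat * G, ones)

assert np.allclose(series, closed)
```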
b  ) is not as straightforward. Yet it can be shown that
The interpretation of M1 (
b  ) leads to weighted Katz-Bonacich centrality. Indeed, we have the following result.
M1 (
b ) := (I − 
b )−1 1 , and denote the 1 (
b )-weighted KatzLemma 5 Let 1 (
Bonacich centrality by u1 . Then,
b  G)1 = bu (
b  G)
M1 (
1
(50)
b  G), the derivative of the unweighted Katz-Bonacich centrality with respect
Hence M1 (
b , is the 1 (
b )-weighted Katz-Bonacich centrality.
to 
Proof: Recall that, by definition,
$$\mathbf{b}(\phi\hat{\lambda}, \mathbf{G}) := \sum_{k=0}^{+\infty} (\phi\hat{\lambda})^k \mathbf{G}^k \mathbf{1}_n \tag{51}$$
If $\phi\hat{\lambda} < 1/\lambda_{\max}(\mathbf{G})$, then
$$\mathbf{b}(\phi\hat{\lambda}, \mathbf{G}) = (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{1}_n \tag{52}$$
Hence, using the definition of the Katz-Bonacich centrality in (51), we have:
$$\frac{\partial \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}} = \frac{\partial}{\partial \hat{\lambda}}\left[\sum_{k=0}^{+\infty} (\phi\hat{\lambda})^k \mathbf{G}^k \mathbf{1}_n\right] = \sum_{k=0}^{+\infty} k\phi^k\hat{\lambda}^{k-1} \mathbf{G}^k \mathbf{1}_n = \mathbf{M}_1(\hat{\lambda}, \mathbf{G})\mathbf{1}_n \tag{53}$$
Similarly, by the alternative expression for the Katz-Bonacich centrality given in (52), we have:
$$\begin{aligned}
\frac{\partial \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}} &= \frac{\partial}{\partial \hat{\lambda}}\left[(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{1}_n\right] \\
&= -(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\,\frac{\partial(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})}{\partial\hat{\lambda}}\,(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{1}_n \\
&= -(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(-\phi\mathbf{G})(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{1}_n \\
&= \phi(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{G}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{1}_n \\
&= \phi(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{G}\mathbf{1}_n \\
&= \phi(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{u}_1 \\
&= \phi\,\mathbf{b}_{u_1}(\phi\hat{\lambda}, \mathbf{G})
\end{aligned} \tag{54}$$
where $\mathbf{u}_1(\phi\hat{\lambda}) := (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{G}\mathbf{1}_n$, and the fifth equality follows from the fact that $(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}$ and $\mathbf{G}$ commute. The following lemma establishes this commutativity.
Lemma 6 Let $\mathbf{A}$ be a nonsingular matrix and let $\mathbf{B}$ be a conformable matrix that commutes with $\mathbf{A}$. Then $\mathbf{B}$ also commutes with $\mathbf{A}^{-k}$, for $k \in \mathbb{N}$.
Proof: Let us start by showing that $\mathbf{A}^{-1}$ and $\mathbf{B}$ commute. By using the assumption that $\mathbf{A}$ and $\mathbf{B}$ commute, and pre- and post-multiplying by $\mathbf{A}^{-1}$, we obtain:
$$\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} \;\Longrightarrow\; \mathbf{A}^{-1}(\mathbf{A}\mathbf{B})\mathbf{A}^{-1} = \mathbf{A}^{-1}(\mathbf{B}\mathbf{A})\mathbf{A}^{-1} \;\Longrightarrow\; \mathbf{B}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{B} \tag{55}$$
It is now straightforward to show that the matrices $\mathbf{A}^{-k}$ and $\mathbf{B}$ commute. Indeed,
$$\mathbf{A}^{-k}\mathbf{B} = \mathbf{A}^{-k+1}\mathbf{A}^{-1}\mathbf{B} = \mathbf{A}^{-k+1}\mathbf{B}\mathbf{A}^{-1} = \mathbf{A}^{-k+2}\mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-1} = \mathbf{A}^{-k+2}\mathbf{B}\mathbf{A}^{-2} = \cdots = \mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-k+1} = \mathbf{B}\mathbf{A}^{-k}$$
where we have used (55) repeatedly.
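Lemma 6 applied to $\mathbf{A} = \mathbf{I}_n - \phi\hat{\lambda}\mathbf{G}$ and $\mathbf{B} = \mathbf{G}$ can be checked numerically; the path network and parameter values are illustrative choices of ours:

```python
import numpy as np

# Illustrative 3-node path network and admissible parameters.
G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
phi, lam_hat = 0.3, 0.8
M = np.linalg.inv(np.eye(3) - phi * lam_hat * G)  # (I - phi*lam_hat*G)^{-1}

# G commutes with (I - phi*lam_hat*G)^{-k} for k = 1, 2, 3 (Lemma 6).
for k in range(1, 4):
    Mk = np.linalg.matrix_power(M, k)
    assert np.allclose(G @ Mk, Mk @ G)
```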
Let us go back to the proof of Lemma 5. We have shown that
$$\frac{\partial \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}} = \phi\,\mathbf{b}_{u_1}(\phi\hat{\lambda}, \mathbf{G})$$
Therefore, equalities (53) and (54) imply that:
$$\mathbf{M}_1(\hat{\lambda}, \mathbf{G})\mathbf{1}_n = \phi\,\mathbf{b}_{u_1}(\phi\hat{\lambda}, \mathbf{G})$$
which is the statement of the lemma.
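The identity (50) can also be verified numerically on a small example; the triangle network and the parameters below are arbitrary admissible values of ours:

```python
import numpy as np

# Illustrative complete network on 3 nodes; phi * lam_hat * lam_max(G) < 1.
G = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
phi, lam_hat = 0.15, 0.7
n = G.shape[0]
ones = np.ones(n)

# Left-hand side of (50): M_1(lam_hat, G) 1 = sum_k k phi^k lam_hat^(k-1) G^k 1.
lhs = sum(k * phi ** k * lam_hat ** (k - 1) * np.linalg.matrix_power(G, k) @ ones
          for k in range(1, 300))

# Right-hand side: phi * b_{u1}, with u1 = (I - phi*lam_hat*G)^{-1} G 1.
R = np.linalg.inv(np.eye(n) - phi * lam_hat * G)
u1 = R @ G @ ones
rhs = phi * (R @ u1)

assert np.allclose(lhs, rhs)
```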
b  G)1 = bu (
b  G). Substituting (48) into (44),
We have thus showed that M1 (
1
and taking into account (49), the vector of (stacked) equilibrium efforts can be written as:
$$\begin{pmatrix} \mathbf{x}^*(s_1) \\ \mathbf{x}^*(s_2) \\ \mathbf{x}^*(s_3) \end{pmatrix} = \begin{pmatrix} \sum_{j=1}^{3} \hat{\alpha}_j \sum_{q=1}^{2} \sum_{l=m_{q-1}+1}^{m_q} \sum_{h=m_{q-1}+1}^{m_q} a_{1l}\, a^{(-1)}_{hj}\, \mathbf{M}_{h-l}(\hat{\lambda}_q, \mathbf{G})\mathbf{1}_n \\ \sum_{j=1}^{3} \hat{\alpha}_j \sum_{q=1}^{2} \sum_{l=m_{q-1}+1}^{m_q} \sum_{h=m_{q-1}+1}^{m_q} a_{2l}\, a^{(-1)}_{hj}\, \mathbf{M}_{h-l}(\hat{\lambda}_q, \mathbf{G})\mathbf{1}_n \\ \sum_{j=1}^{3} \hat{\alpha}_j \sum_{q=1}^{2} \sum_{l=m_{q-1}+1}^{m_q} \sum_{h=m_{q-1}+1}^{m_q} a_{3l}\, a^{(-1)}_{hj}\, \mathbf{M}_{h-l}(\hat{\lambda}_q, \mathbf{G})\mathbf{1}_n \end{pmatrix} \tag{56}$$
It can thus be seen that the equilibrium strategies are a linear combination of unweighted
and weighted Katz-Bonacich centralities.
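The Jordan-based expansion (45) that underlies (47)-(48) can be exercised numerically. The matrices below are synthetic choices of ours; in particular, unlike the paper's $\boldsymbol{\Gamma}$, this defective $\boldsymbol{\Gamma}$ is not constructed to be stochastic:

```python
import numpy as np

# Defective 3x3 "information" matrix built from its Jordan form: a simple
# eigenvalue 1 and a defective double eigenvalue 0.5 (synthetic example).
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.5]])
A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])               # any invertible matrix
Gamma = A @ J @ np.linalg.inv(A)

G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])               # symmetric network
phi = 0.2                                  # phi * lam_max(Gamma) * lam_max(G) < 1

w, C = np.linalg.eigh(G)
DG = np.diag(w)
n = G.shape[0]

# Left-hand side of (45): direct inverse.
lhs = np.linalg.inv(np.eye(3 * n) - phi * np.kron(Gamma, G))

# Right-hand side: (A x C) [sum_k phi^k (J^k x DG^k)] (A^{-1} x C^{-1}).
S = sum(phi ** k * np.kron(np.linalg.matrix_power(J, k),
                           np.linalg.matrix_power(DG, k))
        for k in range(400))
rhs = np.kron(A, C) @ S @ np.kron(np.linalg.inv(A), np.linalg.inv(C))

assert np.allclose(lhs, rhs)
```

The off-diagonal block of $\mathbf{J}^k$ is what generates the $\mathbf{M}_1$ term in (47).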
Generalization to an arbitrary information matrix It can be shown that the above conclusion carries over to the more general case of an $s \times s$ matrix $\boldsymbol{\Gamma}$ with any number of defective eigenvalues of arbitrary deficiency (Theorem 2). Indeed, assume that the Jordan form of $\boldsymbol{\Gamma}$ consists of $T$ Jordan blocks $\mathbf{J}_q(\hat{\lambda}_q)$, $q \in \{1, \ldots, T\}$. Using the same technique as above, it can be seen that the $(i,j)$-block of the matrix $[\mathbf{I}_{sn} - \phi(\boldsymbol{\Gamma} \otimes \mathbf{G})]^{-1}$ can be written as:
[I  − (Γ ⊗
where
G)]−1
( )
=
 −1 +

X
X X
=
=1
=
−1 +1 −1 +1
b  G)
   M− (
(−1)
¶
+∞ µ
X

b  G) :=
b−(−)   G
M− (

− 
=0
(57)
(58)
b defined as above. The following result is useful in obtaining the desired
and let    and 
characterization of the equilibrium efforts.23
23 Observe that, in order to keep notation as simple as possible and since there is no risk of confusion, similarly to Lemma 5 we write
$$\mathbf{b}_{u_{h-l}}(\phi\hat{\lambda}, \mathbf{G}) := \mathbf{b}_{u_{h-l}(\phi\hat{\lambda})}(\phi\hat{\lambda}, \mathbf{G})$$
that is, the dependence of the weighting vector $\mathbf{u}_{h-l}$ on $\phi\hat{\lambda}$ is suppressed.
b  ) can be mapped into a vector of u (
b )Lemma 7 For  ∈ , the matrix M (
b ) := (I − )−   G 1 , as follows:
weighted Katz-Bonacich centralities, with u (
b  G)1 = bu (
b  G)
Mh (

(59)
b  G)
Hence, the -th order derivative of the unweighted Katz-Bonacich centrality measure b(
b is still a weighted Katz-Bonacich centrality. More generally, for   ∈ ,
with respect to 
b  G) with respect
the -th order derivative of the weighted Katz-Bonacich centrality u (
b is still a weighted Katz-Bonacich centrality, albeit with a different weight.
to 
Proof: Using (51), we can calculate the second derivative of the Katz-Bonacich centrality measure as follows:
$$\frac{\partial^2 \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}^2} = \frac{\partial\, \phi\mathbf{b}_{u_1}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}} = \frac{\partial}{\partial \hat{\lambda}}\left[\sum_{k=0}^{+\infty} k\phi^k\hat{\lambda}^{k-1}\mathbf{G}^k\mathbf{1}_n\right] = \sum_{k=0}^{+\infty} k(k-1)\phi^k\hat{\lambda}^{k-2}\mathbf{G}^k\mathbf{1}_n = 2\,\mathbf{M}_2(\hat{\lambda}, \mathbf{G})\mathbf{1}_n$$
Similarly, using (52), we get
$$\begin{aligned}
\frac{\partial^2 \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}^2} &= \frac{\partial\, \phi\mathbf{b}_{u_1}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}} = \phi\,\frac{\partial}{\partial \hat{\lambda}}\left[(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{G}\mathbf{1}_n\right] \\
&= \phi\left[\frac{\partial(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}}{\partial\hat{\lambda}}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1} + (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\frac{\partial(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}}{\partial\hat{\lambda}}\right]\mathbf{G}\mathbf{1}_n \\
&= \phi^2\left[(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-2}\mathbf{G}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1} + (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-2}\mathbf{G}\right]\mathbf{G}\mathbf{1}_n \\
&= 2\phi^2(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-2}\mathbf{G}^2\mathbf{1}_n \\
&= 2\phi^2(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{u}_2 \\
&= 2\phi^2\,\mathbf{b}_{u_2}(\phi\hat{\lambda}, \mathbf{G})
\end{aligned}$$
where $\mathbf{u}_2 := (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-2}\mathbf{G}^2\mathbf{1}_n$, and where we repeatedly use the commutation of $(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-h}$ and $\mathbf{G}$ for $h \in \mathbb{N}$ (see Lemma 6).
The general pattern for the $h$-th order derivative of each expression of the Katz-Bonacich centrality $\mathbf{b}(\phi\hat{\lambda}, \mathbf{G})$ now starts to emerge. Using (51), we obtain:
$$\frac{\partial^h \mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial \hat{\lambda}^h} = \sum_{k=0}^{+\infty}\left[\prod_{v=0}^{h-1}(k-v)\right]\phi^k\hat{\lambda}^{k-h}\mathbf{G}^k\mathbf{1}_n = h!\sum_{k=0}^{+\infty}\frac{\prod_{v=0}^{h-1}(k-v)}{h!}\,\phi^k\hat{\lambda}^{k-h}\mathbf{G}^k\mathbf{1}_n = h!\sum_{k=0}^{+\infty}\binom{k}{h}\phi^k\hat{\lambda}^{k-h}\mathbf{G}^k\mathbf{1}_n = h!\,\mathbf{M}_h(\hat{\lambda}, \mathbf{G})\mathbf{1}_n \tag{60}$$
where the last equality follows from the definition of $\mathbf{M}_h$ given in (58). Similarly, starting from (52) and taking into account that $\mathbf{G}$ and $(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-h}$ commute, it can be shown that
$$\frac{\partial^h\mathbf{b}(\phi\hat{\lambda}, \mathbf{G})}{\partial\hat{\lambda}^h} = (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-h-1}\,h!\,\phi^h\mathbf{G}^h\mathbf{1}_n = h!\,\phi^h(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-h}\mathbf{G}^h\mathbf{1}_n = h!\,\phi^h(\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-1}\mathbf{u}_h = h!\,\phi^h\,\mathbf{b}_{u_h}(\phi\hat{\lambda}, \mathbf{G}) \tag{61}$$
where $\mathbf{u}_h := (\mathbf{I}_n - \phi\hat{\lambda}\mathbf{G})^{-h}\mathbf{G}^h\mathbf{1}_n$. Now notice that (60) and (61) imply that
$$\mathbf{M}_h(\hat{\lambda}, \mathbf{G})\mathbf{1}_n = \phi^h\,\mathbf{b}_{u_h}(\phi\hat{\lambda}, \mathbf{G})$$
which is precisely equation (59).
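Equation (59) for $h = 2$ can likewise be verified numerically; the network and parameters are arbitrary admissible values of ours:

```python
import numpy as np
from math import comb

# Illustrative 3-node star network; phi * lam_hat * lam_max(G) < 1.
G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
phi, lam_hat, h = 0.2, 0.8, 2
n = G.shape[0]
ones = np.ones(n)

# M_h(lam_hat, G) 1 per definition (58), truncated.
lhs = sum(comb(k, h) * phi ** k * lam_hat ** (k - h)
          * np.linalg.matrix_power(G, k) @ ones for k in range(h, 300))

# phi^h * b_{u_h}, with u_h = (I - phi*lam_hat*G)^{-h} G^h 1, as in (59).
R = np.linalg.inv(np.eye(n) - phi * lam_hat * G)
u_h = np.linalg.matrix_power(R, h) @ np.linalg.matrix_power(G, h) @ ones
rhs = phi ** h * (R @ u_h)

assert np.allclose(lhs, rhs)
```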
b  G)1 = bu (  G), that is, Mh (
b  G) can be mapped
We have shown that Mh (

b G)−   G 1 .
into a vector of u -weighted Katz-Bonacich centralities, with u := (I − 
Then, substituting (57) into (44) and applying Lemma 7 yields
$$\begin{pmatrix} \vdots \\ \mathbf{x}^*(s_r) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots & \vdots & \vdots \\ \cdots & \sum_{q=1}^{T}\sum_{l=m_{q-1}+1}^{m_q}\sum_{h=m_{q-1}+1}^{m_q} a_{rl}\, a^{(-1)}_{hj}\, \mathbf{M}_{h-l}(\hat{\lambda}_q, \mathbf{G}) & \cdots \\ \vdots & \vdots & \vdots \end{pmatrix}\begin{pmatrix} \vdots \\ \hat{\alpha}_j \mathbf{1}_n \\ \vdots \end{pmatrix}$$
$$= \begin{pmatrix} \vdots \\ \sum_{j=1}^{s}\hat{\alpha}_j\left[\sum_{q=1}^{T}\sum_{l=m_{q-1}+1}^{m_q}\sum_{h=m_{q-1}+1}^{m_q} a_{rl}\, a^{(-1)}_{hj}\, \mathbf{M}_{h-l}(\hat{\lambda}_q, \mathbf{G})\right]\mathbf{1}_n \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ \sum_{j=1}^{s}\hat{\alpha}_j\sum_{q=1}^{T}\sum_{l=m_{q-1}+1}^{m_q}\sum_{h=m_{q-1}+1}^{m_q} a_{rl}\, a^{(-1)}_{hj}\, \phi^{h-l}\,\mathbf{b}_{u_{h-l}}(\phi\hat{\lambda}_q, \mathbf{G}) \\ \vdots \end{pmatrix}$$
and thus expression (34) in Theorem 2 is obtained.
Hence a generalized version of Theorem 1 (given by Theorem 2), applicable to any matrix $\boldsymbol{\Gamma}$, provides an expression for the equilibrium efforts as a linear combination of weighted Katz-Bonacich centralities. In light of the above discussion, deriving such an expression explicitly is conceptually straightforward, albeit quite tedious in terms of algebra and notation, since it must take into account the deficiency of the eigenvalues of $\boldsymbol{\Gamma}$.
A.5 Proofs
Proof of Lemma 1: We have:
$$P(\{s_i = h\}) = P(\{s_i = h\} \mid \{\theta = \theta_H\})\,P(\{\theta = \theta_H\}) + P(\{s_i = h\} \mid \{\theta = \theta_L\})\,P(\{\theta = \theta_L\}) = (1-\varepsilon)p + \varepsilon(1-p)$$
and
$$P(\{s_i = l\}) = P(\{s_i = l\} \mid \{\theta = \theta_H\})\,P(\{\theta = \theta_H\}) + P(\{s_i = l\} \mid \{\theta = \theta_L\})\,P(\{\theta = \theta_L\}) = \varepsilon p + (1-\varepsilon)(1-p)$$
We have:
$$\begin{aligned}
P(\{\theta = \theta_H\} \cap \{s_j = h\} \mid \{s_i = h\}) &= \frac{P(\{s_i = h\} \cap \{s_j = h\} \cap \{\theta = \theta_H\})}{P(\{s_i = h\})} \\
&= \frac{P(\{s_i = h\} \mid \{s_j = h\} \cap \{\theta = \theta_H\})\,P(\{s_j = h\} \mid \{\theta = \theta_H\})\,P(\{\theta = \theta_H\})}{P(\{s_i = h\})} \\
&= \frac{P(\{s_i = h\} \mid \{\theta = \theta_H\})\,P(\{s_j = h\} \mid \{\theta = \theta_H\})\,P(\{\theta = \theta_H\})}{P(\{s_i = h\})} \\
&= \frac{p(1-\varepsilon)^2}{p(1-\varepsilon) + (1-p)\varepsilon}
\end{aligned}$$
and thus
$$P(\{\theta = \theta_H\} \cap \{s_j = l\} \mid \{s_i = h\}) = \frac{p\,\varepsilon(1-\varepsilon)}{p(1-\varepsilon) + (1-p)\varepsilon}$$
$$P(\{\theta = \theta_L\} \cap \{s_j = h\} \mid \{s_i = h\}) = \frac{(1-p)\,\varepsilon^2}{p(1-\varepsilon) + (1-p)\varepsilon}$$
$$P(\{\theta = \theta_L\} \cap \{s_j = l\} \mid \{s_i = h\}) = \frac{(1-p)\,\varepsilon(1-\varepsilon)}{p(1-\varepsilon) + (1-p)\varepsilon}$$
Similarly,
$$P(\{\theta = \theta_H\} \cap \{s_j = h\} \mid \{s_i = l\}) = \frac{p\,\varepsilon(1-\varepsilon)}{p\varepsilon + (1-p)(1-\varepsilon)}$$
$$P(\{\theta = \theta_H\} \cap \{s_j = l\} \mid \{s_i = l\}) = \frac{p\,\varepsilon^2}{p\varepsilon + (1-p)(1-\varepsilon)}$$
$$P(\{\theta = \theta_L\} \cap \{s_j = h\} \mid \{s_i = l\}) = \frac{(1-p)(1-\varepsilon)\varepsilon}{p\varepsilon + (1-p)(1-\varepsilon)}$$
$$P(\{\theta = \theta_L\} \cap \{s_j = l\} \mid \{s_i = l\}) = \frac{(1-p)(1-\varepsilon)^2}{p\varepsilon + (1-p)(1-\varepsilon)}$$
As a result,
$$\gamma_{hh} = P(\{s_j = h\} \mid \{s_i = h\}) = P(\{\theta = \theta_H\} \cap \{s_j = h\} \mid \{s_i = h\}) + P(\{\theta = \theta_L\} \cap \{s_j = h\} \mid \{s_i = h\}) = \frac{(1-p)\varepsilon^2 + p(1-\varepsilon)^2}{p(1-\varepsilon) + (1-p)\varepsilon}$$
$$1 - \gamma_{hh} = P(\{s_j = l\} \mid \{s_i = h\}) = \frac{\varepsilon(1-\varepsilon)}{p(1-\varepsilon) + (1-p)\varepsilon}$$
$$1 - \gamma_{ll} = P(\{s_j = h\} \mid \{s_i = l\}) = P(\{\theta = \theta_H\} \cap \{s_j = h\} \mid \{s_i = l\}) + P(\{\theta = \theta_L\} \cap \{s_j = h\} \mid \{s_i = l\}) = \frac{\varepsilon(1-\varepsilon)}{p\varepsilon + (1-p)(1-\varepsilon)}$$
$$\gamma_{ll} = P(\{s_j = l\} \mid \{s_i = l\}) = \frac{(1-p)(1-\varepsilon)^2 + p\varepsilon^2}{p\varepsilon + (1-p)(1-\varepsilon)}$$
This proves the results.
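The closed forms above can be confirmed by brute-force enumeration of the joint distribution (our illustration, using the binary-state/binary-signal notation of this proof with hypothetical values of $p$ and $\varepsilon$):

```python
from itertools import product

# P(theta_H) = p; each signal equals h with prob. 1-eps in state H and with
# prob. eps in state L, independently across agents conditional on the state.
p, eps = 0.6, 0.2

def prob(theta, si, sj):
    """Joint probability of (theta, s_i, s_j) under conditional independence."""
    ptheta = p if theta == "H" else 1 - p
    def psig(s):
        correct = "h" if theta == "H" else "l"
        return (1 - eps) if s == correct else eps
    return ptheta * psig(si) * psig(sj)

# P(s_i = h) matches the first display of the proof.
P_h = sum(prob(t, "h", s) for t, s in product("HL", "hl"))
assert abs(P_h - (p * (1 - eps) + (1 - p) * eps)) < 1e-12

# gamma_hh = P(s_j = h | s_i = h) matches its closed form.
gamma_hh = sum(prob(t, "h", "h") for t in "HL") / P_h
closed = ((1 - p) * eps ** 2 + p * (1 - eps) ** 2) / (p * (1 - eps) + (1 - p) * eps)
assert abs(gamma_hh - closed) < 1e-12
```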
Proof of Proposition 2: The first-order conditions are given by (22), i.e.
$$\mathbf{x} = [\mathbf{I}_{2n} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G}]^{-1}\hat{\boldsymbol{\alpha}}$$
where
$$\mathbf{x} = \begin{pmatrix} \mathbf{x}_h \\ \mathbf{x}_l \end{pmatrix} \quad \text{and} \quad \hat{\boldsymbol{\alpha}} = \begin{pmatrix} \hat{\alpha}_h \mathbf{1}_n \\ \hat{\alpha}_l \mathbf{1}_n \end{pmatrix}$$
First, let us show that $\mathbf{I}_{2n} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G}$ is non-singular. This is true if $0 < \phi < 1/\lambda_{\max}(\mathbf{G})$ holds. Indeed, since $\boldsymbol{\Gamma}$ is a stochastic matrix and thus $\lambda_{\max}(\boldsymbol{\Gamma}) = 1$, we have $\lambda_{\max}(\boldsymbol{\Gamma} \otimes \mathbf{G}) = \lambda_{\max}(\boldsymbol{\Gamma})\,\lambda_{\max}(\mathbf{G}) = \lambda_{\max}(\mathbf{G})$. Therefore, if $0 < \phi < 1/\lambda_{\max}(\mathbf{G})$, then $\mathbf{I}_{2n} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G}$ is non-singular. This shows that the system above has a unique solution and thus that there exists a unique Bayesian-Nash equilibrium. This solution is interior since we have assumed that $0 < \theta_l < \theta_h$, which implies that $\hat{\alpha}_h > 0$ and $\hat{\alpha}_l > 0$.
Second, to show that the equilibrium action of each agent $i$ is a linear function of the Katz-Bonacich centrality measures $b_i(\phi, \mathbf{G})$ and $b_i((\gamma_{hh} + \gamma_{ll} - 1)\phi, \mathbf{G})$, let us diagonalize the two main matrices $\boldsymbol{\Gamma}$ and $\mathbf{G}$.
Since $\boldsymbol{\Gamma}$ is a $2 \times 2$ stochastic matrix, it can be diagonalized as follows:
$$\boldsymbol{\Gamma} = \mathbf{A}\mathbf{D}_\Gamma\mathbf{A}^{-1}$$
where
$$\mathbf{D}_\Gamma = \begin{pmatrix} \lambda_1(\boldsymbol{\Gamma}) & 0 \\ 0 & \lambda_2(\boldsymbol{\Gamma}) \end{pmatrix}$$
and where $\lambda_1(\boldsymbol{\Gamma}) \geq \lambda_2(\boldsymbol{\Gamma})$ are the eigenvalues of $\boldsymbol{\Gamma}$. Observe that $\boldsymbol{\Gamma}$ is given by
$$\boldsymbol{\Gamma} = \begin{pmatrix} \gamma_{hh} & 1-\gamma_{hh} \\ 1-\gamma_{ll} & \gamma_{ll} \end{pmatrix}$$
It is easily verified that the two eigenvalues are $\lambda_1(\boldsymbol{\Gamma}) \equiv \lambda_{\max}(\boldsymbol{\Gamma}) = 1$ and $\lambda_2(\boldsymbol{\Gamma}) = \gamma_{hh} + \gamma_{ll} - 1$, and that
$$\mathbf{A} = \begin{pmatrix} 1 & -\frac{1-\gamma_{hh}}{1-\gamma_{ll}} \\ 1 & 1 \end{pmatrix} \quad \text{and} \quad \mathbf{A}^{-1} = \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} 1 & \frac{1-\gamma_{hh}}{1-\gamma_{ll}} \\ -1 & 1 \end{pmatrix}$$
Thus
$$\mathbf{D}_\Gamma = \begin{pmatrix} 1 & 0 \\ 0 & \gamma_{hh}+\gamma_{ll}-1 \end{pmatrix} \tag{62}$$
Moreover, the  ×  network adjacency matrix G is symmetric and therefore can be diagonalized as follows:
G = CDG C−1
63
where
⎛
⎜
⎜
DG = ⎜
⎜
⎝
1 (G)
0
..
.
0
0 ···
... ...
... ...
···
0
0
..
.
0
 (G)
⎞
⎟
⎟
⎟
⎟
⎠
(63)
and where max (G) := 1 (G) ≥ 2 (G) ≥  ≥  (G) are the eigenvalues of G.
In this context, the equilibrium system (22) can be written as:
$$\mathbf{x}^* = [\mathbf{I}_{2n} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G}]^{-1}\hat{\boldsymbol{\alpha}}$$
We have:
$$\left[\mathbf{I}_{2n} - \phi\left(\mathbf{A}\mathbf{D}_\Gamma\mathbf{A}^{-1}\right) \otimes \left(\mathbf{C}\mathbf{D}_G\mathbf{C}^{-1}\right)\right]^{-1}\hat{\boldsymbol{\alpha}} = \left[\mathbf{I}_{2n} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1}\hat{\boldsymbol{\alpha}}$$
and
$$\left[\mathbf{I}_{2n} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1} = (\mathbf{A} \otimes \mathbf{C})\left[\sum_{k=0}^{+\infty}\phi^k(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k\right]\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)$$
where we have used the properties of the Kronecker product (see Appendix A.3).
Using (62), we have:
$$\mathbf{D}_\Gamma \otimes \mathbf{D}_G = \begin{pmatrix} \mathbf{D}_G & \mathbf{0} \\ \mathbf{0} & (\gamma_{hh} + \gamma_{ll} - 1)\mathbf{D}_G \end{pmatrix}$$
where $\mathbf{D}_G$ is defined by (63). Thus,
$$(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k = \begin{pmatrix} \mathbf{D}_G^k & \mathbf{0} \\ \mathbf{0} & (\gamma_{hh} + \gamma_{ll} - 1)^k\mathbf{D}_G^k \end{pmatrix}$$
Let us adopt the following notations:
$$\boldsymbol{\Lambda}_1 := \sum_{k=0}^{+\infty} \phi^k \mathbf{D}_G^k \quad \text{and} \quad \boldsymbol{\Lambda}_2 := \sum_{k=0}^{+\infty} \phi^k (\gamma_{hh} + \gamma_{ll} - 1)^k \mathbf{D}_G^k$$
Then,
$$\begin{aligned}
&(\mathbf{A} \otimes \mathbf{C})\left[\sum_{k=0}^{+\infty} \phi^k (\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k\right]\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right) = (\mathbf{A} \otimes \mathbf{C})\begin{pmatrix} \boldsymbol{\Lambda}_1 & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Lambda}_2 \end{pmatrix}\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right) \\
&= \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} \mathbf{C} & -\left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C} \\ \mathbf{C} & \mathbf{C} \end{pmatrix}\begin{pmatrix} \boldsymbol{\Lambda}_1 & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Lambda}_2 \end{pmatrix}\begin{pmatrix} \mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}^{-1} \\ -\mathbf{C}^{-1} & \mathbf{C}^{-1} \end{pmatrix} \\
&= \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} \mathbf{C}\boldsymbol{\Lambda}_1 & -\left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}\boldsymbol{\Lambda}_2 \\ \mathbf{C}\boldsymbol{\Lambda}_1 & \mathbf{C}\boldsymbol{\Lambda}_2 \end{pmatrix}\begin{pmatrix} \mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}^{-1} \\ -\mathbf{C}^{-1} & \mathbf{C}^{-1} \end{pmatrix} \\
&= \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} \mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} + \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\left(\mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} - \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1}\right) \\ \mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} - \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} + \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} \end{pmatrix}
\end{aligned}$$
This implies that:
$$\begin{aligned}
\mathbf{x}^* &= \left[\mathbf{I}_{2n} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1}\hat{\boldsymbol{\alpha}} = (\mathbf{A} \otimes \mathbf{C})\sum_{k \geq 0}\phi^k(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k(\mathbf{A} \otimes \mathbf{C})^{-1}\hat{\boldsymbol{\alpha}} \\
&= \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} \mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} + \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\left(\mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} - \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1}\right) \\ \mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} - \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} & \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1} + \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1} \end{pmatrix}\begin{pmatrix} \hat{\alpha}_h\mathbf{1}_n \\ \hat{\alpha}_l\mathbf{1}_n \end{pmatrix} \\
&= \left(\frac{1-\gamma_{ll}}{2-\gamma_{hh}-\gamma_{ll}}\right)\begin{pmatrix} \hat{\alpha}_h\left[\mathbf{b}(\phi, \mathbf{G}) + \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{b}((\gamma_{hh}+\gamma_{ll}-1)\phi, \mathbf{G})\right] + \left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\hat{\alpha}_l\left[\mathbf{b}(\phi, \mathbf{G}) - \mathbf{b}((\gamma_{hh}+\gamma_{ll}-1)\phi, \mathbf{G})\right] \\ \hat{\alpha}_h\left[\mathbf{b}(\phi, \mathbf{G}) - \mathbf{b}((\gamma_{hh}+\gamma_{ll}-1)\phi, \mathbf{G})\right] + \hat{\alpha}_l\left[\left(\frac{1-\gamma_{hh}}{1-\gamma_{ll}}\right)\mathbf{b}(\phi, \mathbf{G}) + \mathbf{b}((\gamma_{hh}+\gamma_{ll}-1)\phi, \mathbf{G})\right] \end{pmatrix}
\end{aligned}$$
The last equality is obtained by observing that
$$\mathbf{b}(\phi, \mathbf{G}) = \sum_{k=0}^{+\infty}\phi^k\mathbf{G}^k\mathbf{1}_n = \sum_{k=0}^{+\infty}\phi^k\left(\mathbf{C}\mathbf{D}_G\mathbf{C}^{-1}\right)^k\mathbf{1}_n = \mathbf{C}\left[\sum_{k=0}^{+\infty}\phi^k\mathbf{D}_G^k\right]\mathbf{C}^{-1}\mathbf{1}_n = \mathbf{C}\boldsymbol{\Lambda}_1\mathbf{C}^{-1}\mathbf{1}_n$$
and
$$\mathbf{b}((\gamma_{hh}+\gamma_{ll}-1)\phi, \mathbf{G}) = \sum_{k=0}^{+\infty}\left((\gamma_{hh}+\gamma_{ll}-1)\phi\right)^k\mathbf{G}^k\mathbf{1}_n = \mathbf{C}\left[\sum_{k=0}^{+\infty}\left((\gamma_{hh}+\gamma_{ll}-1)\phi\right)^k\mathbf{D}_G^k\right]\mathbf{C}^{-1}\mathbf{1}_n = \mathbf{C}\boldsymbol{\Lambda}_2\mathbf{C}^{-1}\mathbf{1}_n$$
By rearranging the terms, we obtain the equilibrium values given in the Proposition.
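The closed form just derived can be checked against a direct solution of the first-order conditions; the values of $\gamma_{hh}$, $\gamma_{ll}$, $\hat{\alpha}_h$, $\hat{\alpha}_l$, $\phi$ and the network below are arbitrary admissible choices of ours:

```python
import numpy as np

g_hh, g_ll = 0.7, 0.6            # signal transition probabilities
a_h, a_l = 1.0, 0.5              # alpha-hat terms (illustrative)
phi = 0.2                        # phi * lam_max(G) < 1

G = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
n = G.shape[0]
ones = np.ones(n)

Gamma = np.array([[g_hh, 1 - g_hh],
                  [1 - g_ll, g_ll]])

# Direct solution of the first-order conditions (22).
alpha = np.concatenate([a_h * ones, a_l * ones])
x_direct = np.linalg.solve(np.eye(2 * n) - phi * np.kron(Gamma, G), alpha)

# Closed form via the two Katz-Bonacich centralities.
lam2 = g_hh + g_ll - 1
b1 = np.linalg.solve(np.eye(n) - phi * G, ones)          # b(phi, G)
b2 = np.linalg.solve(np.eye(n) - lam2 * phi * G, ones)   # b(lam2*phi, G)
rho = (1 - g_hh) / (1 - g_ll)
c = (1 - g_ll) / (2 - g_hh - g_ll)
x_h = c * (a_h * (b1 + rho * b2) + rho * a_l * (b1 - b2))
x_l = c * (a_h * (b1 - b2) + a_l * (rho * b1 + b2))

assert np.allclose(x_direct, np.concatenate([x_h, x_l]))
```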
Proof of Theorem 1: The proof is relatively similar to that of Proposition 2. Indeed, $\boldsymbol{\Gamma}$ can be diagonalized as:
$$\boldsymbol{\Gamma} = \mathbf{A}\mathbf{D}_\Gamma\mathbf{A}^{-1}, \quad \text{where } \mathbf{D}_\Gamma = \begin{pmatrix} \lambda_1(\boldsymbol{\Gamma}) & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_s(\boldsymbol{\Gamma}) \end{pmatrix}$$
In this formulation, $\mathbf{A}$ is an $s \times s$ matrix whose $q$-th column is the eigenvector corresponding to the $q$-th eigenvalue. Let us adopt the following notations:
$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1s} \\ \vdots & \ddots & \vdots \\ a_{s1} & \cdots & a_{ss} \end{pmatrix} \quad \text{and} \quad \mathbf{A}^{-1} = \begin{pmatrix} a^{(-1)}_{11} & \cdots & a^{(-1)}_{1s} \\ \vdots & \ddots & \vdots \\ a^{(-1)}_{s1} & \cdots & a^{(-1)}_{ss} \end{pmatrix}$$
where $a^{(-1)}_{qj}$ is the $(q,j)$ cell of the matrix $\mathbf{A}^{-1}$.
The network adjacency matrix $\mathbf{G}$ is symmetric and, therefore, it can also be diagonalized as:
$$\mathbf{G} = \mathbf{C}\mathbf{D}_G\mathbf{C}^{-1}, \quad \text{where } \mathbf{D}_G = \begin{pmatrix} \lambda_1(\mathbf{G}) & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n(\mathbf{G}) \end{pmatrix}$$
We can make use of the Kronecker product to rewrite the equilibrium system of the Bayesian game as:
$$\begin{pmatrix} \mathbf{x}(s_1) \\ \vdots \\ \mathbf{x}(s_s) \end{pmatrix} = (\mathbf{I}_{sn} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G})^{-1}\begin{pmatrix} \hat{\alpha}_1\mathbf{1}_n \\ \vdots \\ \hat{\alpha}_s\mathbf{1}_n \end{pmatrix}$$
Applying the properties of the Kronecker product, this system becomes
$$\begin{pmatrix} \mathbf{x}(s_1) \\ \vdots \\ \mathbf{x}(s_s) \end{pmatrix} = \left[\mathbf{I}_{sn} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1}\begin{pmatrix} \hat{\alpha}_1\mathbf{1}_n \\ \vdots \\ \hat{\alpha}_s\mathbf{1}_n \end{pmatrix}$$
First, let us show that $\mathbf{I}_{sn} - \phi\,\boldsymbol{\Gamma} \otimes \mathbf{G}$ is non-singular. This is true if $\phi\,\lambda_{\max}(\boldsymbol{\Gamma} \otimes \mathbf{G}) = \phi\,\lambda_{\max}(\boldsymbol{\Gamma})\,\lambda_{\max}(\mathbf{G}) < 1$. Since $\boldsymbol{\Gamma}$ is stochastic, we have $\lambda_{\max}(\boldsymbol{\Gamma}) = 1$, and this condition boils down to $0 < \phi < 1/\lambda_{\max}(\mathbf{G})$. This shows that the system above has a unique solution and thus that there exists a unique Bayesian-Nash equilibrium. This solution is interior since we have assumed that $\theta_1 > 0, \ldots, \theta_s > 0$, which implies that $\hat{\alpha}_1 > 0, \ldots, \hat{\alpha}_s > 0$.
Second, let us show that the equilibrium effort of each agent is a linear function of Katz-Bonacich centrality measures. We have
$$\left[\mathbf{I}_{sn} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1} = (\mathbf{A} \otimes \mathbf{C})\left[\sum_{k=0}^{+\infty}\phi^k(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k\right]\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)$$
We also have that $(\mathbf{A} \otimes \mathbf{C})$, $(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)$ and $(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1})$ are each $sn \times sn$ matrices, with
$$\mathbf{A} \otimes \mathbf{C} = \begin{pmatrix} a_{11}\mathbf{C} & \cdots & a_{1s}\mathbf{C} \\ \vdots & \ddots & \vdots \\ a_{s1}\mathbf{C} & \cdots & a_{ss}\mathbf{C} \end{pmatrix}, \qquad \mathbf{A}^{-1} \otimes \mathbf{C}^{-1} = \begin{pmatrix} a^{(-1)}_{11}\mathbf{C}^{-1} & \cdots & a^{(-1)}_{1s}\mathbf{C}^{-1} \\ \vdots & \ddots & \vdots \\ a^{(-1)}_{s1}\mathbf{C}^{-1} & \cdots & a^{(-1)}_{ss}\mathbf{C}^{-1} \end{pmatrix}$$
and
$$\sum_{k \geq 0}\phi^k(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k = \begin{pmatrix} \sum_{k \geq 0}\phi^k\lambda_1(\boldsymbol{\Gamma})^k\mathbf{D}_G^k & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \mathbf{0} \\ \mathbf{0} & \cdots & \mathbf{0} & \sum_{k \geq 0}\phi^k\lambda_s(\boldsymbol{\Gamma})^k\mathbf{D}_G^k \end{pmatrix}$$
It is easily verified that
$$(\mathbf{A} \otimes \mathbf{C})\sum_{k \geq 0}\phi^k(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)^k\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right) = \begin{pmatrix} \sum_{q=1}^{s}a_{1q}a^{(-1)}_{q1}\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G}) & \cdots & \sum_{q=1}^{s}a_{1q}a^{(-1)}_{qs}\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G}) \\ \vdots & \ddots & \vdots \\ \sum_{q=1}^{s}a_{sq}a^{(-1)}_{q1}\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G}) & \cdots & \sum_{q=1}^{s}a_{sq}a^{(-1)}_{qs}\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G}) \end{pmatrix}$$
where
$$\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G}) = \mathbf{C}\left[\sum_{k \geq 0}\phi^k\lambda_q(\boldsymbol{\Gamma})^k\mathbf{D}_G^k\right]\mathbf{C}^{-1}$$
is an $n \times n$ matrix. Observe that
$$\boldsymbol{\Lambda}(\lambda_q(\boldsymbol{\Gamma}), \mathbf{G})\mathbf{1}_n = \mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G})$$
is an $n \times 1$ vector of Katz-Bonacich centrality measures. For $r = 1, \ldots, s$, define:
$$\hat{\alpha}_r = \mathbb{E}\left[\theta \mid \{s_i = s_r\}\right] = \sum_{t=1}^{s}P(\{\theta = \theta_t\} \mid \{s_i = s_r\})\,\theta_t$$
Then, the result of the product
$$\left[\mathbf{I}_{sn} - \phi(\mathbf{A} \otimes \mathbf{C})(\mathbf{D}_\Gamma \otimes \mathbf{D}_G)\left(\mathbf{A}^{-1} \otimes \mathbf{C}^{-1}\right)\right]^{-1}\begin{pmatrix} \hat{\alpha}_1\mathbf{1}_n \\ \vdots \\ \hat{\alpha}_s\mathbf{1}_n \end{pmatrix}$$
depends on products of the form
$$\left(\sum_{q=1}^{s}a_{rq}a^{(-1)}_{qj}\,\mathbf{C}\left[\sum_{k \geq 0}(\lambda_q(\boldsymbol{\Gamma})\phi)^k\mathbf{D}_G^k\right]\mathbf{C}^{-1}\right)\hat{\alpha}_j\mathbf{1}_n = \hat{\alpha}_j\sum_{q=1}^{s}a_{rq}a^{(-1)}_{qj}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G})$$
which is a  ×1 vector with entries equal to the combinations of the Katz-Bonacich centrality
measures  ( (Γ)  G),  = 1      . In other words, the equilibrium effort of each agent
is:
$$\begin{pmatrix} \mathbf{x}^*(s_1) \\ \vdots \\ \mathbf{x}^*(s_r) \\ \vdots \\ \mathbf{x}^*(s_s) \end{pmatrix} = \begin{pmatrix} \hat{\alpha}_1\sum_{q=1}^{s}a_{1q}a^{(-1)}_{q1}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) + \cdots + \hat{\alpha}_s\sum_{q=1}^{s}a_{1q}a^{(-1)}_{qs}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) \\ \vdots \\ \hat{\alpha}_1\sum_{q=1}^{s}a_{rq}a^{(-1)}_{q1}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) + \cdots + \hat{\alpha}_s\sum_{q=1}^{s}a_{rq}a^{(-1)}_{qs}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) \\ \vdots \\ \hat{\alpha}_1\sum_{q=1}^{s}a_{sq}a^{(-1)}_{q1}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) + \cdots + \hat{\alpha}_s\sum_{q=1}^{s}a_{sq}a^{(-1)}_{qs}\,\mathbf{b}(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) \end{pmatrix}$$
This means, in particular, that if individual $i$ receives signal $s_r$, then her effort is given by:
$$x_i^*(s_r) = \hat{\alpha}_1\sum_{q=1}^{s}a_{rq}a^{(-1)}_{q1}\,b_i(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) + \cdots + \hat{\alpha}_s\sum_{q=1}^{s}a_{rq}a^{(-1)}_{qs}\,b_i(\lambda_q(\boldsymbol{\Gamma})\phi, \mathbf{G}) \tag{64}$$
Hence, we obtain the final expressions of equilibrium efforts as linear functions of own
Katz-Bonacich centrality measures.
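Expression (64) can be verified numerically for a diagonalizable stochastic $\boldsymbol{\Gamma}$; the transition matrix, payoff terms, network and $\phi$ below are arbitrary admissible values of ours:

```python
import numpy as np

# Illustrative s = 3 row-stochastic information matrix with distinct
# eigenvalues (1, 0.5, 0.2), hence diagonalizable.
Gamma = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.3, 0.6]])
alpha_hat = np.array([1.0, 0.8, 0.5])
phi = 0.2

G = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
n, s = G.shape[0], Gamma.shape[0]
ones = np.ones(n)

# Direct equilibrium from the stacked system.
stacked = np.kron(alpha_hat, ones)
x = np.linalg.solve(np.eye(s * n) - phi * np.kron(Gamma, G), stacked)

# Closed form (64): x*(s_r) = sum_j alpha_hat_j sum_q a_rq a^(-1)_qj b(lam_q*phi, G).
lam, A = np.linalg.eig(Gamma)
Ainv = np.linalg.inv(A)
b = [np.linalg.solve(np.eye(n) - lam[q] * phi * G, ones) for q in range(s)]
for r in range(s):
    xr = sum(alpha_hat[j] * sum(A[r, q] * Ainv[q, j] * b[q] for q in range(s))
             for j in range(s))
    assert np.allclose(x[r * n:(r + 1) * n], np.real(xr))
```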
Proof of Theorem 4: First, let us prove that this system has a unique solution, and that there therefore exists a unique Bayesian-Nash equilibrium of this game. The system has one and only one well-defined and non-negative solution if and only if $\phi\,\theta_{\max} < 1/\lambda_{\max}(\tilde{\boldsymbol{\Gamma}} \otimes \mathbf{G})$. Since $\lambda_{\max}(\tilde{\boldsymbol{\Gamma}} \otimes \mathbf{G}) = \lambda_{\max}(\tilde{\boldsymbol{\Gamma}})\,\lambda_{\max}(\mathbf{G})$, the result follows.
Let us now characterize the equilibrium. Assumptions 1 and 4 guarantee that $\tilde{\boldsymbol{\Gamma}}$ is well-defined, symmetric and thus diagonalizable. As a result, the proof is analogous to that for the case with incomplete information on $\theta$ (see the proof of Theorem 1).
Proof of Remark 2: Define a new matrix $\bar{\boldsymbol{\Gamma}} = (\bar{\gamma}_{rr'})$ as follows:
$$\bar{\gamma}_{rr'} = \sum_{t=1}^{s}P(\{s_j = s_{r'}\} \cap \{\theta = \theta_t\} \mid \{s_i = s_r\})$$
By definition, we have that $\tilde{\gamma}_{rr'} \leq \bar{\gamma}_{rr'}$ for all $r, r'$, and $\sum_{r'=1}^{s}\bar{\gamma}_{rr'} = 1$ for all $r$. This means that $\mathbf{0} \leq \tilde{\boldsymbol{\Gamma}} \leq \bar{\boldsymbol{\Gamma}}$ (elementwise inequalities). This implies that $\lambda_{\max}(\tilde{\boldsymbol{\Gamma}}) \leq \lambda_{\max}(\bar{\boldsymbol{\Gamma}}) = 1$. Thus, a sufficient condition for $\mathbf{I}_{sn} - \phi\,\theta_{\max}\bar{\boldsymbol{\Gamma}} \otimes \mathbf{G}$ to be non-singular is $\phi\,\theta_{\max} < 1/\lambda_{\max}(\mathbf{G})$, and the result follows.
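The spectral-dominance step ($\mathbf{0} \leq \tilde{\boldsymbol{\Gamma}} \leq \bar{\boldsymbol{\Gamma}}$ implies $\lambda_{\max}(\tilde{\boldsymbol{\Gamma}}) \leq \lambda_{\max}(\bar{\boldsymbol{\Gamma}}) = 1$) rests on Perron-Frobenius monotonicity and can be illustrated with random matrices (our sketch, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cbar: a random row-stochastic matrix, so its spectral radius is 1.
Cbar = rng.random((4, 4))
Cbar /= Cbar.sum(axis=1, keepdims=True)

# Btil: elementwise dominated, 0 <= Btil <= Cbar.
Btil = Cbar * rng.uniform(0.3, 1.0, size=(4, 4))

lam_B = max(abs(np.linalg.eigvals(Btil)))
lam_C = max(abs(np.linalg.eigvals(Cbar)))
assert abs(lam_C - 1) < 1e-9        # stochastic => spectral radius 1
assert lam_B <= lam_C + 1e-12       # monotonicity of the spectral radius
```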