The Role of Public and Private Information in Sequential Coordination Games
Bachelor Thesis
Christian Schäfer
Department of Economics
Berlin Institute of Technology
Berlin, April 2009
Contents

1  Introduction . . . 1

2  Games of regime change . . . 3
   2.1  Coordination games . . . 3
        2.1.1  Modeling coordination problems . . . 3
        2.1.2  Simple games of regime change . . . 3
        2.1.3  Multiple equilibria and common knowledge . . . 5
        2.1.4  The role of uncertainty . . . 5
   2.2  Applications . . . 6
        2.2.1  Currency pegs and speculation . . . 7
        2.2.2  Bank runs and solvency . . . 7
        2.2.3  Debt crises and refinancing . . . 8
        2.2.4  Riots and political change . . . 8
   2.3  The global game model . . . 9
        2.3.1  Modeling incomplete information . . . 9
        2.3.2  Payoffs and dominant strategies . . . 10
        2.3.3  Monotone equilibria . . . 11
        2.3.4  Iterated dominance . . . 13

3  Equilibria and Learning . . . 17
   3.1  The static global game model . . . 17
        3.1.1  Information and signals . . . 17
        3.1.2  Indifference conditions . . . 18
        3.1.3  Uniqueness of equilibria . . . 19
   3.2  The dynamic global game model . . . 20
        3.2.1  Introduction to the model . . . 20
        3.2.2  Information, signals, and learning . . . 21
        3.2.3  Indifference conditions . . . 22
        3.2.4  Uniqueness of equilibria . . . 25
   3.3  Games with non-deterministic status quo . . . 27
        3.3.1  Observable shocks . . . 27
        3.3.2  Unobservable shocks . . . 28
        3.3.3  Random walk status quo . . . 29

4  Bayesian updating . . . 33
   4.1  Bayesian statistics . . . 33
        4.1.1  Subjective probability . . . 33
        4.1.2  Bayesian inference . . . 34
        4.1.3  Prior and posterior distributions . . . 34
   4.2  Updating from Gaussian signals . . . 35
        4.2.1  Normal prior and normal signals . . . 35
        4.2.2  Independent normal signals . . . 38
        4.2.3  Dependent signals . . . 39
   4.3  Sufficiency . . . 42
        4.3.1  Notion of sufficiency . . . 42
        4.3.2  Sufficient statistics in the global game model . . . 42

A  Notation and Symbols . . . 45

Bibliography . . . 48
Chapter 1
Introduction
Coordination games of regime change have numerous economic applications, such as currency crises, bank runs, and debt crises. Unfortunately, these games have multiple pure-strategy Nash equilibria, which prevents comparative-static analyses and meaningful interpretations of the results. Morris and Shin (1998) present a coordination game model with incomplete information, called the global game model, which enforces a unique equilibrium. Although Morris and Shin resolve the problem of multiple equilibria, their model lacks many features of real-world markets. Angeletos, Hellwig, and Pavan (2007), however, successfully extend the global game approach to a dynamic, multi-period model with an endogenous learning component. In this Bachelor thesis we give an introduction to coordination games with incomplete information and review the major results obtained by Angeletos, Hellwig, and Pavan.
Scope and structure  The thesis is organized as follows.

• In chapter 2 we give a brief introduction to coordination games of regime change and the role of information for the multiplicity of Nash equilibria. We discuss the so-called global game approach, reviewing some results and typical applications from related literature.

• In chapter 3 we first review the one-period, static game of regime change and then present some results concerning the multi-period, dynamic version, drawing strongly upon "Dynamic Global Games of Regime Change: Learning, Multiplicity, and Timing of Attacks" by Angeletos, Hellwig, and Pavan (2007).

• In chapter 4 we give a brief introduction to some fundamentals of Bayesian statistics that are crucial to the global game model. In particular, we introduce and prove some results that are used without further explanation by Angeletos, Hellwig, and Pavan. We explain the Bayesian reasoning against the backdrop of the model discussed in chapter 3.
Before and while reading, we recommend reviewing the notational appendix
at the end of the thesis.
Acknowledgments I am thankful to Prof. Dr. Frank Heinemann for supervising this Bachelor thesis, to Philipp J. König for giving me valuable feedback and patiently answering my questions on global games, and finally to
my father for proof-reading this work.
Chapter 2
Games of regime change
2.1 Coordination games

2.1.1 Modeling coordination problems
Coordination games formalize the coordination problems that arise in situations in which all agents can realize mutual gains, but only by taking mutually consistent actions. The most celebrated examples from introductory game theory, like the Stag Hunt or the Battle of the Sexes, already capture the major problem: coordination games typically exhibit multiple pure-strategy Nash equilibria; see e.g. Fudenberg and Tirole (1991). In this thesis we concentrate solely on the analysis of coordination games of regime change, which are the game-theoretic backbone of some important models in economics and social science. For an overview of relevant applications, see section 2.2.
The name originates from a setting where the model world has exactly two
states: the status quo and an alternative. If sufficiently many agents attack the
status quo, a regime change occurs and the world takes the alternative state.
Otherwise, the status quo persists. The attacking agents draw a benefit from
successful attacks and receive a penalty if they fail to change the status quo.
The arising decision problem for the individual agent stems from the uncertainty about the other agents’ actions. The individual agent attacks if and only
if she expects that sufficiently many other players attack as well.
2.1.2 Simple games of regime change
Now we formalize the game described in the preceding section. Suppose there
is a continuum of identical risk-neutral agents indexed by i ∈ [0, 1]. All agents
simultaneously decide to either attack the status quo at cost c > 0 or refrain
from attacking.
              change    no change
  attack       1 − c       −c
  refrain        0           0
Let a_i ∈ {0, 1} denote the action taken by agent i ∈ [0, 1], where a_i = 1 means that agent i chooses to attack the status quo. Hence,

    λ := ∫_[0,1] a_i di ∈ [0, 1]

denotes the fraction of agents who attack. We assume that the status quo is abandoned if and only if

    λ ≥ θ,    (critical mass condition)

where θ ∈ R is some real value which parameterizes the strength of the status quo. We rewrite the payoff matrix in terms of λ and θ:
              λ ≥ θ     λ < θ
  a_i = 1     1 − c      −c
  a_i = 0       0          0
Strategic complements  Note that the payoff u_i of agent i ∈ [0, 1] is monotonically non-decreasing in the other agents' strategies. We can write the payoff function as u_i(λ) = a_i (1_[θ,∞)(λ) − c). When agent i attacks, her payoff does not decrease if more agents j ≠ i attack; when agent i does not attack, her payoff always equals zero anyway. Thus, the coordination game of regime change is one of strategic complements.
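The payoff matrix above can be translated into a few lines of Python as a purely illustrative sketch (the parameter values in the checks are hypothetical, not part of the model); the final assertion confirms the strategic-complements property for one such parameter pair:

```python
def payoff(a_i, lam, theta, c):
    """Payoff of agent i: 1 - c if she attacks and the attack succeeds
    (lam >= theta), -c if she attacks and it fails, 0 if she refrains."""
    if a_i == 0:
        return 0.0
    return (1.0 - c) if lam >= theta else -c

# strategic complements: an attacker's payoff is non-decreasing in lam
c, theta = 0.3, 0.5
assert payoff(1, 0.4, theta, c) <= payoff(1, 0.6, theta, c)
assert payoff(1, 0.6, theta, c) == 1 - c
```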
Dominance regions  Note that for θ ∉ (0, 1] the coordination game is rather trivial:

• For θ ≤ 0 the regime change always occurs, even if no agent attacks.

• For θ > 1 the regime change never occurs, even if all agents mutually attack.

Hence, there are dominance regions, (−∞, 0] and (1, ∞), where it is a dominant strategy to attack or to refrain from attacking, respectively, no matter which actions the other agents decide to play.
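The triviality of the game outside (0, 1] follows directly from the critical mass condition λ ≥ θ together with λ ∈ [0, 1]; a minimal Python check (with hypothetical values of θ) makes this concrete:

```python
def regime_falls(lam, theta):
    """Critical mass condition: the status quo is abandoned iff lam >= theta."""
    return lam >= theta

# theta <= 0: the regime falls even if nobody attacks, so attacking is dominant
assert regime_falls(0.0, -0.5)
# theta > 1: the regime survives even a full attack, so refraining is dominant
assert not regime_falls(1.0, 1.2)
```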
2.1.3 Multiple equilibria and common knowledge
Definition 2.1. A game Γ is called a game of common knowledge if it holds that
at all states of the game all agents know the state of Γ, and they all know that
they know the state of Γ, and they all know that they all know the state of Γ,
and so on ad infinitum.
If the coordination game is one of common knowledge, all agents know the threshold value θ, and all agents know that they know θ, and so on. For intermediate values θ ∈ (0, 1] the status quo is stable but vulnerable to sufficiently large attacks. Multiple equilibria are sustained by the following self-fulfilling expectations:

• The individual agent expects everyone else to attack. Hence she finds it optimal to participate in the attack, the status quo is abandoned, and the expectations are fulfilled.

• The individual agent expects no one to attack. Hence she finds it optimal to refrain from attacking, the status quo persists, and the expectations are fulfilled.

If we assume common knowledge, the agents can perfectly forecast each other's behavior and coordinate on multiple equilibria if θ ∈ (0, 1]. In the following section we see that in games without common knowledge a unique equilibrium is possible. Thus, the multiplicity of equilibria seems an unintended result of the assumption that the game is one of common knowledge.
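Both self-fulfilling outcomes can be verified mechanically: under common knowledge of an intermediate θ, the individual best response depends only on the aggregate attack size λ, which a single atomless agent cannot move. The following sketch, with hypothetical values for θ and c, confirms that "everyone attacks" and "no one attacks" are both self-enforcing:

```python
def best_response(lam, theta, c):
    """Optimal action when the aggregate attack size is lam (an individual
    deviation does not move lam, since agents form a continuum)."""
    attack_payoff = (1 - c) if lam >= theta else -c
    return 1 if attack_payoff > 0 else 0

theta, c = 0.5, 0.3  # intermediate strength: theta in (0, 1]
assert best_response(1.0, theta, c) == 1  # "all attack" is self-enforcing
assert best_response(0.0, theta, c) == 0  # "no one attacks" is self-enforcing
```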
2.1.4 The role of uncertainty
Any model is a simplification of a real-world problem and is mostly based on assumptions that are made to ensure mathematical and theoretical tractability. Most economists will agree that in most game-theoretic models the agents are overly rational and extremely well informed compared to real-world decision makers. One way to design more realistic models is to introduce uncertainty into the agents' decision process. We distinguish between incomplete and imperfect information:
Definition 2.2. A game Γ is called a game of complete information if the structure of Γ is known to all agents, otherwise it is called incomplete.
Definition 2.3. A game Γ is called a game of perfect information if all agents
observe all other agents’ actions, otherwise it is called imperfect.
Carlsson and van Damme (1993) analyze incomplete information games, so-called global games, which are based on a slight perturbation of the players' perception of the game. The name stems from the idea that the original game is embedded in a "global" class of games from which one game is randomly drawn. Surprisingly, this model extension forces agents in 2 × 2 games to coordinate on a unique equilibrium by iterated deletion of dominated strategies, even if the game is one with two strict Nash equilibria. Simply put, the higher-order uncertainty, i.e. the uncertainty about the other agents' state of knowledge, enforces the unique equilibrium. For a comprehensive survey of global game theory and its importance for avoiding multiple equilibria, we refer to Morris and Shin (2003) or Heinemann (2005).
The global game approach developed by Carlsson and van Damme has also been applied successfully to coordination games of regime change. Morris and Shin (1998) show that the introduction of noisy, private information about the status quo θ indeed yields a unique equilibrium in coordination games of regime change. Instead of observing the exact value of θ, each agent has to infer the status quo from noisy, private signals, which enforces a unique equilibrium. These original results due to Morris and Shin have since been broadened and reviewed by several authors, see e.g. Heinemann and Illing (2002) or Hellwig (2002).
When agents receive noisy signals about the unknown state, who is the sender of these signals? In games of incomplete information it is common to introduce Nature as an additional player. We can even transform a game of incomplete information into a game of imperfect information where the moves of Nature are partially unobserved (see e.g. Holler and Illing, 2005, section 2.5).
We assume that all agents know the game: they know that they have to infer
the information about θ from noisy, private signals, they know that the other
agents also infer their information about θ from noisy, private signals, and they
even know the exact technology of the noise.
Nevertheless, the game is not one of common knowledge since an individual agent does not know the state of the game after Nature has sent the private
signals. Even if the noise is arbitrarily low, it suffices to induce strategic uncertainty: As the noise vanishes, the individual indeed knows the exact value
of the status quo θ, but still she cannot be sure that the other agents will coordinate on the same equilibrium. In a game of common knowledge, on the
contrary, there is no strategic uncertainty since each agent can perfectly forecast the other agents’ strategies.
2.2 Applications

The framework we developed for coordination games of regime change can capture the coordination problems in a variety of interesting applications. Before we further formalize the game and analyze the equilibrium characteristics, we give a brief overview of economic and social models that are based on this version of the coordination game.
2.2.1 Currency pegs and speculation
The most celebrated application of coordination games is probably the modeling of self-fulfilling currency crises, originally due to Obstfeld (1996) and enriched by Morris and Shin (1998) using a global game approach: A central bank fixes the exchange rate of its domestic currency to a key currency. The central bank maintains the exchange rate target by selling or buying the key currency. If the central bank runs out of reserves of the foreign currency, the ability to defend the peg is lost, and the domestic currency devalues.

Speculators anticipating that the central bank is short of foreign currency borrow domestic money and convert it into the key currency. If the peg is abandoned and the domestic currency devalues, the speculators can repurchase the domestic currency at a lower exchange rate and pay off their debts while still making a profit; if the central bank successfully defends the peg, the speculators make no profit but face interest and opportunity costs.
In the terminology of coordination games of regime change, the status quo
is the currency peg and a large number of agents are deciding whether to attack the currency or not. The central bank will abandon the currency peg if
and only if sufficiently many speculators attack.
2.2.2 Bank runs and solvency
Goldstein and Pauzner (2005) apply the global game technique to a model of self-fulfilling bank runs, originally developed by Diamond and Dybvig (1983), where the status quo is the normal banking business and a large number of depositors decide whether to withdraw their deposits or not. Since the bank only holds limited monetary resources, it will suspend its payments if sufficiently many customers demand their deposits.
It is a natural feature of modern banking that a commercial bank can never pay out all depositors simultaneously. In normal business, only a small fraction of customers withdraw their deposits at a time, and by virtue of the multitude of clients, the bank is not obliged to have much cash available. If customers believe, however, that there is going to be a banking crisis, they all might suddenly want to withdraw their deposits.

If a sufficiently large fraction of depositors withdraw their deposits, the bank goes into default, which leads to a banking crisis. We refer to these phenomena as self-fulfilling prophecies since the customers' predictions indirectly cause themselves to come true. In contrast, if depositors do not expect a banking crisis, they do not withdraw their deposits, and the crisis never arises. If the individual customer chooses not to act and the crisis occurs, her assets are at risk; if the customer withdraws all deposits and the crisis does not occur, she faces unnecessary liquidity costs.
2.2.3 Debt crises and refinancing
Morris and Shin (2004) present a coordination game model of self-fulfilling debt
crises where the status quo is a project financed by a group of creditors. The
status quo is abandoned if the borrower fails to obtain refinancing by a sufficiently large fraction of the creditors. In particular, Morris and Shin point out
that the creditors’ coordination failure will also be reflected in the price of the
debt.
Investments of considerable size are rarely financed by a single institution,
and most projects are maintained by several creditors such that a coordination problem arises: A creditor stops refinancing a project if she fears that the
project will not pay in full at maturity. Even if the project is operable and will
pay off, each creditor fears that the other investors could foreclose on the loan,
partially liquidate the assets and thus let the project break down.
If a sufficiently large fraction of the creditors expects a failure and revokes
its support, the borrower is compelled to sell assets and the inefficient liquidation will ruin the project although its fundamentals are sound. Fears about
the future refinancing of a project lead to self-fulfilling prophecies since the
creditors’ expectations of failure make themselves come true. In contrast, if
all investors believe in the profitability of the project, refinancing is easily obtained and the project meets with success.
2.2.4 Riots and political change
Atkeson (2000) reviews the models discussed by Morris and Shin and criticizes
them for applying the global game model to situations where the assumptions
do not meet with economic reality. In a dynamic market economy, Atkeson
argues, prices contain the aggregate information and serve precisely to coordinate actions, aligning the individual agents’ beliefs. Nevertheless, Atkeson does not reject the global game model, but proposes another application,
which fits the assumptions more naturally:
We consider a crowd facing a patrol of the riot police in the street. Individuals in the crowd must decide whether to riot or not. If enough people
riot, the riot police are overwhelmed, and each rioter may loot the surrounding shops; if too few people riot, the riot police contain the riot, and the rioters
get arrested. The potential rioters may or may not overwhelm the police force
depending on the number of the rioters and the strength of the police force.
Indeed, this non-economic setting is captured well by the global game
model and similar social situations are numerous. Consider the call for an
unauthorized strike: If enough workers stay home, the factory closes down; if
too few workers participate, the production carries on, and the wildcat strikers face dire consequences. Think of demonstrators under a dictatorial regime:
If enough people join the demonstrations, the regime has to give in; if too
few people participate, the demonstrators are detained. Consider companies
buying a network technology: If enough companies buy the technology, it becomes a standard and pays off; if too few companies buy the technology, it
turns out to be useless.
2.3 The global game model
In this section we formalize the coordination games of regime change with
incomplete information. In order to distinguish them from the simple games
of regime change in section 2.1.2, we refer to the coordination games of regime
change with incomplete information as global game models.
2.3.1 Modeling incomplete information
The global game model is constructed to go along with Bayesian decision theory. For a brief introduction to the basic concepts of Bayesian statistics, the
reader is referred to section 4.1.
Public information The status quo θ ∈ R is a determined real number that
is not random and results from the underlying economic or social model. In
the global game model, however, the agents are ignorant of the status quo and
every value is equally likely. First, the agents all receive the same noisy signal
Z := θ + ζ
where ζ is some real-valued random term with a symmetric density and finite
variance. The agents observe the signal Z = z and conclude that
θ = z − ζ.
In a classical, frequentist statistical approach this does not make θ a random variable, since objectively θ is still not a mapping from a probability space to R. From the perspective of the Bayesian decision maker, however, θ is indeed a random variable since subjectively θ can only be observed through a random noise term. In the sequel, we analyze the game from a Bayesian agent's perspective and thus treat θ as a random variable; in order to distinguish the random and the deterministic term we write Θ for the random variable and θ for its realization (note that from a frequentist point of view, however, θ is the true value of the status quo and not the realization of some random variable Θ). Some authors cut this whole reasoning short by just stating that Nature draws the status quo Θ from a distribution which is common knowledge.
Private information  The game would still be one of common knowledge if the agents only received a public signal, since all agents would know the state of the game and be aware that the other agents also know. In addition to the commonly observed public signal Z, however, Nature sends a private signal X_i to each agent i ∈ [0, 1] which is not observed by any other agent j ≠ i.

The Bayesian agents update the common prior distribution obtained from the public signal with their respective private signals; see section 4.1.3 for the role of prior and posterior distributions. We write Y_i = Y(z, X_i) for the statistic which agent i uses to infer the unknown status quo θ from the signals. Plainly speaking, Y_i is what agent i believes the true status quo θ to be; in more mathematical terms, the statistic Y_i is an estimator of the parameter θ.

Note that the statistics Y_i and Y_j for i ≠ j are different but linked through the public signal Z = z. We shall see that the game admits a unique equilibrium if and only if this link is sufficiently weak. In the next chapter we review a model presented by Hellwig (2002) where the agents infer the status quo Θ from normally distributed signals, which allow for an easy updating process, see section 4.2. Indeed, the static global game model as used in Hellwig (2002) has a unique equilibrium if the private signals are sufficiently more precise than the public signals; see section 3.1.3 for the details.
2.3.2 Payoffs and dominant strategies

In the previous section we argued how a Bayesian analysis of the game leads to a random status quo Θ and a statistic Y_i for agent i ∈ [0, 1]. Now we use
this insight to derive the agents' expected payoffs. The crucial point about the global game model is that the individual agent i draws conclusions from Y_i not only about the status quo, but also about the statistics Y_j of the other agents j ≠ i. Just knowing that the status quo is weak does not help agent i at all; she has to be convinced that sufficiently many agents j ≠ i have come to the same conclusion and are also willing to attack the status quo. In the sequel we omit the index since the agents' information is perfectly symmetric.
The fraction of attacking agents λ depends on the agents' statistics Y, which in turn depend on the status quo Θ: If Θ is low, the agents receive (on average) low signals and derive low statistics Y, which lead to a high fraction of attackers λ(Θ, Y), and vice versa. Conditional on the event that the statistic is Y = y, the size of the attack depends solely on Θ, and there is a real number θ* ∈ R such that regime change occurs if and only if Θ ≤ θ*. Hence, if an individual agent's statistic Y yields the value y, the probability of regime change is

    P[Θ ≤ θ* | Y = y],
and the risk-neutral agent attacks if and only if her expected payoff is non-negative. The expected payoff from attacking, conditional on the event that the statistic yields Y = y, is given by

    u(θ*, y) = (1 − c) P["change" | Y = y] − c P["no change" | Y = y]
             = (1 − c) P[Θ ≤ θ* | Y = y] − c P[Θ > θ* | Y = y]
             = (1 − c) P[Θ ≤ θ* | Y = y] − c (1 − P[Θ ≤ θ* | Y = y])
             = P[Θ ≤ θ* | Y = y] − c.                              (2.1)
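Equation (2.1) reduces the attack decision to a posterior probability net of the cost c. As a purely illustrative sketch, assume the agent's posterior about the status quo is normal, Θ | Y = y ~ N(y, σ²); this particular distributional form is an assumption made only for this example, not part of the general model:

```python
from statistics import NormalDist

def expected_payoff(theta_star, y, c, sigma):
    """u(theta*, y) = P[Theta <= theta* | Y = y] - c, assuming (for
    illustration only) a normal posterior Theta | Y = y ~ N(y, sigma^2)."""
    return NormalDist(mu=y, sigma=sigma).cdf(theta_star) - c

# u is non-increasing in y: stronger beliefs about the regime deter attacks
c, sigma, theta_star = 0.3, 0.2, 0.7
assert expected_payoff(theta_star, 0.5, c, sigma) > expected_payoff(theta_star, 0.9, c, sigma)
```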
Dominance regions  Since higher values of Y imply higher values of the status quo Θ, we conclude that u(θ*, y) is monotonically non-increasing in y, and there exist unique values

    y̲ := sup{ y ∈ R | u(0, y) = 0 },    ȳ := inf{ y ∈ R | u(1, y) = 0 }.

• For any belief Y = y < y̲ it is dominant to attack, since the expected payoff u is positive even if no other agents attack.

• For any belief Y = y > ȳ it is dominant to refrain from attacking, since the expected payoff u is negative even if all agents mutually attack.

Hence, there are dominance regions, (−∞, y̲) and (ȳ, +∞), where it is a dominant strategy to attack or to refrain from attacking, respectively, no matter which actions the other agents decide to play.
2.3.3 Monotone equilibria
We start to analyze the equilibria of the game by looking at monotone Nash
equilibria. We find this type of equilibria whenever there are monotonic payoff
functions and just two possible actions:
Definition 2.4. Let Γ be a game with two possible actions {0, 1}, and y ∈ R a parameter on which the optimal action a = a(y) depends. If there exists a threshold value y* ∈ R such that

    argmax_{a ∈ {0,1}} u(a) = 1 if y < y*,  and  0 if y > y*,

and the agent is indifferent between 0 and 1 for y = y*, then the strategy a(y*) is called a monotone Nash equilibrium. We identify the equilibrium a(y*) with the indifference parameter y*.
Monotonicity  A rational agent attacks the status quo if and only if she believes that the probability of a regime change is high enough to justify taking the risk of an unsuccessful attack. The probability of regime change conditional on the event Y = y,

    P[Θ ≤ θ* | Y = y],

is monotonically non-increasing in y since a higher value of the agent's statistic Y cannot imply a higher probability of regime change. Hence, there is some threshold value y* ∈ (y̲, ȳ) such that the agent attacks if and only if Y ≤ y*.
Conditional on the event that the true status quo is Θ = θ, the fraction λ of agents attacking (playing action a = 1) is thus equivalent to the fraction of agents whose statistic Y is lower than y*. Since there are infinitely many agents participating in the game, we obtain

    λ(θ, y*) = ∫_[0,1] a_i di = ∫_[0,1] 1{Y_i ≤ y*} di
            (∗)
             = E[ 1{Y ≤ y*} | Θ = θ ] = P[ Y ≤ y* | Θ = θ ],

where the equality (∗) follows from the strong Law of Large Numbers (see e.g. Georgii, 2008, section 5.1.3). Note that assuming a continuum of agents [0, 1] significantly simplifies the analysis of the game since the Law of Large Numbers relates the size of the attack to the distribution of the agents' statistic.
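The role of the Law of Large Numbers can be illustrated by simulating a large but finite population: the realized fraction of agents whose statistic falls below the threshold y* approaches P[Y ≤ y* | Θ = θ]. The sketch below assumes, for illustration only, that each statistic is a normally distributed private signal Y_i = θ + ε_i with ε_i ~ N(0, σ²) and uses hypothetical parameter values:

```python
import random
from statistics import NormalDist

random.seed(0)
theta, y_star, sigma = 0.5, 0.6, 0.2   # hypothetical values
n = 200_000

# each agent observes Y_i = theta + noise and attacks iff Y_i <= y*
attacks = sum(random.gauss(theta, sigma) <= y_star for _ in range(n))
lam = attacks / n

# by the strong LLN, lam is close to P[Y <= y* | Theta = theta]
expected = NormalDist(mu=theta, sigma=sigma).cdf(y_star)
assert abs(lam - expected) < 0.01
```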
On the other hand, the status quo is abandoned if the size of the attack λ is at least as large as Θ. The probability of a sufficiently strong attack conditional on the true status quo Θ = θ,

    P[ θ ≤ λ(θ, Y) | Θ = θ ],

is monotonically non-increasing in θ since a higher value of the true status quo θ cannot imply a higher probability of a successful attack. Hence, there must be some threshold value θ* ∈ [0, 1] such that the status quo is abandoned if and only if Θ ≤ θ*.
Ultimately, we treat the regime maker like a player that anticipates the upcoming defeat and automatically abandons the status quo if Θ ≤ θ ∗ . Note that
by construction of the game the regime maker has an implicit utility function
in λ which is positive if and only if θ > λ.
Characterization  We found monotone threshold rules for the agent and the regime maker, respectively, such that we can identify the equilibria of the game by a threshold vector (θ*, y*). Now we derive sufficient and necessary conditions which determine this threshold vector: The regime maker is indifferent between proceeding and resigning if the size of the attack λ just equals the strength of the status quo θ. A risk-neutral agent is indifferent between attacking and refraining if her expected payoff u(θ*, y*) just equals zero. This reasoning leads to the two conditions

    λ(θ*, y*) = θ*    (critical mass condition)
    u(θ*, y*) = 0     (payoff indifference condition)        (2.2)

that are equivalent to

    P[Y ≤ y* | Θ = θ*] = θ*    and    P[Θ ≤ θ* | Y = y*] = c.
We come back to this result in the next chapter when we apply it to the examples worked out by Hellwig (2002) and Angeletos, Hellwig, and Pavan (2007),
respectively.
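To see the two conditions at work, consider the textbook special case of an (improper) uniform prior and normal private noise, so that Y | Θ = θ ~ N(θ, σ²) and Θ | Y = y ~ N(y, σ²). These distributional assumptions go beyond what is specified above and serve only as an illustration; under them the system solves in closed form with θ* = 1 − c, which the following sketch verifies numerically:

```python
from statistics import NormalDist

N = NormalDist()
c, sigma = 0.3, 0.2  # hypothetical cost and private-noise level

# With a flat prior and Y = Theta + N(0, sigma^2) noise, the posterior is
# Theta | Y = y ~ N(y, sigma^2) and the two conditions solve in closed form:
theta_star = 1 - c                                   # regime threshold
y_star = theta_star + sigma * N.inv_cdf(theta_star)  # signal threshold

# critical mass condition: P[Y <= y* | Theta = theta*] = theta*
assert abs(N.cdf((y_star - theta_star) / sigma) - theta_star) < 1e-9
# payoff indifference condition: P[Theta <= theta* | Y = y*] = c
assert abs(N.cdf((theta_star - y_star) / sigma) - c) < 1e-9
```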
Existence  Naturally, the question arises whether there is a solution to (2.2) at all. Since we assumed the random noise terms to have densities, the functions P[Y ≤ y] and P[Θ ≤ ϑ] are differentiable and thus continuous in y and ϑ, respectively. Hence, the function

    [0, 1] × [y̲, ȳ] → [0, 1] × [y̲, ȳ],    (ϑ, y) ↦ (λ(ϑ, y), u(ϑ, y) + y)

has a fixed point by Schauder's Fixed Point Theorem (see e.g. Zeidler, 1986) if

    u(ϑ, y) + y ∈ [y̲, ȳ]    for all    (ϑ, y) ∈ [0, 1] × [y̲, ȳ].    (2.3)

Condition (2.3) is fulfilled if and only if y̲ + u(1, y̲) ≤ ȳ. Since u(1, y) ≤ 1 − c for all y ∈ R, the condition

    1 − c ≤ ȳ − y̲

is sufficient (but not necessary) for the existence of at least one solution to (2.2), i.e. at least one monotone equilibrium.
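In the illustrative normal specification with posterior Θ | Y = y ~ N(y, σ²) (again an assumption, not part of the general model), the dominance bounds can be computed explicitly, and the sufficient condition 1 − c ≤ ȳ − y̲ holds automatically because ȳ − y̲ = 1:

```python
from statistics import NormalDist

N = NormalDist()
c, sigma = 0.3, 0.2  # hypothetical parameters

# u(0, y) = 0  <=>  Phi((0 - y)/sigma) = c, and u(1, y) = 0 analogously,
# so the two dominance bounds are one unit apart:
y_lower = 0.0 - sigma * N.inv_cdf(c)  # attacking dominant below this belief
y_upper = 1.0 - sigma * N.inv_cdf(c)  # refraining dominant above this belief

# sufficient condition for existence: 1 - c <= y_upper - y_lower
assert abs((y_upper - y_lower) - 1.0) < 1e-12
assert 1 - c <= y_upper - y_lower
```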
2.3.4 Iterated dominance
In the previous subsection we fully characterized the monotone equilibria by
their threshold values (θ ∗ , y∗ ). This result, however, did not yet treat questions
of uniqueness and leaves open the existence of other, non-monotone equilibria.
Since we did not specify the distribution of the noise terms, which induce the
randomness into the model, we do not explicitly know the distributions of
Θ and Y. Therefore, we cannot treat aspects of uniqueness without further
assumptions.
For the sake of simplicity, suppose that there is a unique monotone equilibrium y*. We can show that the only equilibrium that survives arbitrarily many rounds of iterated elimination of dominated strategies is exactly this unique monotone equilibrium. Let ŷ ∈ R be some threshold value such that every agent attacks if and only if Y ≤ ŷ, and let λ(ŷ) denote the size of the aggregate attack. Let i ∈ [0, 1] be an arbitrary agent participating in the game. Recall that the expected payoff from attacking for agent i, conditional on the value of her statistic Y_i = y_i, is

    u_i(θ̂(ŷ), y_i) = P[ Θ ≤ θ̂(ŷ) | Y_i = y_i ] − c,

where θ̂ = θ̂(ŷ) solves the critical mass equation λ(θ̂) = θ̂. Note that θ̂ is increasing in ŷ since higher threshold beliefs ŷ lead to higher fractions of attacking agents and thus to a higher value of θ̂.
Consequently, if agents act aggressively (high ŷ), attacks are more likely to be successful, and the expected payoff from attacking increases; in other words, u_i(y_i, ŷ) is increasing in the threshold value ŷ. On the other hand, if agent i acts more cautiously because she expects a strong status quo (high y_i), attacks are more likely to fail, and the expected payoff from attacking decreases; thus, u_i(y_i, ŷ) is decreasing in the value of the agent's statistic y_i.

The above derivation holds for any global game model with strategic complements since the argument is merely based on the rationality of risk-neutral agents seeking to maximize their expected payoff. Recall that the distributions of the status quo Θ and the statistic Y_i have densities such that u(y_i, ŷ) is continuous in both arguments. Since

    lim_{y_i → −∞} u(y_i, ŷ) = 1 − c    and    lim_{y_i → ∞} u(y_i, ŷ) = −c,

we can define a function h : R → R such that y_i = h(ŷ) is the unique solution to u(y_i, ŷ) = 0 with respect to y_i, applying the Intermediate Value Theorem (uniqueness follows from the monotonicity of u in y_i). Hence, when all agents j ≠ i attack if and only if Y_j ≤ ŷ, the best response for agent i is to attack if and only if Y_i ≤ h(ŷ).
In section 2.3.2 we determined dominance regions (−∞, y̲) and (ȳ, +∞) in which it is dominant for any agent to attack (or to refrain from attacking, respectively), independent of the other agents' strategies. Suppose it is common knowledge that all agents j ≠ i attack if and only if Y_j ≤ y̲. Then agent i finds it optimal to attack if and only if Y_i ≤ h(y̲), since this strategy yields a non-negative expected payoff u_i.

The other agents j ≠ i anticipate the optimal strategy of agent i, and strategic complementarity implies that they also find it optimal to attack if and only if Y_j ≤ h(y̲). Agent i in turn anticipates their reasoning and finds it optimal to attack if and only if Y_i ≤ h(h(y̲)). Again, the other agents revise their strategies and find it optimal to attack if and only if Y_j ≤ h(h(y̲)). We refer to this process as iterated elimination of dominated strategies.
Repeating the argument, we construct an increasing sequence of threshold values starting at y̲. Analogous reasoning leads to a decreasing sequence of threshold values starting at ȳ. We denote by y* the threshold value that identifies the unique monotone equilibrium of the game. Since h(y*) = y* we obtain that

    lim_{n→∞} h^(n)(y̲) = y*   and   lim_{n→∞} h^(n)(ȳ) = y*,

where h^(n) denotes the n-fold iteration of the mapping h. Since both monotone sequences converge to h(y*) = y*, the monotone equilibrium is the only equilibrium that survives infinitely many rounds of iterated deletion of dominated strategies. Hence, y* is the unique equilibrium of the game.
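The convergence of the iterated best responses can be illustrated numerically. The sketch below assumes, purely for illustration, the Gaussian information structure of chapter 3 (public precision α, private precision β, hypothetical parameter values): θ̂(ŷ) is found by bisection from the critical mass equation, h(ŷ) follows from the payoff indifference condition, and h is then iterated from an aggressive and a cautious starting threshold.

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):  # inverse CDF by bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

alpha, beta, z, c = 1.0, 1.0, 0.5, 0.3  # hypothetical parameters (beta > alpha^2/(2*pi))

def theta_hat(y_hat):
    """Solve the critical mass equation lambda(theta, y_hat) = theta by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    f = lambda th: Phi(((alpha + beta) * y_hat - alpha * z - beta * th) / math.sqrt(beta)) - th
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def h(y_hat):
    """Best-response threshold: the unique zero of u(., y_hat)."""
    return theta_hat(y_hat) - Phi_inv(c) / math.sqrt(alpha + beta)

# iterate h from an aggressive and a cautious starting threshold
y_low, y_high = -5.0, 5.0
for _ in range(500):
    y_low, y_high = h(y_low), h(y_high)

print(y_low, y_high)  # both sequences approach the same fixed point y*
assert abs(y_low - y_high) < 1e-6
assert abs(h(y_low) - y_low) < 1e-6
```

Both sequences are monotone and, because the parameters satisfy the uniqueness condition of Proposition 3.1, they converge to the same fixed point y*.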
Chapter 3
Equilibria and Learning

3.1 The static global game model
We illustrate the global game model discussed in the introductory chapter, establishing some results originally due to Morris and Shin (1998). In what follows, we loosely follow the notation used by Angeletos, Hellwig, and Pavan (2007).
3.1.1 Information and signals
Recall from section 2.3.1 that at first the agents are completely ignorant of the
status quo Θ. The agents update their non-informative prior by the public signal to obtain their common prior and then each agent updates the common
prior by her private signals.
Public information All agents observe a public signal
Z = Θ + ζ,
ζ ∼ N (0, 1/α)
and then conclude that Θ comes from a normal population with mean Z = z
and precision α. We refer to this distribution as the common prior.
Private information Each agent i ∈ [0, 1] receives a private signal
Xi = Θ + ε i ,
ε i ∼ N (0, 1/β)
which is not observed by any agent but i.
Sufficient statistic   Updating the common prior distribution N(z, 1/α) with her private signal, agent i ∈ [0, 1] obtains the sufficient statistic

    Y_i = (αz + βX_i)/(α + β),

which summarizes her information about the status quo Θ; for details on how to derive the statistic Y see section 4.2.2. Note that the agents' beliefs about Θ are linked through the mean z of the common prior distribution. Since the game is perfectly symmetric we shall omit the index i in the sequel.
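The statistic Y_i is simply the precision-weighted average of the prior mean and the private signal. A quick sketch (with hypothetical numbers) confirms that it coincides with the mean of the posterior density obtained by brute-force numerical integration:

```python
import math

alpha, beta = 2.0, 5.0   # hypothetical precisions of public and private information
z, x = 0.3, 1.1          # observed public signal and private signal

# closed-form sufficient statistic: precision-weighted average
y = (alpha * z + beta * x) / (alpha + beta)

# brute-force check: discretize the posterior density pi(theta | x) on a fine grid
thetas = [i / 10000.0 for i in range(-40000, 60000)]
weights = [math.exp(-0.5 * alpha * (t - z) ** 2 - 0.5 * beta * (x - t) ** 2) for t in thetas]
posterior_mean = sum(t * w for t, w in zip(thetas, weights)) / sum(weights)

assert abs(posterior_mean - y) < 1e-3
print(y)
```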
3.1.2
Indifference conditions
In section 2.3.3 we argued that monotone or threshold equilibria (θ*, y*) are determined by the indifference conditions

    λ(θ*, y*) = θ*   (critical mass condition)
    u(θ*, y*) = 0    (payoff indifference condition)

where λ is the fraction of attacking agents and u is the expected payoff. The first condition states that the fraction of attacking agents just reaches the critical mass that triggers regime change; we could say that the regime maker is indifferent between maintaining and giving up the status quo. The second condition states that the expected payoff is exactly zero, so a risk-neutral agent is indifferent between attacking and refraining.
Critical mass condition   The distribution of the statistic Y conditional on the event that the status quo Θ is the equilibrium state θ* is given by

    Y | (Θ = θ*) ∼ N( (αz + βθ*)/(α + β), β/(α + β)² ).
We know that in equilibrium the fraction of attacking agents equals the equilibrium state θ*. We thus obtain the equation

    λ(θ*, y*) = P[Y ≤ y* | Θ = θ*] = Φ( ((α + β)y* − αz − βθ*)/√β ),

where Φ denotes the cumulative distribution function of the standard normal distribution.
Payoff indifference condition   The distribution of the status quo Θ conditional on the event that an agent's statistic yields the equilibrium value y* is given by

    Θ | (Y = y*) ∼ N( y*, 1/(α + β) ).

Hence, the expected payoff at the equilibrium (θ*, y*) is

    u(θ*, y*) = P[Θ ≤ θ* | Y = y*] − c = Φ( (θ* − y*)√(α + β) ) − c.
Equilibrium characterization   The monotone equilibria are thus implicitly given by the equations

    Φ( ((α + β)y* − αz − βθ*)/√β ) = θ*   (critical mass condition)   (3.1)
    Φ( (θ* − y*)√(α + β) ) = c            (payoff indifference condition)

3.1.3 Uniqueness of equilibria
In section 2.3.4 we saw that a unique monotone equilibrium is the unique equilibrium of the game. The following proposition states under which conditions
there exists a unique monotone equilibrium.
Proposition 3.1 (Morris and Shin). Let α > 0 and β > 0 be the precisions of the public and private signals, respectively. The coordination game with incomplete information admits a unique equilibrium if and only if the condition β > α²/(2π) holds.
Proof. Since Φ is strictly increasing, it has an inverse Φ⁻¹, and we can solve the critical mass equation in (3.1) for

    y* = (αz + βθ* + √β Φ⁻¹(θ*))/(α + β).
Now we plug y* into the formula for the expected payoff and obtain

    u(θ*) = Φ( √(α + β) ( θ* − (αz + βθ* + √β Φ⁻¹(θ*))/(α + β) ) ) − c
          = Φ( ((α + β)θ* − αz − βθ* − √β Φ⁻¹(θ*))/√(α + β) ) − c
          = Φ( √(β/(α + β)) ( (α/√β)(θ* − z) − Φ⁻¹(θ*) ) ) − c.
Recall that θ* is a zero of u(θ). Since u(θ) is continuous and

    lim_{θ→0} u(θ) = 1 − c > 0   and   lim_{θ→1} u(θ) = −c < 0,

at least one solution exists by the Intermediate Value Theorem. In order to ensure its uniqueness, it suffices to give a condition such that u(θ) is strictly monotonic.
The derivative of the expected payoff with respect to θ is

    ∂u(θ)/∂θ = φ( √(β/(α + β)) ( (α/√β)(θ − z) − Φ⁻¹(θ) ) ) × √(β/(α + β)) ( α/√β − 1/φ(Φ⁻¹(θ)) ),
where φ denotes the density of the standard normal distribution. Note that

    1/φ(Φ⁻¹(θ)) = √(2π) exp( (Φ⁻¹(θ))²/2 ) ∈ [√(2π), ∞)   for θ ∈ (0, 1),
and consequently, u(θ) is strictly monotonic if and only if

    α/√β − √(2π) < 0   ⇔   β > α²/(2π),

which completes the proof.
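The condition of Proposition 3.1 can be illustrated numerically: for a hypothetical α, z and c, the payoff function u(θ) built from the last display in the proof is strictly decreasing on a fine grid when β exceeds α²/(2π), and loses monotonicity when β falls below the bound.

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def u(theta, alpha, beta, z, c):
    """Expected payoff after substituting y* from the critical mass condition."""
    g = math.sqrt(beta / (alpha + beta)) * (alpha / math.sqrt(beta) * (theta - z) - Phi_inv(theta))
    return Phi(g) - c

def is_decreasing(alpha, beta, z, c):
    grid = [i / 1000.0 for i in range(1, 1000)]
    vals = [u(t, alpha, beta, z, c) for t in grid]
    return all(a > b for a, b in zip(vals, vals[1:]))

alpha, z, c = 2.0, 0.5, 0.5           # hypothetical parameters
bound = alpha ** 2 / (2.0 * math.pi)  # the uniqueness bound on beta

assert is_decreasing(alpha, 2.0 * bound, z, c)      # beta above the bound: u strictly decreasing
assert not is_decreasing(alpha, 0.2 * bound, z, c)  # beta below the bound: u is not monotone
```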
3.2 The dynamic global game model

3.2.1 Introduction to the model
Angeletos, Hellwig, and Pavan (2007) extend the simple one-period coordination game with incomplete information to a dynamic, multi-period coordination game. The agents have the opportunity to attack in every period as long as the status quo remains. The game ends if regime change occurs in some period t < ∞; otherwise it continues forever. For the sake of simplicity, agents do not go bankrupt, and the continuum of players is the same in every period.
This game is dynamic in the sense that equilibrium play in period t > 1 depends on equilibrium play in previous periods. Note that not every multi-period model is dynamic; Morris and Shin (1999) develop a model of repeated currency attacks, but since the agents observe the status quo at the end of each period, this multi-period game is just a sequence of one-period games without dynamics.
From the fact that regime change has not yet occurred, the Bayesian agents infer additional information about the strength of the status quo. Plainly speaking, the agents observe an unsuccessful attack and learn that the status quo Θ is higher than the equilibrium threshold θ* in that period. Chamley (2003) also explains the dynamics of speculative attacks through the learning of Bayesian agents, but his approach is not a global game model.
On the contrary, Chamley criticizes the global game model of Morris and Shin (1998) for focusing too much on the uniqueness of equilibria while neglecting important features of financial markets. Angeletos, Hellwig, and Pavan (2007) show that the global game approach can be extended to richer environments and examine how endogenous learning influences the dynamics of coordination in multi-period global games of regime change.
3.2.2 Information, signals, and learning
The agents draw their conclusions about the common prior from a public signal in the first period t = 1, just as in the static game. In every period t ≥ 1 the agents receive an independent private signal and update their statistic Y_t. For a brief discussion of the assumption of independence see section 4.2.3.
Public information As in the one-period game, the agents receive a public
signal
Z = Θ + ζ, ζ ∼ N (0, 1/α)
and infer that Θ comes from a population that is normal with mean Z = z and
precision α which we call the common prior.
Learning   In the multi-period game there is an additional source of public information: the agents calculate the threshold equilibria {θ*_1, . . . , θ*_t} up to the current period t. Note that θ*_1 ≤ · · · ≤ θ*_{t−1} holds because the status quo cannot be in place in period t − 1 without having been in place in the preceding periods. The fact that the status quo survived past attacks implies that the relation

    θ*_{t−1} < Θ

holds, which is an endogenous piece of public information. Fortunately, all information that can be learned from past periods is summarized in the threshold value θ*_{t−1} of the preceding period. Without learning, Θ would be normally distributed. Learning implies that the agents obtain a lower bound for Θ, so the status quo follows a truncated normal distribution with cumulative distribution function F(ϑ) = P[Θ < ϑ | θ*_{t−1} < Θ].
Private information   In every period, each agent i ∈ [0, 1] receives a private signal

    X_{i,s} = Θ + ε_{i,s},   ε_{i,s} ∼ N(0, 1/β_s),   1 ≤ s ≤ t,

which is not observed by any agent but i. The precision of the information may vary from period to period as long as

    α²/(2π) ≤ ∑_{s=1}^t β_s < ∞   and   lim_{t→∞} ∑_{s=1}^t β_s = ∞

hold for the precision of the private information accumulated until period t. The first condition ensures that the static game in which agents only move in period t has a unique equilibrium. The second condition ensures that private information becomes infinitely precise in the limit.
Sufficient statistic   Updating the common prior distribution N(z, 1/α) with the private signals X_{i,1}, . . . , X_{i,t}, agent i ∈ [0, 1] obtains the sufficient statistic

    Y_{i,t} = (αz + ∑_{s=1}^t β_s X_{i,s})/(α + ∑_{s=1}^t β_s),

which summarizes her knowledge about the status quo Θ up to period t; for details on how to derive the statistic Y_t and why it is sufficient see sections 4.2.2 and 4.3.2. As in the one-period game, we omit the index i in the sequel since the game is symmetric.
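Because the posterior after each signal is again normal, the statistic Y_t can be updated recursively, folding in one signal per period. A sketch with hypothetical signal values verifies that the recursion reproduces the closed form above:

```python
# Recursive update of the sufficient statistic Y_t: each period's update is a
# precision-weighted average of the current belief and the new private signal.
alpha, z = 1.0, 0.2                    # public-signal precision and realization
betas = [0.5, 0.8, 1.2]                # hypothetical per-period private precisions
xs = [0.9, 0.4, 0.7]                   # hypothetical private signal realizations

mean, precision = z, alpha             # common prior N(z, 1/alpha)
for b, x in zip(betas, xs):
    mean = (precision * mean + b * x) / (precision + b)
    precision = precision + b

# closed form from the text: Y_t = (alpha*z + sum(beta_s * x_s)) / (alpha + sum(beta_s))
y_t = (alpha * z + sum(b * x for b, x in zip(betas, xs))) / (alpha + sum(betas))
assert abs(mean - y_t) < 1e-12
print(mean)
```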
3.2.3 Indifference conditions
Just as we identified a monotone equilibrium of the one-period game with a pair of threshold values in section 2.3.3, we identify a monotone equilibrium of the multi-period game with a sequence of threshold values (θ*_t, y*_t)_{t≥1}. The following lemma (Angeletos et al., 2007, Lemma 1, p. 9) states that this is possible for any monotone equilibrium.

Lemma 3.2 (Angeletos, Hellwig, Pavan). Any monotone equilibrium is characterized by a sequence (θ*_t, y*_t)_{t≥1} such that at any t ≥ 1, an agent attacks if and only if Y_t ≤ y*_t, and the status quo Θ is in place in period t ≥ 2 if and only if θ*_{t−1} < Θ.
Hence, the sequence (θ*_t, y*_t)_{t≥1} has to fulfill the indifference conditions

    λ(θ*_t, y*_t) = θ*_t            (critical mass condition)
    u(θ*_t, θ*_{t−1}, y*_t) = 0     (payoff indifference condition)

for every t ≥ 1.
Critical mass condition   The agent's statistic for the status quo Θ is

    Y_t = (αz + ∑_{s=1}^t β_s X_s)/(α + ∑_{s=1}^t β_s).

Thus, the agent's belief conditional on the event that the status quo Θ equals the equilibrium value θ*_t is normally distributed,

    Y_t | (Θ = θ*_t) ∼ N( (αz + ∑_{s=1}^t β_s θ*_t)/(α + ∑_{s=1}^t β_s), ∑_{s=1}^t β_s/(α + ∑_{s=1}^t β_s)² ),

such that we obtain

    P[Y_t ≤ y*_t | Θ = θ*_t] = Φ( ((α + ∑_{s=1}^t β_s) y*_t − αz − ∑_{s=1}^t β_s θ*_t)/√(∑_{s=1}^t β_s) ),

where Φ denotes the cumulative distribution function of the standard normal distribution.
Payoff indifference condition   The posterior probability of regime change in period t ≥ 2 is given by

    P[Θ < θ*_t | Y_t = y*_t, Θ > θ*_{t−1}],

since it is common certainty that θ*_{t−1} < Θ. For any P-measurable sets A, B, C with P[B ∩ C] > 0 it holds that

    P[A | B ∩ C] = P[A ∩ B ∩ C]/P[B ∩ C] = (P[A ∩ B ∩ C]/P[C]) / (P[B ∩ C]/P[C]) = P[A ∩ B | C]/P[B | C].

Thus, we can rewrite the posterior probability of regime change as

    P[Θ < θ*_t | Y_t = y*_t, Θ > θ*_{t−1}]
    = P[θ*_{t−1} < Θ < θ*_t | Y_t = y*_t] / P[θ*_{t−1} < Θ | Y_t = y*_t]
    = (P[Θ < θ*_t | Y_t = y*_t] − P[Θ < θ*_{t−1} | Y_t = y*_t]) / (1 − P[Θ < θ*_{t−1} | Y_t = y*_t])
    = 1 − (1 − P[Θ < θ*_t | Y_t = y*_t]) / (1 − P[Θ < θ*_{t−1} | Y_t = y*_t]).
The posterior distribution of the status quo Θ, conditional on the event that the agent's belief Y_t about Θ is y*_t, is normal,

    Θ | (Y_t = y*_t) ∼ N( y*_t, 1/(α + ∑_{s=1}^t β_s) ),

such that we obtain

    P[Θ ≤ θ*_t | Y_t = y*_t] = Φ( (θ*_t − y*_t) √(α + ∑_{s=1}^t β_s) ).
We can now calculate the expected payoff

    u(θ*_t, θ*_{t−1}, y*_t) = P[Θ < θ*_t | Y_t = y*_t, Θ > θ*_{t−1}] − c
    = 1 − (1 − P[Θ < θ*_t | Y_t = y*_t]) / (1 − P[Θ < θ*_{t−1} | Y_t = y*_t]) − c
    = 1 − (1 − Φ( √(α + ∑_{s=1}^t β_s) (θ*_t − y*_t) )) / (1 − Φ( √(α + ∑_{s=1}^t β_s) (θ*_{t−1} − y*_t) )) − c
    = 1 − Φ( √(α + ∑_{s=1}^t β_s) (y*_t − θ*_t) ) / Φ( √(α + ∑_{s=1}^t β_s) (y*_t − θ*_{t−1}) ) − c.
Equilibrium characterization   The sequence of monotone equilibria is implicitly given by the equations

    Φ( ((α + ∑_{s=1}^t β_s) y*_t − ∑_{s=1}^t β_s θ*_t − αz)/√(∑_{s=1}^t β_s) ) = θ*_t   (critical mass)   (3.2)
    1 − Φ( √(α + ∑_{s=1}^t β_s) (y*_t − θ*_t) ) / Φ( √(α + ∑_{s=1}^t β_s) (y*_t − θ*_{t−1}) ) = c   (payoff indifference)

which have to hold for every t ≥ 1.
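System (3.2) can be explored numerically: substituting y*_t from the critical mass condition (as done in section 3.2.4) leaves one equation in θ*_t. The sketch below (hypothetical parameters) solves the first period, where no endogenous information exists yet, and then checks that the truncation of beliefs at θ*_1 can only lower the period-2 payoff relative to the corresponding static payoff.

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

alpha, z, c = 1.0, 0.0, 0.4          # hypothetical parameters
beta_cum = [1.0, 2.0]                # cumulative private precision in periods 1, 2

def y_star(theta, bt):
    """y*_t implied by the critical mass condition in (3.2)."""
    return (alpha * z + bt * theta + math.sqrt(bt) * Phi_inv(theta)) / (alpha + bt)

def u(theta, theta_prev, bt):
    """Expected payoff after substituting y*_t; theta_prev=None means no lower bound."""
    y = y_star(theta, bt)
    s = math.sqrt(alpha + bt)
    num = Phi(s * (y - theta))
    den = Phi(s * (y - theta_prev)) if theta_prev is not None else 1.0
    return 1.0 - num / den - c

# period 1: no learning yet, so the payoff is the static one; find its root by bisection
lo, hi = 1e-6, 1.0 - 1e-6
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    (lo, hi) = (mid, hi) if u(mid, None, beta_cum[0]) > 0 else (lo, mid)
theta1 = 0.5 * (lo + hi)
assert abs(u(theta1, None, beta_cum[0])) < 1e-6

# period 2: the truncation at theta1 can only lower the payoff for any theta > theta1
for th in [theta1 + k * 0.01 for k in range(1, 10)]:
    assert u(th, theta1, beta_cum[1]) < u(th, None, beta_cum[1])
```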
3.2.4 Uniqueness of equilibria
We can solve the critical mass condition in (3.2) for y*_t and obtain

    y*_t = (αz + ∑_{s=1}^t β_s θ*_t + √(∑_{s=1}^t β_s) Φ⁻¹(θ*_t))/(α + ∑_{s=1}^t β_s).
For ease of notation set β̃_t := ∑_{s=1}^t β_s. Plugging y*_t into the denominator of the payoff indifference condition, we obtain

    √(α + β̃_t) (y*_t − θ*_{t−1})
    = (1/√(α + β̃_t)) ( αz + β̃_t θ*_t + √(β̃_t) Φ⁻¹(θ*_t) − (α + β̃_t) θ*_{t−1} )   (3.3)
    = (1/√(α + β̃_t)) ( α(z − θ*_t) + √(β̃_t) Φ⁻¹(θ*_t) + (α + β̃_t)(θ*_t − θ*_{t−1}) ).
Finally, plugging equation (3.3) into the formula for the expected payoff yields

    u(θ*_t, θ*_{t−1}) = 1 − Φ( (α + β̃_t)^(−1/2) ( α(z − θ*_t) + √(β̃_t) Φ⁻¹(θ*_t) ) ) / Φ( (α + β̃_t)^(−1/2) ( α(z − θ*_t) + √(β̃_t) Φ⁻¹(θ*_t) + (α + β̃_t)(θ*_t − θ*_{t−1}) ) ) − c.

Note that u = u(θ*_t, θ*_{t−1}, z, α, β̃_t) is continuous in all arguments. Applying l'Hôpital's rule we can verify that

    lim_{θ*_t → 0} u(θ*_t, θ*_{t−1}) = 1 − c   (for θ*_{t−1} < 0),
    lim_{θ*_t → 1} u(θ*_t, θ*_{t−1}) = −c     (for θ*_{t−1} < 1).
Obviously, the determination of monotone equilibria is now significantly more
difficult than in the static model. We cite the following theorem (Angeletos
et al., 2007, Theorem 1, p. 15) which states that the number of monotone equilibria depends on the common prior mean z.
Theorem 3.3 (Equilibria). There exist threshold values z̲, z̄ ∈ R such that:

• If z ∈ (−∞, z̲] there exists a unique monotone equilibrium, and an attack occurs only in period t = 1.
• If z ∈ (z̲, z̄) there are at most finitely many monotone equilibria, and there exists t* < ∞ such that no attack occurs after period t = t*.
• If z ∈ [z̄, +∞) there are infinitely many equilibria.
Proof of Theorem 3.3. Since the proof is lengthy and requires some preparation, the interested reader is referred to Angeletos et al. (2007). We only prove the first claim.

Note that the denominator in the payoff function goes to one if there is no lower bound, such that lim_{θ*_{t−1} → −∞} u(θ, θ*_{t−1}) = u(θ), where u(·) is the payoff function from the static game. Since we assumed that

    ∑_{s=1}^t β_s ≥ α²/(2π),

we know from Proposition 3.1 that the payoff function u(θ) is strictly decreasing in θ and there is a unique solution θ̂_t such that u(θ̂_t) = 0. Thus, we conclude that

    u(θ, θ*_{t−1}) ≤ u(θ, −∞) < u(θ̂_t, −∞) = u(θ̂_t) = 0   for all θ > θ̂_t.

Consequently, the threshold values of the dynamic game are bounded from above such that

    θ*_{t−1} ≤ θ*_t ≤ θ̂_t,   (3.4)

because otherwise there is no solution to equation (3.2).
Let u(θ, Y_t) denote the payoff function of the static game in two arguments. If the prior mean z from the public signal is so low that the arrival of new private information (on average) only increases the statistic Y_t, the threshold value θ̂_t solving u(θ̂_t, Y_t) = 0 decreases in each period, since u(θ, Y_t) is decreasing in Y_t. Hence, we can find z̲ ∈ R such that for all z ≤ z̲ we have θ̂_t ≤ θ̂_1 for all t ≥ 1. Note that θ*_1 = θ̂_1, since the dynamic game coincides with the static one in the first period. Using equation (3.4), we obtain that

    θ*_1 = θ̂_1 ≤ θ*_t ≤ θ̂_t ≤ θ̂_1 = θ*_1,

such that the sequence θ*_t ≡ θ*_1 for all t ≥ 1 is the unique equilibrium of the game.
Why is it reasonable that the common prior mean z governs the equilibria of the dynamic game? In contrast to the static game, the arrival of new private information causes the variance of the statistic Y_t to decrease, while the learning process establishes lower bounds θ*_{t−1} on the status quo. Plainly speaking, at the beginning of the game an agent's knowledge about the status quo relies heavily on the common prior mean z, while later in the game the agent infers the status quo from sources that are more precise. Hence, the prior mean might be misleading if it takes extreme values:
If the public signal leads to a very low prior mean, the agents start with
a low statistic Yt such that attacks are likely to occur in the first period. In
the following periods, however, the arrival of new private information (on
average) increases Yt and the learning from past thresholds establishes lower
bounds on the status quo. Hence, the low prior mean z was misleading the
agents into aggressive behavior.
If the public signal leads to a very high prior mean, the agents start with a
high statistic Yt such that attacks are unlikely to occur in the first period. In the
following periods, however, the arrival of new private information (on average) decreases Yt and attacks become more likely, even though past thresholds
θt∗−1 reflect the strength of the status quo. Hence, the high prior mean z was
misleading the agents into inappropriately cautious behavior, but attacks after
the first period are possible.
3.3 Games with non-deterministic status quo
Angeletos, Hellwig, and Pavan (2007) extend the classic global game model to a dynamic setting where learning takes the form of truncating the support of beliefs about the status quo Θ. Unfortunately, uniqueness is lost in the dynamic setting insofar as it only holds in the trivial case where the only attack takes place in the first period. The main achievement of global game modeling, however, lies in the construction of a unique equilibrium for coordination games which naturally exhibit multiple pure Nash equilibria.
One might hope that uniqueness could be restored for the dynamic game if the effect of learning becomes sufficiently imprecise. Angeletos, Hellwig, and Pavan (2007, chapter 5) introduce extensions to their dynamic game, adding observable and unobservable shocks to the status quo. In this section we try to capture the main ideas and results of these extensions. For a detailed analysis including the proofs, the reader is referred to the original paper.
3.3.1 Observable shocks
We extend the dynamic global game model by introducing independent, identically distributed random shocks ω_t ∼ N(0, δ) to the status quo Θ in each period. We assume that regime change occurs in period t ≥ 1 if and only if

    λ_t ≥ Θ + ω_t,

where λ_t denotes the size of the aggregate attack in period t. Hence, besides the true value of the status quo the agents are concerned with the size of the random shock, which is independent of Θ and observable. In an application to currency crises, the status quo Θ may capture the attitude of the central banker, while ω_t may represent some observable economic fundamental like the interest rate in period t.
Public information   As in the previous games, the agents receive a public signal

    Z = Θ + ζ,   ζ ∼ N(0, 1/α),

which leads to the common prior distribution Θ ∼ N(z, 1/α).
Private information   In every period, each agent i ∈ [0, 1] receives a private signal

    X_{i,s} = Θ + ω_s + ε_{i,s},   ε_{i,s} ∼ N(0, 1/β_s) i.i.d.,   1 ≤ s ≤ t,

which is not observed by any agent but i.
Monotone equilibria   An agent's optimal action in period t ≥ 1 depends on the unknown status quo Θ and the observable shocks ω^t := (ω_1, . . . , ω_t). We can find threshold functions θ*_t(ω^t) and y*_t(ω^t) such that in period t ≥ 1 regime change occurs if and only if Θ ≤ θ*_t(ω^t), and an agent attacks if and only if Y_t ≤ y*_t(ω^t).

Angeletos, Hellwig, and Pavan (2007, section 5.3) show that θ*_t(ω^t) and y*_t(ω^t) indeed characterize the monotone equilibria of the game. This is not surprising since the observable shocks do not interfere with the structure of higher-order beliefs. In particular, the learning process is not affected by observable shocks. Angeletos, Hellwig, and Pavan also present a continuity result (partially in the online supplement): for a sufficiently small variance δ > 0 of the shock, the equilibria of the extended game are arbitrarily close to the equilibria of the original dynamic game.

3.3.2 Unobservable shocks
In this section the setting is identical to that of the previous section, except that the random shocks are no longer observable.
Monotone equilibria   Note that the sufficient statistic Y_t still summarizes all information available to an individual agent. Hence, we can still characterize the monotone equilibria with respect to an individual agent by a sequence of threshold values (y*_t)_{t≥1} such that an agent attacks in period t ≥ 1 if and only if Y_t ≤ y*_t.

Since ω^t := (ω_1, . . . , ω_t) affects the strength of the status quo and is unobserved, we can no longer characterize the monotone equilibria with respect to the status quo by a sequence (θ*_t(ω^t))_{t≥1} of truncation points. Instead, the agents update the information from learning induced by the strategy (y*_t)_{t≥1} in each period. More precisely, in each period t > 1 the agents update the learned information with the probability that no regime change occurs, where the posterior from the previous period serves as the prior distribution. The prior used in the second period is the common prior deduced from the public signal.
The shocks ωt have a smoothing effect: The support of the distribution
of the status quo is not truncated since the posterior is positive on the whole
real line. Angeletos, Hellwig, and Pavan show that for δ → 0 the posterior
converges pointwise to the truncated normal distribution from the original
dynamic game. Further, the equilibria of the original dynamic game can be
approximated arbitrarily well by a perturbed game with a sufficiently small
variance of the shocks δ > 0.
3.3.3 Random walk status quo
We present another extension where the status quo follows a random walk such that Θ_0 is the initial state and

    Θ_s = Θ_{s−1} + η_s,   η_s ∼ N(0, γ) i.i.d.,   1 ≤ s ≤ t,

is the status quo in period s. Since a moving status quo disturbs the learning process and dilutes the importance of public information, one could hope that this version of the dynamic global game model has a non-trivial unique equilibrium. Unfortunately, we shall see that this model is mathematically intractable unless some simplifications are added.
Public information   As in the previous games, the agents receive a public signal

    Z = Θ_0 + ζ,   ζ ∼ N(0, 1/α),

which leads to the common prior distribution Θ_0 ∼ N(z, 1/α). Since the status quo Θ_t follows a Gaussian random walk, however, the quality of the prior information deteriorates very quickly.
Learning   Just as in the previous sections, the agents learn about the status quo from preceding periods. Since the status quo Θ_t jumps in each period, however, the endogenous information cannot be summarized in the maximum of all past equilibrium thresholds. Similar to the game with unobserved shocks, we obtain a noisy learning process whose posterior has full support on R.
Private information   As in the preceding sections, each agent receives a private signal

    X_t = Θ_t + ε_t,   ε_t ∼ N(0, 1/β),   t ∈ N,

about the current status quo. For the sake of simplicity, we assume the noise ε_t to be of constant precision β for all t ≥ 1. Note that the signals {X_1, . . . , X_t} are correlated since

    X_t = Θ_0 + ∑_{r=1}^t η_r + ε_t   and thus   Cov[X_s, X_t] = ∑_{r=1}^{s∧t} Var[η_r] = (s ∧ t)γ   for s ≠ t.

Although we can still give a sufficient statistic Y_t with respect to the current status quo Θ_t, the statistic Y_t is not sufficient with respect to earlier states Θ_s of the status quo, where 1 ≤ s < t.
Monotone equilibria   Since we can no longer characterize monotone equilibria by threshold values of sufficient statistics, we have to look at functions

    g : R^t → R,   h : R^{t+1} → R   for all t ≥ 1,

such that the status quo is abandoned in period t ≥ 1 if and only if

    G_t := g(Θ_1, . . . , Θ_t) ≤ 0,

and an individual agent attacks in period t ≥ 1 if and only if

    H_t := h(Z, X_1, . . . , X_t) ≤ 0.

Assume there are, for every t ≥ 1, functions G*_t and H*_t that comply with the conditions

    λ(G*_t, H*_t) = G*_t              (critical mass condition)
    u(G*_1, . . . , G*_t, H*_t) = 0   (payoff indifference condition)

where

    u(G_1, . . . , G_t, H_t) = P[G_t ≤ 0 | H_t = 0, G_1 = 0, . . . , G_{t−1} = 0] − c

and

    λ(G_t, H_t) = P[H_t ≤ 0 | G_t = 0].

We can then identify the monotone equilibria of the game with a sequence of threshold functions (G*_t, H*_t)_{t≥1}. Hence, without the use of a one-dimensional sufficient statistic the equilibrium analysis becomes somewhat intractable or even infeasible.
Angeletos, Hellwig, and Pavan (2007) attempt to get around this problem by introducing short-lived agents, such that a new cohort of agents replaces the old one in each period. New cohorts still learn from the fact that the regime survived past attacks, but since the agents only play one period (and consequently have a sufficient statistic for this period), monotone equilibria can again be characterized by real-valued thresholds.
The original dynamic model   Obviously, the original dynamic model is a special case where Θ ≡ Θ_s for all 1 ≤ s ≤ t. We find that G_t and H_t take a simple form. Let

    G_t = Θ − θ_t   and   H_t = h(Z, X_1, . . . , X_t) = (αZ + ∑_{s=1}^t β_s X_s)/(α + ∑_{s=1}^t β_s) − y*_t.

Recall that in the original dynamic model learning consists in truncating the support of the beliefs about Θ. Therefore, we obtain

    P[G_t ≤ 0 | G_1 = 0, . . . , G_{t−1} = 0] = P[Θ ≤ θ | Θ > θ*_1, . . . , Θ > θ*_{t−1}]
    = P[Θ ≤ θ | Θ > max{θ*_1, . . . , θ*_{t−1}}] = P[Θ ≤ θ | Θ > θ*_{t−1}]
    = P[G_t ≤ 0 | G_{t−1} = 0],

which allows for an easy analysis of the learning process.
Chapter 4
Bayesian updating

4.1 Bayesian statistics
In this section we give a brief introduction to the basic concepts of Bayesian statistics, focusing on the construction of Bayesian players in the global game model. For more complete introductions to Bayesian statistics see, e.g., Box and Tiao (1973) or Robert (2001).
4.1.1 Subjective probability
Since the structure of private and public information is a major topic in the
theory of global games (see section 2.1.4) it seems appropriate to dedicate
some lines to the modeling of knowledge. Recall that the players participating in a coordination game with incomplete information are assumed to be
risk-neutral, rational agents seeking to maximize their expected utility.
In the global game model discussed in this thesis, the agents want to infer
about the status quo from noisy public and private signals and then opt for
the strategy which is optimal from their subjective point of view. To do so,
however, the agents need some statistical tool that allows them to calculate
their expected utility.
Clearly, a statistical approach based on the assumption that probabilities are objective quantities derived from repeated, independent experiments is not appropriate for solving an individual decision problem. An agent cannot handle her decision problem by testing the null hypothesis "regime change occurs" against the alternative, since the experiment is not repeatable. The agent has only one shot and needs to make her decision on the basis of her individual state of knowledge.
4.1.2 Bayesian inference
Mathematically, we model prior signals about the outcome of an event by conditional probabilities. If an agent knows what kind of signal she receives if the regime changes, how probable the event of regime change is, and how probable the signal is, then she can infer the probability of regime change conditional on her signal by

    P["regime change" | "signal"] = P["signal" | "regime change"] · P["regime change"] / P["signal"].
We restate the above equation in mathematical terms. Let (Ω, B_Ω, P) be a probability space and A, B ∈ B_Ω two P-measurable sets with P[B] > 0. The relationship

    P[A | B] = P[A ∩ B]/P[B] = (P[A ∩ B]/P[A]) · (P[A]/P[B]) = P[B | A] P[A] / P[B],   (4.1)

usually referred to as Bayes' Theorem, relates the conditional and the marginal probabilities. We can picture A as the event of regime change and B as the signal that the agent receives. Hence, the agent can draw inferences about the future of the status quo from her signal if the structure of the signal is known. In mathematical terms, the agent needs to know the distribution and parametrization of the signal.
4.1.3 Prior and posterior distributions
Let (Rn , BRn , Pθ : θ ∈ P ) be a statistical model with n real-valued observations. In contrast to standard notation in Bayesian statistics, we will distinguish between the random variable and its realization. As customary in probability theory, we use upper-case letters for random variables and lower-case
letters for the corresponding realization.
Consider a probability distribution with a density f that depends on a parameter θ ∈ P from a parameter space P . In the case of the normal distribution
with known variance the only parameter is the distribution mean, i.e. θ = µ
and P = R. We do not know θ, but we observe an event A ∈ BRn , for example
that some signals take the value X = x. If all values of θ in P were equally
probable before we observed the signal, what can we infer about θ from the
information contained in A?
In parametric Bayesian statistics, the parameter θ of the statistical model is
treated like a random variable which we denote by Θ. The prior distribution
is a probability distribution on the parameter space P made up of information
available before the signals X are received (or the sample X is drawn). The posterior distribution PΘ | X = x is the prior distribution PΘ conditional on the signals
received; we say the prior is updated by the signals. We denote the density of
the prior distribution by π (θ ) and the density of the posterior distribution by
π ( θ | X = x ).
Example 4.1 (Common prior). Recall the brief discussion in section 2.3.1 about the status quo θ being a random variable Θ from a Bayesian perspective. The agent in the global game model receives the public signal

    Z = Θ + ζ,   ζ ∼ N(0, 1/α).

If Z = z is observed, it holds that Θ = z − ζ. From the observation of the public signal z we already gain some information about which values of Θ are more probable. Hence, we find a prior distribution Θ ∼ N(z, 1/α) on the parameter space P = R.
The inference based on the distribution of Θ conditional on the event that X = x is defined by

    π(θ | X = x) = f(x | Θ = θ) π(θ) / ∫_P f(x | Θ = θ) π(θ) dθ.

Note the analogous structure to Bayes' Theorem in equation (4.1).
4.2 Updating from Gaussian signals
In chapter 3 the agents summarize their information about the status quo Θ in
a one-dimensional, sufficient statistic denoted by Y. In this section we show
how to derive these statistics in general and then apply our results to the global
game model.
4.2.1  Normal prior and normal signals
Recall from Example 4.1 in the previous section that the prior distribution
for the estimation problem in the global game model is N (z, 1/α).
Theorem 4.2 (Bayesian estimator). Let Σ be a positive-definite covariance matrix and (X_1, …, X_t)^T ∼ N(θ 1_t, Σ) the signals with mean Θ = θ. Let N(z, 1/α) be the prior distribution for Θ. Then the posterior distribution is

N( z + c^T C^{-1} (x − z 1_t),  α^{-1} − c^T C^{-1} c ),

where c := α^{-1} 1_t and C := α^{-1} 1_t 1_t^T + Σ.
Obviously, it suffices to invert the covariance matrix C in order to state the posterior distribution explicitly. The following lemma is often useful for that purpose and is also needed in the proof of the theorem.
Lemma 4.3. Let x, y be two column vectors and let A and xy^T + A be regular matrices. Then it holds that

y^T (xy^T + A)^{-1} = y^T A^{-1} / (y^T A^{-1} x + 1).
Proof of Lemma 4.3. Note that
y T A−1 ( xy T + A) = y T ( A−1 xy T + I ) = (y T A−1 x + 1)y T .
Multiply by ( xy T + A)−1 and divide by (y T A−1 x + 1) to complete the proof.
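Since Lemma 4.3 is a Sherman–Morrison-type identity, it is easy to verify numerically; the sketch below uses arbitrary illustrative values for A, x and y.

```python
import numpy as np

# Numerical check of Lemma 4.3:
# y^T (x y^T + A)^{-1} = y^T A^{-1} / (y^T A^{-1} x + 1).
# A, x, y below are arbitrary test values, not model quantities.

rng = np.random.default_rng(0)
t = 4
A = np.eye(t) + 0.1 * rng.standard_normal((t, t))  # a regular (invertible) matrix
x = rng.standard_normal(t)
y = rng.standard_normal(t)

lhs = y @ np.linalg.inv(np.outer(x, y) + A)
rhs = y @ np.linalg.inv(A) / (y @ np.linalg.inv(A) @ x + 1.0)

assert np.allclose(lhs, rhs)
```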
Proof of Theorem 4.2. The density of the prior distribution is

π(θ) = (2π α^{-1})^{-1/2} exp( −(1/2) α (θ − z)^2 ),
and the density of the joint distribution of (X_1, …, X_t) is

f(x | Θ = θ) = ((2π)^t det(Σ))^{-1/2} exp( −(1/2) (x − θ 1_t)^T Σ^{-1} (x − θ 1_t) ),
see e.g. Georgii (2008), theorem 9.2, for a proof.
For convenience of notation we use the proportionality symbol ∝ which
means that terms are equal up to a constant. We obtain
π(θ | x) = f(x | Θ = θ) π(θ) / ∫_P f(x | Θ = θ) π(θ) dθ
         ∝ exp( −(1/2) α (θ − z)^2 ) exp( −(1/2) (x − θ 1_t)^T Σ^{-1} (x − θ 1_t) )
         ∝ exp( −(1/2) ( α (θ − z)^2 + (x − θ 1_t)^T Σ^{-1} (x − θ 1_t) ) ).
Applying Lemma 4.3 with x = −α^{-1} 1_t, y = 1_t and A = C, we obtain

1_t^T Σ^{-1} = 1_t^T (−α^{-1} 1_t 1_t^T + C)^{-1} = 1_t^T C^{-1} / (1 − α^{-1} 1_t^T C^{-1} 1_t) = c^T C^{-1} / (α^{-1} − c^T C^{-1} c).    (4.2)
Tedious calculations and use of equation (4.2) yield that, up to an additive term that does not depend on θ, indeed

α (θ − z)^2 + (x − θ 1_t)^T Σ^{-1} (x − θ 1_t) = ( θ − (z + c^T C^{-1} (x − z 1_t)) )^2 / ( α^{-1} − c^T C^{-1} c ),

which implies that π(θ | x) is the density of a normal distribution with mean z + c^T C^{-1} (x − z 1_t) and variance α^{-1} − c^T C^{-1} c.
The proof presented above is the Bayesian way to obtain Theorem 4.2. In
fact, for the special case that both the prior distribution and the signals are
normally distributed, there is another way to see that Theorem 4.2 is correct.
We can use basic properties of the multi-normal distribution to obtain the same
result. This reasoning, however, has nothing to do with Bayesian thinking in
terms of prior and posterior distribution.
Recall that the agents in the global game model receive a public signal
Z = Θ + ζ,    ζ ∼ N(0, 1/α),

and private signals

X_t = Θ + ε_t,    ε_t ∼ N(0, 1/β_t).
If we know that Z = z and therefore Θ ∼ N (z, 1/α), the random vector
(Θ, X1 , . . . , Xt )T is normally distributed. If the signals exhibit a non-degenerate
joint distribution with covariance matrix Σ, it holds for all 1 ≤ i, j ≤ t that
Cov[Θ, X_i] = Cov[z − ζ, z − ζ + ε_i] = α^{-1},
Cov[X_i, X_j] = Cov[z − ζ + ε_i, z − ζ + ε_j] = α^{-1} + σ_{i,j}.
We again obtain the Bayesian estimator for Θ as in Theorem 4.2 by applying
the following theorem.
Theorem 4.4 (Conditional multivariate normal). Let (Θ, X_1, …, X_t)^T be a random vector such that

\begin{pmatrix} Θ \\ X \end{pmatrix} ∼ N\left( \begin{pmatrix} z \\ z 1_t \end{pmatrix}, \begin{pmatrix} α^{-1} & c^T \\ c & C \end{pmatrix} \right),

where C := α^{-1} 1_t 1_t^T + Σ and c := α^{-1} 1_t. The distribution of Θ conditional on X = x is given by

(Θ | X = x) ∼ N( z + c^T C^{-1} (x − z 1_t),  α^{-1} − c^T C^{-1} c ),

such that we obtain another version of Theorem 4.2.
Proof. In contrast to the Bayesian proof we do not need a lot of preparation.
Let

Θ* := Θ − c^T C^{-1} X,    a := (1, −c^T C^{-1})^T,    B := (0_t, I_t),

where I_t is the t-dimensional unit matrix. We can write Θ* and X as

Θ* = a^T (Θ, X^T)^T    and    X = B (Θ, X^T)^T.
Recall that the normal distribution is closed under linear transformations (see
e.g. Georgii, 2008, theorem 9.5 for a proof). Hence, Θ∗ is normally distributed
with mean E[Θ*] = z − z c^T C^{-1} 1_t and variance

Var[Θ*] = Var[a^T (Θ, X^T)^T] = a^T Var[(Θ, X^T)^T] a
        = (1, −c^T C^{-1}) \begin{pmatrix} α^{-1} & c^T \\ c & C \end{pmatrix} \begin{pmatrix} 1 \\ −C^{-1} c \end{pmatrix}
        = (α^{-1} − c^T C^{-1} c, 0_t^T) \begin{pmatrix} 1 \\ −C^{-1} c \end{pmatrix} = α^{-1} − c^T C^{-1} c.
Note that Θ* and X are independent, since they are jointly normal and for the covariance between Θ* and X it holds that

Cov[Θ*, X] = a^T Var[(Θ, X^T)^T] B^T = (1, −c^T C^{-1}) \begin{pmatrix} α^{-1} & c^T \\ c & C \end{pmatrix} \begin{pmatrix} 0_t^T \\ I_t \end{pmatrix}
           = (α^{-1} − c^T C^{-1} c, c^T − c^T C^{-1} C) \begin{pmatrix} 0_t^T \\ I_t \end{pmatrix} = 0_t^T.
Since Θ = Θ∗ + c T C −1 X, we find that for a fixed value of X = x, the random
variable Θ is equivalent to Θ∗ plus a constant term c T C −1 x. In other words,
we can write Θ conditional on the event X = x as
(Θ | X = x) = Θ* + c^T C^{-1} x,

and since Θ* ∼ N( z − z c^T C^{-1} 1_t, α^{-1} − c^T C^{-1} c ) by construction, we obtain

(Θ | X = x) ∼ N( z + c^T C^{-1} (x − z 1_t),  α^{-1} − c^T C^{-1} c ),
which completes the proof. Note that a slightly modified version of Theorem
4.4 would still hold if Θ was a multi-normal vector; the proof for the extended
theorem remains the same.
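The projection argument in the proof can be checked with a few lines of linear algebra; the sketch below verifies, for illustrative (not model-calibrated) values of α and Σ, that Θ* = Θ − c^T C^{-1} X is uncorrelated with X and has variance α^{-1} − c^T C^{-1} c.

```python
import numpy as np

# Linear-algebra sketch of Theorem 4.4: with joint covariance
# [[1/alpha, c^T], [c, C]], C = (1/alpha) 1 1^T + Sigma, c = (1/alpha) 1,
# the residual Theta* = Theta - c^T C^{-1} X is uncorrelated with X,
# and its variance equals the posterior variance 1/alpha - c^T C^{-1} c.

rng = np.random.default_rng(1)
t, alpha = 3, 2.0
M = rng.standard_normal((t, t))
Sigma = M @ M.T + np.eye(t)              # positive-definite noise covariance
ones = np.ones(t)
c = ones / alpha
C = np.outer(ones, ones) / alpha + Sigma

V = np.block([[np.array([[1 / alpha]]), c[None, :]],
              [c[:, None], C]])          # joint covariance of (Theta, X)

a = np.concatenate(([1.0], -c @ np.linalg.inv(C)))   # Theta* = a^T (Theta, X^T)^T
B = np.hstack([np.zeros((t, 1)), np.eye(t)])         # X = B (Theta, X^T)^T

cov_theta_star_X = a @ V @ B.T           # should vanish: Theta* independent of X
var_theta_star = a @ V @ a               # should equal 1/alpha - c^T C^{-1} c

assert np.allclose(cov_theta_star_X, 0.0)
assert np.isclose(var_theta_star, 1 / alpha - c @ np.linalg.inv(C) @ c)
```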
4.2.2  Independent normal signals
In this section we apply Theorem 4.4 to the case of independent normal signals, i.e. {X_1, …, X_t} are uncorrelated. In Angeletos, Hellwig, and Pavan (2007) the private signals are

X_t = Θ + ε_t,    ε_t ∼ N(0, 1/β_t),    t ∈ N,

with the joint distribution (X_1, …, X_t)^T ∼ N( Θ 1_t, diag(β_1^{-1}, …, β_t^{-1}) ). The following proposition gives a Bayesian point estimator for Θ. We discussed the derivation of the prior distribution from the public signal in Example 4.1 in Section 4.1.
Proposition 4.5. Let N(z, α^{-1}) be the prior distribution and

(X_1, …, X_t)^T ∼ N( θ 1_t, diag(β_1^{-1}, …, β_t^{-1}) )

the joint distribution of the signals. By means of Bayesian updating, an agent's posterior belief is

N( (αz + ∑_{r=1}^t β_r x_r) / (α + ∑_{r=1}^t β_r),  1 / (α + ∑_{r=1}^t β_r) )

distributed conditional on X_1 = x_1, …, X_t = x_t.
Although Theorem 4.4 states the same relation between Θ and (X_1, …, X_t) as Theorem 4.2, we should keep in mind that in the global game model agents update information in a Bayesian way, which is theoretically justified by Theorem 4.2.
Proof of Proposition 4.5. Since the signals are just the random state plus independent noise, we set

Σ := diag(β_1^{-1}, …, β_t^{-1}),    C := α^{-1} 1_t 1_t^T + Σ,    c := α^{-1} 1_t.

Applying Lemma 4.3 with x = 1_t, y = c and A = Σ yields

c^T C^{-1} = c^T (c 1_t^T + Σ)^{-1} = c^T Σ^{-1} / (1 + c^T Σ^{-1} 1_t) = α^{-1} 1_t^T Σ^{-1} / (1 + α^{-1} 1_t^T Σ^{-1} 1_t) = 1_t^T Σ^{-1} / (α + 1_t^T Σ^{-1} 1_t).

Since Σ^{-1} = diag(β_1, …, β_t) and hence 1_t^T Σ^{-1} x = ∑_{r=1}^t β_r x_r, we obtain

z + c^T C^{-1} (x − z 1_t) = ( zα + z 1_t^T Σ^{-1} 1_t + 1_t^T Σ^{-1} (x − z 1_t) ) / (α + 1_t^T Σ^{-1} 1_t) = ( αz + ∑_{r=1}^t β_r x_r ) / ( α + ∑_{r=1}^t β_r ),

α^{-1} − c^T C^{-1} c = ( 1 + 1_t^T Σ^{-1} c − 1_t^T Σ^{-1} c ) / (α + 1_t^T Σ^{-1} 1_t) = 1 / ( α + ∑_{r=1}^t β_r ).

Applying Theorem 4.2 completes the proof.
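The reduction from the matrix formulas of Theorem 4.2 to the precision-weighted expressions can be confirmed numerically; the values below are illustrative.

```python
import numpy as np

# Numerical check of Proposition 4.5: for independent signals with
# Sigma = diag(1/beta_1, ..., 1/beta_t), the matrix formulas of Theorem 4.2
# reduce to the precision-weighted mean and variance.

rng = np.random.default_rng(2)
t, alpha, z = 5, 1.5, 0.2
beta = rng.uniform(0.5, 3.0, size=t)     # private-signal precisions beta_1..beta_t
x = rng.standard_normal(t)               # observed signals x_1..x_t

Sigma = np.diag(1.0 / beta)
ones = np.ones(t)
c = ones / alpha
C = np.outer(ones, ones) / alpha + Sigma
Cinv = np.linalg.inv(C)

mean_matrix = z + c @ Cinv @ (x - z * ones)       # Theorem 4.2 form
var_matrix = 1 / alpha - c @ Cinv @ c

mean_simple = (alpha * z + (beta * x).sum()) / (alpha + beta.sum())
var_simple = 1.0 / (alpha + beta.sum())

assert np.isclose(mean_matrix, mean_simple)
assert np.isclose(var_matrix, var_simple)
```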
4.2.3  Dependent signals
In the previous section we derived the Bayesian point estimator for independent normal signals as used in the benchmark model of Angeletos, Hellwig, and Pavan (2007). One could argue that a new piece of information is hardly ever independent of previous news, such that independent private signals are an oversimplification of real-world economics.
Now we introduce sequentially correlated signals that model adaptive arrival of new information. Let us assume that each agent receives an initial signal

X_1^* = Θ + ε_1,    ε_1 ∼ N(0, 1/β),

and a signal in each period that depends on earlier signals in the sense that

X_t^* = δ X_{t−1}^* + (1 − δ)(Θ + ε_t),    ε_t ∼ N(0, 1/β) i.i.d.,    t ∈ N,
where δ ∈ [0, 1] describes the persistence of old information. For δ = 0 note
that { X1∗ , . . . , Xt∗ } are independent while for δ = 1 we have X1∗ = · · · = Xt∗ .
The covariance for the more interesting case δ ∈ (0, 1) is given in the following
lemma.
Lemma 4.6. For δ ∈ (0, 1) the covariance between two signals X_s^* and X_t^*, conditional on the event Θ = θ, is given by

Cov[X_s^*, X_t^* | Θ = θ] = β^{-1} ( (2δ/(1+δ)) δ^{t+s−2} + ((1−δ)/(1+δ)) δ^{|t−s|} ).
Proof. We can write the signal explicitly as X_t^* = Θ + δ^{t−1} ε_1 + (1 − δ) ∑_{r=2}^t δ^{t−r} ε_r, such that for Θ = θ we obtain

β Cov[X_s^*, X_t^* | Θ = θ] = β Cov[ δ^{t−1} ε_1 + (1 − δ) ∑_{r=2}^t δ^{t−r} ε_r,  δ^{s−1} ε_1 + (1 − δ) ∑_{r=2}^s δ^{s−r} ε_r ]
                            = (2δ/(1+δ)) δ^{t+s−2} + ((1−δ)/(1+δ)) δ^{|t−s|},

since tedious calculations and use of the geometric series yield

δ^{t+s−2} + (1 − δ)^2 ∑_{r=2}^{t∧s} δ^{t+s−2r} = δ^{t+s−2} + ((1−δ)/(1+δ)) ( δ^{t+s−2(t∧s)} − δ^{t+s−2} ) = (2δ/(1+δ)) δ^{t+s−2} + ((1−δ)/(1+δ)) δ^{|t−s|}.

Dividing by β completes the proof.
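Lemma 4.6 can also be verified exactly by writing X_t^* − Θ as a linear combination of the noise terms; the sketch below, with illustrative values for t, β and δ, compares the resulting covariance matrix with the closed form.

```python
import numpy as np

# Exact check of Lemma 4.6: write X_t* - Theta as a linear combination of
# eps_1..eps_t and compare the resulting covariance matrix with the closed form
# beta^{-1}( 2 delta/(1+delta) delta^{t+s-2} + (1-delta)/(1+delta) delta^{|t-s|} ).

t, beta, delta = 6, 2.0, 0.7

# L[k, r] = coefficient of eps_{r+1} in X_{k+1}* - Theta (0-based indices)
L = np.zeros((t, t))
for k in range(t):
    L[k, 0] = delta ** k                          # delta^{t-1} on eps_1
    for r in range(1, k + 1):
        L[k, r] = (1 - delta) * delta ** (k - r)  # (1-delta) delta^{t-r} on eps_r

cov_exact = L @ L.T / beta                        # Var[eps_r] = 1/beta, independent

i = np.arange(1, t + 1)
s, tt = np.meshgrid(i, i, indexing="ij")
cov_formula = (2 * delta / (1 + delta) * delta ** (s + tt - 2)
               + (1 - delta) / (1 + delta) * delta ** np.abs(s - tt)) / beta

assert np.allclose(cov_exact, cov_formula)
```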
The question arises whether the induced correlation between the signals affects the equilibria of the coordination game:

• Does a strong correlation between the signals restrain the accumulation of private information and influence the equilibrium structure of the game?

• Does the equilibrium structure of the game depend continuously on δ as it approaches 0 or 1?

Curiously, the induced correlation has no effect at all, as the following proposition states:
Proposition 4.7. Let N(z, α^{-1}) be the prior distribution and

(X_1^*, …, X_t^*)^T ∼ N(θ 1_t, Σ)

the joint distribution of the signals, where δ ∈ (0, 1) and

Σ_{i,j} = β^{-1} ( (2δ/(1+δ)) δ^{i+j−2} + ((1−δ)/(1+δ)) δ^{|i−j|} ).

By means of Bayesian updating, an agent's posterior belief is

N( ( αz + β ( ∑_{r=1}^{t−1} x_r + (x_t − δ x_1)/(1−δ) ) ) / (α + βt),  1/(α + βt) )

distributed conditional on X_1^* = x_1, …, X_t^* = x_t, where

X_1 = X_1^*    and    X_s = (X_s^* − δ X_{s−1}^*)/(1 − δ),    1 < s ≤ t,

are the increments divided by 1 − δ.
Proof. We could prove Proposition 4.7 by inverting the matrix C := α^{-1} 1_t 1_t^T + Σ and applying Theorem 4.2. The inversion of the covariance matrix C is feasible but requires rather tricky decompositions of C and involves lengthy calculations.

We get around the inversion by noting that although the signals are correlated, the increments {X_1, …, X_t} defined in Proposition 4.7 are independent: indeed, X_s = (X_s^* − δ X_{s−1}^*)/(1 − δ) = Θ + ε_s for 1 < s ≤ t. Hence we can apply Proposition 4.5 to the set {X_1, …, X_t}, which immediately completes the proof.
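Since the proof avoids inverting C, it is reassuring to check numerically that a direct application of Theorem 4.2 to the correlated covariance matrix of Lemma 4.6 gives the same posterior as the closed-form expression of Proposition 4.7; all numerical values below are illustrative.

```python
import numpy as np

# Numerical check of Proposition 4.7: the general formulas of Theorem 4.2
# applied to the correlated covariance of Lemma 4.6 coincide with the simple
# expression in terms of the observed signals.

rng = np.random.default_rng(3)
t, alpha, beta, delta, z = 5, 1.2, 2.0, 0.6, 0.1
x = rng.standard_normal(t)               # observed signals x_1*, ..., x_t*

i = np.arange(1, t + 1)
s, tt = np.meshgrid(i, i, indexing="ij")
Sigma = (2 * delta / (1 + delta) * delta ** (s + tt - 2)
         + (1 - delta) / (1 + delta) * delta ** np.abs(s - tt)) / beta

ones = np.ones(t)
c = ones / alpha
C = np.outer(ones, ones) / alpha + Sigma
Cinv = np.linalg.inv(C)

mean_matrix = z + c @ Cinv @ (x - z * ones)       # Theorem 4.2 applied directly
var_matrix = 1 / alpha - c @ Cinv @ c

# Proposition 4.7: posterior in terms of the observable signals
increment_sum = x[:-1].sum() + (x[-1] - delta * x[0]) / (1 - delta)
mean_simple = (alpha * z + beta * increment_sum) / (alpha + beta * t)
var_simple = 1.0 / (alpha + beta * t)

assert np.isclose(mean_matrix, mean_simple)
assert np.isclose(var_matrix, var_simple)
```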
We learn from Proposition 4.7 that the induced correlation between the signals does not influence the Bayesian estimator as long as X_s = Θ + ε_s can be regained from the observable signals. On the other hand, the Bayesian estimator would be affected if we chose the observable signals to be a non-injective function h with

X_s^* = h(X_1, …, X_s),    1 < s ≤ t,

such that there is no inverse allowing us to reclaim the independent variates from {X_1^*, …, X_t^*}.
Obviously, we can construct a function h that induces a correlation structure which alters the Bayesian estimator, but there is no economic justification why h should not be linear. The simple stochastic process X_t^* seems the generic choice, and similar random-walk models are commonly accepted and widely used in economic analysis. We settle on the insight that using correlated signals in the global game model adds no enrichment and that such an extension does not merit further consideration.
4.3  Sufficiency
As we pointed out in section 3.3.3, the mathematical tractability of the dynamic
global game model is due to the fact that agents can summarize all information
in a single one-dimensional, sufficient statistic. In this section we give a brief
introduction to the concept of sufficiency and prove that the statistics used in
the model are indeed sufficient.
4.3.1  Notion of sufficiency
We evaluate the performance of a point estimator by indicators such as variance, bias, consistency, and sufficiency. Loosely speaking, if a point estimator Y(x) for a parameter θ uses all available, relevant information about θ that can be obtained from the sample x, we say the statistic is sufficient with respect to the parameter θ.
In mathematical terms, we call a statistic Y(x) sufficient for an underlying parameter θ if the conditional probability distribution of the data X, given the statistic Y(x), does not depend on θ, i.e.

P[X = x | Y = y, Θ = θ] = P[X = x | Y = y].
This general formulation, however, is hard to prove in applications. We shall
therefore use the factorization criterion:
Theorem 4.8 (Fisher-Neyman factorization theorem). If the probability density
function is f ( x | Θ = θ ), then Y ( x) is sufficient with respect to θ if and only if there
are functions g and h such that
f ( x | Θ = θ ) = g ( x ) h (Y ( x ) , θ ) .
Proof. For a proof of the factorization criterion see e.g. Lehmann (1997), Section 2.6, Theorem 8.
4.3.2  Sufficient statistics in the global game model
Recall that Σ = Cov [ X, X ], C = α−1 1t 1tT + Σ and c = α−1 1t parameterize the
signals in the dynamic global game model.
Proposition 4.9 (Sufficiency of the Bayesian estimator). By Theorem 4.2, the Bayesian point estimator for θ conditional on the event X = x is

Y(x) = z + c^T C^{-1} (x − z 1_t).

The statistic Y(x) is sufficient with respect to θ.
Proof. We need to show that the density f(x | Θ = θ) can be factored into a product of a function g that does not depend on θ and a function h that depends only on θ and Y(x), but not directly on x.

We know that c^T C^{-1} = (α^{-1} − c^T C^{-1} c) 1_t^T Σ^{-1} from equation (4.2) in the proof of Theorem 4.2. Thus it holds that

Y(x) = z (1 − c^T C^{-1} 1_t) + c^T C^{-1} x = z (1 − c^T C^{-1} 1_t) + (α^{-1} − c^T C^{-1} c) 1_t^T Σ^{-1} x,

and we can write 1_t^T Σ^{-1} x in terms of the statistic Y(x) such that

1_t^T Σ^{-1} x = ( Y(x) − z (1 − c^T C^{-1} 1_t) ) / ( α^{-1} − c^T C^{-1} c ).
For convenience of notation, we use the proportionality symbol ∝, meaning that terms are equal up to a constant. The signals are multi-normally distributed, and for the density of X conditional on the event Θ = θ we obtain

f(x | Θ = θ) ∝ exp( −(1/2) (x − θ 1_t)^T Σ^{-1} (x − θ 1_t) )
             ∝ exp( −(1/2) ( x^T Σ^{-1} x + θ^2 1_t^T Σ^{-1} 1_t − 2θ 1_t^T Σ^{-1} x ) )
             ∝ exp( −(1/2) x^T Σ^{-1} x ) exp( −(1/2) ( θ^2 1_t^T Σ^{-1} 1_t − 2θ ( Y(x) − z (1 − c^T C^{-1} 1_t) ) / ( α^{-1} − c^T C^{-1} c ) ) ).

Since this is a factorization of f such that f(x | Θ = θ) = g(x) h(Y(x), θ), we conclude by the factorization criterion that the statistic Y(x) is sufficient with respect to θ.
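Sufficiency can also be illustrated numerically: if two samples x and x' produce the same statistic Y, then log f(x | θ) − log f(x' | θ) must not depend on θ, which is exactly the factorization f = g(x) h(Y(x), θ). The sketch below uses arbitrary illustrative values.

```python
import numpy as np

# Numerical illustration of Proposition 4.9: two samples with the same Y(x)
# yield a likelihood ratio g(x)/g(x2) that is free of theta.

rng = np.random.default_rng(4)
t, alpha, z = 4, 1.3, 0.4
M = rng.standard_normal((t, t))
Sigma = M @ M.T + np.eye(t)              # positive-definite signal covariance
ones = np.ones(t)
c = ones / alpha
C = np.outer(ones, ones) / alpha + Sigma
w = c @ np.linalg.inv(C)                 # row vector c^T C^{-1}

def Y(x):
    return z + w @ (x - z * ones)        # the sufficient statistic

def loglike(x, theta):                   # log f(x | Theta = theta), up to a constant
    d = x - theta * ones
    return -0.5 * d @ np.linalg.inv(Sigma) @ d

x = rng.standard_normal(t)
v = rng.standard_normal(t)
v -= (w @ v) / (w @ w) * w               # project v onto the kernel of w
x2 = x + v                               # a different sample with the same Y
assert np.isclose(Y(x), Y(x2))

diffs = [loglike(x, th) - loglike(x2, th) for th in (-1.0, 0.0, 2.5)]
assert np.allclose(diffs, diffs[0])      # the ratio g(x)/g(x2) is theta-free
```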
Appendix A

Notation and Symbols

Throughout this thesis, vectors and matrices are denoted in bold face; sets, random variables and matrices are denoted by capital letters. Vectors are always defined as column vectors.

The global game model

Γ           A game.
Θ           The status quo from the Bayesian point of view.
θ           The true status quo; realization of Θ.
Y           Sufficient statistic with respect to the status quo θ.
y           Realization of Y.
X           The agent's private signal about the status quo Θ.
x           Realization of X.
β           Precision of the private signal X.
ε           Noise of the private signal.
Z           The public signal about the status quo θ.
z           Realization of Z.
α           Precision of the public signal Z.
ζ           Noise of the public signal.
ω           Random shock.
δ           Variance of the random shock ω.
λ           Fraction of attacking agents.
c           Cost of an attack.
a           Action in the game, a ∈ {0, 1}.
u           Utility function.
ū           Expected utility function.

General mathematics

a ≡ b       a is identical with b.
a ∝ b       a is proportional to b, i.e. equal up to a constant.
N           Set of the natural numbers.
R           Set of the real numbers.
R̄           Closure: R̄ = R ∪ {−∞, +∞}.
x ∨ y       Maximum: x ∨ y = max{x, y}.
x ∧ y       Minimum: x ∧ y = min{x, y}.
diag(x)     Diagonal matrix with main diagonal x.
1_t         Vector of ones: 1_t = (1, …, 1)^T ∈ R^t.
I_t         Unit matrix: I_t = diag(1_t).
M^T         Transpose of matrix M.
M^{-1}      Inverse of matrix M.
|x|         Absolute value of x.
1_A(x)      Indicator function of set A.
argmax_{x ∈ A} f(x)   Some x* ∈ A with f(x) ≤ f(x*) for all x ∈ A.

Probability and statistics

Ω           Probability space.
B_Ω         Borel sigma algebra over Ω.
P           Probability measure on B.
P_X         Probability distribution of the random variable X.
P[A]        Probability of event A.
P[A | B]    Probability of event A conditional on the event B.
E[X]        Expected value of X.
E[X | B]    Expected value of X conditional on the event B.
Var[X]      Variance of the random variable X.
Cov[X, Y]   Covariance between the random variables X and Y.
P           Parameter space.
π(θ)        Density of the prior distribution.
π(θ | x)    Density of the posterior distribution.
N(µ, σ²)    Normal distribution with mean µ and variance σ².
Φ(x)        Cumulative distribution function of N(0, 1).
φ(x)        Density of N(0, 1).
Bibliography

G.-M. Angeletos, C. Hellwig, and A. Pavan. Dynamic global games of regime change: Learning, multiplicity, and the timing of attacks. Econometrica, 75(3):711–756, 2007.

A. Atkeson. Discussion on Morris and Shin. NBER Macroeconomics Annual, 2000.

G. E. P. Box and G. C. Tiao. Bayesian Inference in Statistical Analysis. Addison-Wesley, 1973.

H. Carlsson and E. van Damme. Global games and equilibrium selection. Econometrica, 61:989–1018, 1993.

C. Chamley. Dynamic speculative attacks. American Economic Review, 93:603–621, 2003.

D. W. Diamond and P. H. Dybvig. Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91(3):401–419, 1983.

D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.

H.-O. Georgii. Stochastics. Walter de Gruyter, 3rd edition, 2008.

I. Goldstein and A. Pauzner. Demand deposit contracts and the probability of bank runs. Journal of Finance, 60(3):1293–1327, 2005.

F. Heinemann. Die Theorie globaler Spiele: Private Information als Mittel zur Vermeidung multipler Gleichgewichte. Journal für Betriebswirtschaft, 55(3):209–241, 2005.

F. Heinemann and G. Illing. Speculative attacks: unique equilibrium and transparency. Journal of International Economics, 58:429–450, 2002.

C. Hellwig. Public information, private information, and the multiplicity of equilibria in coordination games. Journal of Economic Theory, 107(2):191–222, 2002.

M. J. Holler and G. Illing. Einführung in die Spieltheorie. Springer, 6th edition, 2005.

E. L. Lehmann. Testing Statistical Hypotheses. Springer, 2nd edition, 1997.

S. Morris and H. S. Shin. Global games: Theory and applications. In Advances in Economics and Econometrics, volume 1, 2003.

S. Morris and H. S. Shin. Coordination risk and the price of debt. European Economic Review, 48:133–153, 2004.

S. Morris and H. S. Shin. Unique equilibrium in a model of self-fulfilling currency attacks. American Economic Review, 88(3):587–597, 1998.

S. Morris and H. S. Shin. A theory of the onset of currency attacks. In Asian Financial Crisis: Causes, Contagion and Consequences. Cambridge University Press, 1999.

M. Obstfeld. Models of currency crises with self-fulfilling features. European Economic Review, 40:1037–1047, 1996.

C. P. Robert. The Bayesian Choice. Springer, 2nd edition, 2001.

E. Zeidler. Nonlinear Functional Analysis and its Applications I: Fixed-Point Theorems. Springer, 1986.