A GAME WITH NO APPROXIMATE BAYESIAN EQUILIBRIA
ZIV HELLMAN
Abstract. Although there have been examples of 3-player Bayesian games with no Bayesian equilibria, it has been an open question whether or not there are games with no approximate Bayesian equilibria. We present an example of a Bayesian game with two players, two actions and a continuum of states that possesses no approximate Bayesian equilibria, thus resolving the question. As a side benefit we also have for the first time an example of a 2-player Bayesian game with no Bayesian equilibria and an example of a strategic-form game with no approximate Nash equilibria. The construction makes use of techniques developed in an example by Y. Levy of a discounted stochastic game with no stationary equilibria.
1. Preliminaries and Notation
1.1. Information Structures and Knowledge.
A space of states is a pair (Ω, B), with a set of states Ω and a σ-field B of
measurable subsets (events) of Ω.
We will work throughout this paper with a two-element set of players
I = {player 1, player 2} and an information structure over the state space
(Ω, B) given by a pair of partitions Π1 and Π2 of Ω. For each state ω ∈ Ω
we denote by Πi (ω) the element in Πi that contains ω. Furthermore, denote
by Γi the sub-σ-algebra of B generated by Πi .
1.2. Types and Priors.
A type function ti of player i for (Ω, B, (Πi)i∈I) is a function ti : Ω → ∆(Ω) from each state to a probability measure over (Ω, B) such that the mapping ti(·) satisfies:
(1) for each fixed event E, the mapping ω ↦ ti(ω)(E) is measurable,
Date: This version: May 2012.
The Department of Mathematics and the Centre for the Study of Rationality, The Hebrew University of Jerusalem, email: [email protected]. This research was supported in part by the European Research Council under the European Commission's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 249159.
(2) ti(ω)(Πi(ω)) = 1,
(3) ti(ω) = ti(ω′) for all ω′ ∈ Πi(ω).
For each ω, ti(ω) is called player i's type at ω. A quintuple (I, Ω, B, (Πi)i∈I, (ti)i∈I), where each ti is a type function, is called a type space.
A probability measure µi over (Ω, B) is a prior for a type function ti if
for each event A
(1.1)    µi(A) = ∫Ω ti(ω)(A) dµi(ω).
A probability measure µ that is a prior for each type function in a type space
is a common prior.
1.3. Ergodic Games.
Recall that a Bayesian game over a type space is an assignment of a
strategic-form game to each state in Ω, i.e., for each player i and state ω
there is an action set Aωi and a payoff function uωi : Aω → R, where Aω =
×i∈I Aωi is the set of action profiles at ω. Further, a Bayesian game satisfies
the constraint that Aωi = Aω′i for each pair of states ω, ω′ such that ω′ ∈
Πi (ω). An ε-Bayesian equilibrium ψ of a Bayesian game is measurable if
ψi is a measurable function over the domain Ω for each i ∈ I.
Simon (2003) defines a subclass of the class of Bayesian games called
ergodic games. An ergodic game satisfies the following conditions:
(1) there are finitely many players,
(2) each action set Aωi of each player i is finite,
(3) the state space Ω is a compact Polish space with a common prior µ
that is an atomless Borel probability distribution,
(4) each player’s payoff function is a continuous function over Ω,
(5) each player’s type ti (ω) at every state ω is a discrete probability
distribution with a finite support set, and the type function ti is a
continuous function,
(6) if B is a Borel set such that B ∈ Γi for all i ∈ I then µ(B) is equal
either to zero or one.
We also recall here the definition of the agent game K associated with a
Bayesian game B. The game K is a strategic-form game in which there is a
bijection η between the set of agents of K and the partition elements of B.
The action set of each player θ in K is equal to the action set of the player
j in B associated with η(θ), and the payoff to player θ for an action profile
is the corresponding expected payoff of j at η(θ). Every strategy ψ̂ of B is
naturally associated in this way with a strategy ψ in K.
As first shown by Harsányi (1967), the analysis of the equilibria of a Bayesian game B can be accomplished by analysing the associated strategic-form game K, in the sense that a strategy ψ in K is a Nash equilibrium if and only if ψ̂ is a Bayesian equilibrium in B. The extension of this statement to ε-Nash equilibria and ε-Bayesian equilibria is immediate.
2. Constructing the Ergodic Game
We define in this section an ergodic game B that we will eventually show
is an example of a game with no ε-Bayesian equilibrium.
2.1. The State Space.
Let X be the set of infinite sequences of 1 and −1, i.e. X := {−1, 1}N∪{0} .
An element x ∈ X is a sequence x0 , x1 , x2 , . . ., where we denote the i-th
coordinate of x by xi .
Let Y := {e, o}. The space of states of our example is Ω := X × Y ;
i.e., every state ω ∈ Ω is of the form (x, e) or (x, o), where x ∈ X. Given a state ω, we will write ω i to stand for xi, where x is the projection of ω on X. The projection of ω ∈ Ω to Y will be called the parity of ω and we will say that a state ω ∈ Ω is even if this projection equals e and odd if this projection equals o.¹ This naturally leads to defining Even := X × {e} and Odd := X × {o}, with Ω = Even ∪ Odd.
Let µX denote the Lebesgue measure on X; that is, the (1/2, 1/2)-Bernoulli measure on X giving equal probability independently to −1 and 1 in all coordinate positions. Let µY be the elementary counting measure on the
two-element set {e, o}. Then the measure we will use for Ω is the product
measure µ = µX × µY over the product σ-algebra. Note that in particular
this means that the sets Even and Odd are measurable sets.
Notation 1.
• For each ω ∈ Ω, let κ(ω) := ω 0 be the 0-coordinate of the
projection of ω on X. The interpretation we will adopt is that κ(ω)
is the ‘state of nature’ at the state of the world ω.
• Denote by j the sole non-trivial involution on Y , namely j(e) = o
and j(o) = e.
¹ The terms odd and even here have nothing to do with the usual usage of these words as qualifiers of integers; this is just a convenient way of distinguishing between two sorts of states. The choice of the terms was inspired by the action of the S operator introduced in this section, which maps odd states to even states and even states to odd states alternately, reminiscent of the way the successor operator on the integers maps odd integers to even integers and even integers to odd integers.
• Denote by ι the measure-preserving involution on X defined by
ι(x0, x1, x2, . . .) := (−1 · x0, x1, x2, . . .).
• Denote by S the standard left-shift operator on X, i.e.,
S(x0, x1, x2, . . .) := (x1, x2, . . .),
which is well known to be a measurable and measure-preserving transformation.²
• Id will denote the identity mapping.
The transformations defined above for X and Y are extended to Ω as
follows.
Notation 2.
• Define ι to be the measure-preserving map on Ω given by
(2.1)    ι := ι × Id.
By definition, ι is parity preserving. For each ω ∈ Ω, we will call
ι(ω) the twin of ω.
• Define S to be the measure-preserving map on Ω given by
(2.2)    S := S × j.
Note that we are explicitly defining S in a way that does not preserve
parity. The mapping S is almost everywhere a two-to-one function;
its inverse, S −1 , which maps almost all states to a pair of states, also
fails to preserve parity onto its image.
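The operators defined above are easy to experiment with on truncated sequences. The following is an illustrative sketch of my own (not from the paper), modelling a state as a pair (prefix, parity), where the prefix is a finite tuple of ±1 entries standing for the first coordinates x0, x1, . . . of an element of X:

```python
def iota(state):
    """The involution iota: flip the 0-coordinate, keep the parity."""
    (x, parity) = state
    return ((-x[0],) + x[1:], parity)

def shift(state):
    """The map S = (left shift) x j: drop x0 and flip the parity."""
    (x, parity) = state
    j = {"e": "o", "o": "e"}
    return (x[1:], j[parity])

def shift_preimages(state):
    """S^{-1}: almost every state has exactly two preimages, one per
    choice of the lost 0-coordinate; both have the opposite parity."""
    (x, parity) = state
    j = {"e": "o", "o": "e"}
    return [((s,) + x, j[parity]) for s in (1, -1)]

omega = ((1, -1, 1, -1), "e")
assert iota(iota(omega)) == omega            # iota is an involution
assert iota(omega)[1] == "e"                 # iota preserves parity
assert shift(omega)[1] == "o"                # S does not preserve parity
assert shift(iota(omega)) == shift(omega)    # twins share the same S-image
assert all(shift(w) == omega for w in shift_preimages(omega))
```

The last two assertions illustrate why S is almost everywhere two-to-one: a state and its twin are collapsed to the same image, and S⁻¹ recovers exactly that twin pair.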
2.2. The Knowledge Partitions and Type Functions.
We next define the partitions of player 1 and player 2.
For ω ∈ Even:
Π1(ω) = {ω, ι(ω), S(ω)}
Π2(ω) = {ω} ∪ S−1(ω)
For ω ∈ Odd:
Π1(ω) = {ω} ∪ S−1(ω)
Π2(ω) = {ω, ι(ω), S(ω)}
One way to visualise this is as follows: generically, a partition element
of player 1 looks like
ω = e; x0 , x1 , x2 , . . .
ι(ω) = e; −x0 , x1 , x2 , . . .
S(ω) = o; x1 , x2 , x3 , . . .
² See, for example, Halmos (1956).
while generically a partition element of player 2 looks like
ω = o; x0 , x1 , x2 , . . .
ι(ω) = o; −x0 , x1 , x2 , . . .
S(ω) = e; x1 , x2 , x3 , . . .
By definition each partition element Π1(ω) contains two (twin) even states and one odd state, while each partition element Π2(ω) contains two (twin) odd states and one even state. Note that all the elements in Πi(ω), for i = 1, 2 and for all ω, are distinct from each other: ω and ι(ω) are by construction distinct at their 0-th coordinate and S(ω) ≠ ω because S does not preserve parity. It follows that there is a bijection η between Ω and the collection of types {Π1(ω)}ω∈Ω ∪ {Π2(ω)}ω∈Ω as follows:
(2.3)    η(ω) = Π1(ω) if ω is odd; Π2(ω) if ω is even.
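The partition structure just described can be checked in the same finite-prefix style. This is an illustrative sketch of my own (the helper names `Pi`, `iota`, `shift` are mine, not notation from the paper):

```python
# Build the partition elements Pi_1 and Pi_2 on truncated states
# (prefix, parity) and check the three-element structure from the text.

j = {"e": "o", "o": "e"}
iota = lambda w: ((-w[0][0],) + w[0][1:], w[1])
shift = lambda w: (w[0][1:], j[w[1]])
shift_pre = lambda w: [(((s,) + w[0]), j[w[1]]) for s in (1, -1)]

def Pi(i, w):
    """Partition element of player i at state w = (prefix, parity)."""
    parity = w[1]
    if (i, parity) in ((1, "e"), (2, "o")):
        return {w, iota(w), shift(w)}     # the {w, iota(w), S(w)} form
    return {w} | set(shift_pre(w))        # the {w} u S^{-1}(w) form

even = ((1, -1, 1), "e")
odd = ((1, -1, 1), "o")

assert len(Pi(1, even)) == 3 and len(Pi(2, odd)) == 3
# Pi_1 of an even state: two twin even states and one odd state.
assert sorted(p for _, p in Pi(1, even)) == ["e", "e", "o"]
# Pi_2 of an even state: the state itself plus its two odd S-preimages.
assert sorted(p for _, p in Pi(2, even)) == ["e", "o", "o"]
```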
As for the type functions, for ω ∈ Even:
t1(ω)(ω) = 1/4, t1(ω)(ι(ω)) = 1/4, t1(ω)(S(ω)) = 1/2,
while for ω ∈ Odd:
t2(ω)(ω) = 1/4, t2(ω)(ι(ω)) = 1/4, t2(ω)(S(ω)) = 1/2.
At the remaining states the common prior condition dictates the values: for ω ∈ Odd, t1(ω)(ω) = 1/2 and t1(ω)(ω′) = 1/4 for each of the two states ω′ ∈ S−1(ω), and symmetrically, for ω ∈ Even, t2(ω)(ω) = 1/2 and t2(ω)(ω′) = 1/4 for each ω′ ∈ S−1(ω).
Lemma 2.1. The functions t1 and t2 satisfy the conditions for being type
functions with µ as their common prior.
The proof of Lemma 2.1 is in the appendix.
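As a small sanity check on Lemma 2.1, Equation (1.1) can be verified by hand for the event A = {ω ∈ Even | ω 0 = 1}: t1(ω)(A) works out to 1/4 at every state, so its integral against µ equals µ(A) = 1/2. The sketch below is my own illustration (the odd-state values of t1, namely 1/2 on the state itself and 1/4 on each of the two S-preimages, are the values forced by the common prior condition, and are an assumption of this sketch):

```python
from fractions import Fraction as F

# t1(omega)(A) for A = {omega even : omega^0 = 1} depends only on the
# 0-coordinate and the parity of omega, so we can enumerate exactly.

def t1_of_A(x0, parity):
    if parity == "e":
        # Pi_1(omega) = {omega, iota(omega), S(omega)} with weights
        # 1/4, 1/4, 1/2; exactly one of the two twins has 0-coordinate 1,
        # and S(omega) is odd, hence not in A.
        return F(1, 4)
    # omega odd: weight 1/2 on omega (odd, not in A) and 1/4 on each of
    # the two even S-preimages, of which exactly one has 0-coordinate 1.
    return F(1, 4)

# mu = muX x muY with muY the counting measure on {e, o}; the integral
# of t1(.)(A) splits over parity, muX giving weight 1/2 to each x0.
integral = sum(F(1, 2) * t1_of_A(x0, p) for p in ("e", "o") for x0 in (1, -1))
mu_A = F(1, 2)  # muX(x0 = 1) * muY({e}) = 1/2 * 1

assert integral == mu_A  # Equation (1.1) holds for this event
```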
2.3. The Game Forms.
The action set we will work with is identical at every state for both players: Aω1 = Aω2 = {L, R}, for all ω ∈ Ω.
The payoff function is more complicated, and before defining it formally
we will explain in words the intuitive properties that we want it to satisfy:
(i) at even states Player 2 is ‘profiting’ while Player 1 is ‘disinterested’,
receiving nothing no matter what action he plays;
(ii) at odd states Player 1 is ‘profiting’ while Player 2 is ‘disinterested’,
receiving nothing no matter what action she plays;
(iii) the action of the disinterested player, however, is significant, because the payoff of the profiting player at each state of the world
depends on the state of nature and on the actions of both players:
(a) if the state of nature is 1 and the disinterested player is highly
likely to choose L then the profiting player will want to match
the disinterested player’s expected action;
(b) if the state of nature is −1 and the disinterested player is highly
likely to choose L then the profiting player will want to mismatch the disinterested player’s expected action.
The formal definition of the payoff functions uω1 and uω2 for pure actions
is given by the following rules, where aj (ω) denotes an action of player j
at ω and as before κ(ω) is the state of nature at ω.
If ω is an even state, then uω1 (a) = 0 for any a ∈ {L, R}, and uω2 is as
depicted in Table 1.
If κ(ω) = 1:
                     a1(ω) = L    a1(ω) = R
uω2(L, a1(ω)) =          1            0
uω2(R, a1(ω)) =         0.3          0.3

If κ(ω) = −1:
                     a1(ω) = L    a1(ω) = R
uω2(L, a1(ω)) =         0.7          0.7
uω2(R, a1(ω)) =          1            0

TABLE 1. The game form at an even state
If ω is an odd state, then uω2 (a) = 0 for any a ∈ {L, R}, and uω1 is as
depicted in Table 2.
If κ(ω) = 1:
                     a2(ω) = L    a2(ω) = R
uω1(L, a2(ω)) =          1            0
uω1(R, a2(ω)) =         0.3          0.3

If κ(ω) = −1:
                     a2(ω) = L    a2(ω) = R
uω1(L, a2(ω)) =         0.7          0.7
uω1(R, a2(ω)) =          1            0

TABLE 2. The game form at an odd state
2.4. Reduction to an Agent Game.
To form the agent game K corresponding to B, we use the bijection between Ω and the types in B given by Equation (2.3); we can therefore uniquely associate each agent of K with a state ω ∈ Ω. Abusing notation, we will use ω to identify both the state and the agent associated with that state, trusting that this will cause no confusion. Since the agent set of K is Ω, we will use the same measure µ over (Ω, B) in both B and K.
The agents in K all share the same action set, Aω = {L, R}, hence each
strategy profile ψ of K is given by ψ : Ω → ∆({L, R}).
The pure action payoff function uω of each agent ω depends only on the
action a(ω) chosen by agent ω and the action a(S(ω)) of agent S(ω); in
effect, each agent ω perceives himself or herself as playing a game against
S(ω).
To see this, consider for example Player 1’s perspective when the true
state of the world is an even state ω. The relevant partition element is
Π1 (ω) = {ω, ι(ω), S(ω)}. By construction, S(ω) is an odd state. Since
Player 1’s action choice – which we recall must be uniform in Π1 (ω) – can
lead to a non-zero payoff for him only at S(ω), for calculating his expected
payoff at Π1 (ω) he needs only to take into account the state of nature associated with S(ω) and accordingly to plan a best reply to Player 2’s actions
at S(ω).
Explicitly, for each agent ω, the payoff uω is defined in Table 3, as derived
from Tables 1 and 2.
If κ(ω) = 1:
                       a(S(ω)) = L    a(S(ω)) = R
uω(L, a(S(ω))) =            1              0
uω(R, a(S(ω))) =           0.3            0.3

If κ(ω) = −1:
                       a(S(ω)) = L    a(S(ω)) = R
uω(L, a(S(ω))) =           0.7            0.7
uω(R, a(S(ω))) =            1              0

TABLE 3. The pure strategy profile payoff function uω in K
Notation 3. Given a mixed strategy profile ψ in K, denote by
(2.4)    q(ω) := ψ(ω)[L]
the probability that agent ω chooses L; since the probability of choosing R is 1 − q(ω) this is sufficient to determine ψ(ω).
We now use this to write out explicitly the expected payoff E ω (ψ) for
each agent ω under a mixed strategy profile ψ, as determined by the pure
strategy profile payoff in Table 3:
(2.5)    Eω(ψ) = q(ω)q(S(ω)) + (3/10)(1 − q(ω))    if κ(ω) = 1;
         Eω(ψ) = (7/10)q(ω) + (1 − q(ω))q(S(ω))    if κ(ω) = −1.
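The closed form (2.5) can be checked against Table 3 by direct enumeration of the four pure action profiles. The following sketch is my own illustration, using exact rational arithmetic:

```python
from itertools import product
from fractions import Fraction as F

def pure_payoff(kappa, a, a_succ):
    """u^omega(a, a(S(omega))) from Table 3; actions are 'L' or 'R'."""
    if kappa == 1:
        if a == "L":
            return F(1) if a_succ == "L" else F(0)
        return F(3, 10)                      # R pays 0.3 regardless
    # kappa == -1
    if a == "L":
        return F(7, 10)                      # L pays 0.7 regardless
    return F(1) if a_succ == "L" else F(0)

def expected(kappa, q, q_succ):
    """E^omega(psi) by enumerating the four pure action profiles."""
    prob = lambda a, p: p if a == "L" else 1 - p
    return sum(prob(a, q) * prob(b, q_succ) * pure_payoff(kappa, a, b)
               for a, b in product("LR", repeat=2))

def closed_form(kappa, q, q_succ):
    """Equation (2.5)."""
    if kappa == 1:
        return q * q_succ + F(3, 10) * (1 - q)
    return F(7, 10) * q + (1 - q) * q_succ

grid = [F(n, 10) for n in range(11)]
assert all(expected(k, q, qs) == closed_form(k, q, qs)
           for k in (1, -1) for q in grid for qs in grid)
```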
The definition of the mixed strategy profile ψ̂ in B associated with a
mixed strategy profile ψ in K explicitly gives
ψ(ω) = ψ̂1(ω) if ω is odd; ψ̂2(ω) if ω is even.
We use this in the proof of Lemma 2.2, which is in the appendix.
Lemma 2.2. A strategy profile ψ̂ = (ψ̂1 , ψ̂2 ) in B is µ-measurable if and
only if its associated profile of mixed strategies ψ in K is µ-measurable.
2.5. Observations on Equilibria in K.
It follows from Lemma 2.2 and the Harsányi equivalence between ε-Bayesian equilibria in B and ε-Nash equilibria in K that to show that there is no measurable ε-Bayesian equilibrium in B it suffices to show that there is no measurable ε-Nash equilibrium in K.
From here to the end of the paper, fix the following values:
0 < ε < 1/50,    δ = 10ε < 1/5.
Assume, by way of contradiction, that there exists a measurable ε-Nash equilibrium in K, denoted by ψ.
Lemma 2.3. Let ω ∈ Ω. If κ(ω) = 1 then
q(S(ω)) ≤ 1/5 =⇒ q(ω) < δ,    q(S(ω)) ≥ 2/5 =⇒ q(ω) > 1 − δ,
and if κ(ω) = −1 then
q(S(ω)) ≤ 3/5 =⇒ q(ω) > 1 − δ,    q(S(ω)) ≥ 4/5 =⇒ q(ω) < δ.
Proof. The proof is shown for κ(ω) = 1; the other case follows similar reasoning. Suppose that q(S(ω)) = 1/5. Agent ω's expected payoff, obtained by placing q(S(ω)) = 1/5 in Equation (2.5), is
Eω(ψ) = (1/5)q(ω) + (3/10)(1 − q(ω)).
Hence 1/5 ≤ Eω(ψ) ≤ 3/10, and the best reply is for agent ω to play L with probability 0. Weakening the requirement to an ε-best reply implies that q(ω)/10 < ε, implying that q(ω) < 10ε = δ. The same reasoning extended to any q(S(ω)) < 1/5 a fortiori leads to the conclusion that q(ω) < δ.
Now suppose that q(S(ω)) = 2/5. Agent ω's expected payoff is then
Eω(ψ) = (2/5)q(ω) + (3/10)(1 − q(ω)),
leading to 3/10 ≤ Eω(ψ) ≤ 2/5. The best reply is for agent ω to play L with probability 1. Weakening the requirement to an ε-best reply implies that (1 − q(ω))/10 < ε, i.e., 1 − q(ω) < δ and therefore q(ω) > 1 − δ. The same result holds for q(S(ω)) > 2/5.
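The two threshold computations in this proof are easy to verify numerically. The sketch below is my own illustration, using ε = 1/100 as a representative value satisfying 0 < ε < 1/50:

```python
from fractions import Fraction as F

eps = F(1, 100)          # any 0 < eps < 1/50 works here
delta = 10 * eps

def E(q, q_succ):
    """Expected payoff (2.5) for kappa(omega) = 1."""
    return q * q_succ + F(3, 10) * (1 - q)

# q(S(omega)) = 1/5: the payoff is 3/10 - q/10, maximised at q = 0,
# so an eps-best reply forces q/10 < eps, i.e. q < delta.
qs = F(1, 5)
best = max(E(F(n, 100), qs) for n in range(101))
assert best == E(0, qs) == F(3, 10)
assert best - E(delta, qs) == eps      # the payoff gap at q = delta is exactly eps
assert all(best - E(q, qs) < eps for q in (0, delta / 2, delta * F(9, 10)))

# q(S(omega)) = 2/5: the payoff is 3/10 + q/10, maximised at q = 1,
# so an eps-best reply forces (1 - q)/10 < eps, i.e. q > 1 - delta.
qs = F(2, 5)
best = max(E(F(n, 100), qs) for n in range(101))
assert best == E(1, qs) == F(2, 5)
assert best - E(1 - delta, qs) == eps
```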
An agent ω ∈ Ω will be called L-quasi-pure (respectively, R-quasi-pure) in ψ if q(ω) > 1 − δ (respectively, q(ω) < δ). If ω is either L- or R-quasi-pure, we may refer to ω as being quasi-pure (q.p. for short) without further qualification.
Denote ΞL = {ω | ω is L-quasi-pure}, ΞR = {ω | ω is R-quasi-pure}
and ΞM = Ω \ (ΞL ∪ ΞR ).
Lemma 2.4. If S(ω) ∈ (ΞL ∪ ΞR ) then ω ∈ (ΞL ∪ ΞR ). If S(ω) ∈ ΞL then
ω ∈ ΞL only if κ(ω) = 1 and similarly if S(ω) ∈ ΞR then ω ∈ ΞR only if
κ(ω) = 1.
Proof. The proof follows by repeated application of Lemma 2.3.
• S(ω) L-q.p., κ(ω) = 1 =⇒ q(S(ω)) > 1 − δ > 2/5 and q(ω) > 1 − δ.
• S(ω) L-q.p., κ(ω) = −1 =⇒ q(S(ω)) > 1 − δ > 4/5 and q(ω) < δ.
• S(ω) R-q.p., κ(ω) = 1 =⇒ q(S(ω)) < δ < 1/5 and q(ω) < δ.
• S(ω) R-q.p., κ(ω) = −1 =⇒ q(S(ω)) < δ < 3/5 and q(ω) > 1 − δ.
Lemma 2.5. For all ω ∈ Ω, at least one of the two states in S −1 (ω) is in
ΞL or ΞR .
Proof. One of the following inequalities must always hold:
q(S(ω)) > 2/5    or    q(S(ω)) < 3/5.
Suppose that the left inequality holds. Lemma 2.3 then shows that if ω′ ∈ S−1(ω) satisfies κ(ω′) = 1 then q(ω′) > 1 − δ and hence ω′ ∈ ΞL (alternatively, if q(S(ω)) ≥ 4/5 and κ(ω′) = −1 then ω′ ∈ ΞR). If the right inequality holds, we similarly deduce that if ω′′ ∈ S−1(ω) satisfies κ(ω′′) = −1 then again ω′′ ∈ ΞL.
2.6. Nonexistence of ε-Equilibria.
The Main Theorem. The game B has no measurable ε-Bayesian equilibria.
Proof. Although the following statement does not appear as an explicit theorem in Levy (2012), it is in effect what is proved by putting together the results of Lemma 4.0.8, Lemma 4.0.9 and Proposition 4.0.10 of that paper:
Let F be a finite set and τ be a permutation of F . Denote Θ := X × F
and let S denote the measure-preserving operator S := S × τ on Θ (cf.
Equation (2.2) above). Then there cannot exist a decomposition of Θ into
three disjoint measurable subsets ΞR , ΞL and ΞM satisfying
(1) Θ = ΞR ∪ ΞL ∪ ΞM up to a null set;
(2) S(ω) ∈ (ΞL ∪ ΞR ) implies ω ∈ (ΞL ∪ ΞR ), S(ω) ∈ ΞL implies
ω ∈ ΞL only if κ(ω) = 1 and S(ω) ∈ ΞR implies ω ∈ ΞR only if
κ(ω) = 1;
(3) for almost all θ ∈ Θ at least one of the two elements in S −1 (θ) is in
ΞR or ΞL .
But this is exactly the situation we have developed for K: Y is a finite set and Ω := X × Y, with j a permutation of Y. Assuming the existence of a measurable ε-Nash equilibrium for K led to measurable sets ΞL, ΞR and ΞM as defined above partitioning Ω and to Lemmas 2.4 and 2.5. This is a contradiction; hence K has no measurable ε-Nash equilibrium, and therefore B has no measurable ε-Bayesian equilibrium.
2.7. Robustness to Perturbations.
For γ > 0, a γ-perturbation of a Bayesian game B is a Bayesian game B′ over the same type space and action sets, with a set of payoff functions vωi satisfying ‖vωi − uωi‖∞ < γ for all i ∈ I.
Fixing a game B′ that is an ε-perturbation of the ergodic game B defined above, it is straightforward to show that the inequalities in the proofs of Lemmas 2.3, 2.4 and 2.5 continue to hold with respect to the agent game of B′ (we need to restrict to ε-perturbations because if we perturb a payoff from 0 to −ε, agent ω can still concentrate only on the actions of S(ω) and ignore the other states in calculating an ε-best reply). We therefore immediately have the following corollary.
the following corollary.
Corollary 2.1. Any ε-perturbation of the game B has no ε-Bayesian equilibria.
3. Final Remarks
An ε-Harsányi equilibrium of a Bayesian game with a common prior µ
is a profile of mixed strategies Ψ = (Ψi )i∈I such that for each player i and
any unilateral deviation strategy Ψ̂i,
∫Ω uωi(Ψ(ω)) dµ(ω) ≥ ∫Ω uωi(Ψ̂i(ω), Ψ−i(ω)) dµ(ω) − ε.
Simon (2003) shows that the existence of a measurable 0-Harsányi equilibrium implies the existence of a measurable 0-Bayesian equilibrium, but
this does not hold for ε > 0. The example in this paper of a game without a
measurable ε-Bayesian equilibrium does not, therefore, imply that there is
no ε-Harsányi equilibrium. We leave the open question of whether or not
there are examples of games that have no measurable ε-Harsányi equilibria
for future research.
4. Appendix
Proof of Lemma 2.1. By construction, ti(ω)(Πi(ω)) = 1 for all ω and ti(ω) = ti(ω′) for ω′ ∈ Πi(ω). Two more items need to be checked: that for each event A, ti(·)(A) is measurable, and that µ(A) = ∫Ω ti(ω)(A) dµ(ω).
We will prove these for i = 1, with the proof for i = 2 conducted similarly.
Fix an event A. Denote Ac := Ω \ A, OA := A ∩ Odd and EA := A ∩ Even.
Let r ∈ [0, 1]. If r ≥ 3/4, then
{ω | t1(ω)(A) > r} = (OA ∩ S(EA ∩ ι(EA))) ∪ (EA ∩ ι(EA) ∩ S−1(OA)),
which is measurable. Similarly, if 1/2 ≤ r < 3/4, then
{ω | t1(ω)(A) > r} = (OA ∩ S(EA)) ∪ (EA ∩ S−1(OA)) ∪ (ι(EA) ∩ S−1(OA));
if 1/4 ≤ r < 1/2, then
{ω | t1(ω)(A) > r} = OA ∪ S(EA ∩ ι(EA)) ∪ (EA ∩ ι(EA)) ∪ S−1(OA);
and if r < 1/4, then
{ω | t1(ω)(A) > r} = OA ∪ EA ∪ ι(EA) ∪ S−1(OA) ∪ S(EA).
Next, we divide up A as follows:
A1 := {ω ∈ A | ω ∈ Even and ι(ω), S(ω) ∉ A},
A2 := {ω ∈ A | ω ∈ Even, and ι(ω) ∈ A, S(ω) ∉ A},
A3 := {ω ∈ A | ω ∈ Even, and ι(ω), S(ω) ∈ A},
A4 := {ω ∈ A | ω ∈ Odd, and S−1(ω) ⊂ Ac},
A5 := {ω ∈ A | ω ∈ Odd, and S−1(ω) ∩ A ≠ ∅, S−1(ω) ⊄ A},
A6 := {ω ∈ A | ω ∈ Odd, and S−1(ω) ⊂ A}.
The sets (Aj) are all disjoint from each other. The proof proceeds by showing that µ(Aj) = ∫Ω t1(ω)(Aj) dµ(ω) for each 1 ≤ j ≤ 6, which is straightforward but tedious. We show how it is accomplished in two of the cases, trusting that the technique for the rest of the cases will be clear enough.
Case A1 . Define B := A1 ∪ ι(A1 ) ∪ S(A1 ∪ ι(A1 )). By the measure-preserving
properties of ι and S, one has µ(ι(A1 )) = µ(A1 ) and
µ(S(A1 ∪ ι(A1 ))) = µ(A1 ∪ ι(A1 )),
hence µ(B) = 4µ(A1). On the other hand,
∫Ω t1(ω)(A1) dµ(ω) = ∫B t1(ω)(A1) dµ(ω) = ∫B (1/4) dµ(ω) = µ(B)/4,
leading to the conclusion that ∫Ω t1(ω)(A1) dµ(ω) = µ(A1).
Case A5. Define C := A5 ∪ S−1(A5). Using similar reasoning as in the previous case, relying on the measure-preserving properties, one deduces that µ(C) = 2µ(A5). On the other hand,
∫Ω t1(ω)(A5) dµ(ω) = ∫C t1(ω)(A5) dµ(ω) = ∫C (1/2) dµ(ω) = µ(C)/2,
hence ∫Ω t1(ω)(A5) dµ(ω) = µ(A5).
Proof of Lemma 2.2. Let A be a Borel subset of R.
A strategy profile ψ̂ in B is measurable iff both ψ̂1 and ψ̂2 are measurable, in which case both ψ̂1−1(A) and ψ̂2−1(A) are µ-measurable subsets of Ω. Recalling that the sets Even and Odd are µ-measurable sets, it then suffices in one direction to note that
ψ−1(A) = (ψ̂1−1(A) ∩ Odd) ∪ (ψ̂2−1(A) ∩ Even).
To complete the proof, in the other direction, if the profile ψ in K is µ-measurable we have
ψ̂1−1(A) = (ψ−1(A) ∩ Odd) ∪ S−1(ψ−1(A) ∩ Odd)
and
ψ̂2−1(A) = (ψ−1(A) ∩ Even) ∪ S−1(ψ−1(A) ∩ Even).
References
Halmos, P. (1956), Lectures on Ergodic Theory, New York, Chapman and Hall/CRC.
Harsányi, J. C. (1967), Games with Incomplete Information Played by Bayesian Players, Management Science, 14, 159–182.
Levy, Y. (2012), A Discounted Stochastic Game with No Stationary Nash Equilibrium, Discussion Paper 596, The Centre for the Study of Rationality, Hebrew University of Jerusalem.
Simon, R. S. (2003), Games of Incomplete Information, Ergodic Theory and the Measurability of Equilibria, Israel Journal of Mathematics, 138, 73–92.