
Complexity and Mixed Strategy Equilibria:
Supplemental Material
Tai-Wei Hu∗
Northwestern University

∗ E-mail: [email protected]; Corresponding address: 2001 Sheridan Road, Jacobs Center 548, Evanston, IL 60208-0001, United States; Phone: 847-467-1867.
The supplemental material gives four results: the first establishes the genericity of the mutual complexity condition (Proposition 1.2); the second shows that, for any oracle θ ∈ {0,1}^ℕ, a θ-incompressible sequence can be used to compute θ-random sequences for any measure that is independent across periods (Theorem 2.1); the third shows that, for any such θ-random sequence, a version of the Law of the Iterated Logarithm holds (Theorem 3.1); the fourth extends the equilibrium existence result in [2] to general finite N-player games (Theorem 4.2). These results rely on an equivalent notion of θ-randomness due to Martin-Löf [3].
1 Martin-Löf randomness
Here I introduce Martin-Löf randomness [3] (henceforth ML randomness). This concept defines (algorithmically) random sequences in terms of statistical regularities. Its definition begins with a formulation of idealized statistical tests, defined as follows. Let X be a finite set. The set X^ℕ of infinite sequences over X is endowed with the product topology. Any open set can be written as a union of basic sets, where a basic set has the form N_σ = {ζ ∈ X^ℕ : σ ≺ ζ} for some σ ∈ X^*.
Definition 1.1. Let X be a finite set and let θ ∈ {0,1}^ℕ be an oracle. Suppose that µ is a computable probability measure over X^ℕ, i.e., the mapping σ ↦ µ(N_σ) is computable. A sequence of open sets {V_t}_{t=0}^∞ is a µ-test relative to θ if it satisfies the following conditions:

(1) The sequence {V_t} is θ-effective: there is a θ-computable function f : ℕ → ℕ × X^* such that for all t ∈ ℕ, V_t = ⋃{N_σ : (∃n)(f(n) = (t, σ))}.

(2) For all t ∈ ℕ, µ(V_t) ≤ 2^{−t}.

A sequence ξ ∈ X^ℕ is ML-random relative to θ for µ if it passes all µ-tests relative to θ, i.e., for any µ-test {V_t}_{t=0}^∞ relative to θ, ξ ∉ ⋂_{t=0}^∞ V_t.
A test {V_t}_{t=0}^∞ is used to establish the statistical regularity corresponding to the complement of ⋂_{t=0}^∞ V_t. Conditions (1) and (2) require that this test, as a sequence of open sets, be generated effectively from θ, and hence be constructive with respect to θ. A sequence is ML-random if it passes all such tests.
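As a concrete illustration (a standard textbook example, not one taken from [3]), consider the uniform measure µ_(1/2,1/2) on {0,1}^ℕ and the test
\[
V_t = N_{0^t} = \{\zeta \in \{0,1\}^{\mathbb N} : \zeta_0 = \dots = \zeta_{t-1} = 0\}, \qquad \mu_{(1/2,1/2)}(V_t) = 2^{-t}.
\]
This test is computable, hence θ-effective for every oracle θ, and the all-zero sequence lies in every V_t; it therefore fails the test and is not ML-random, whereas any ML-random sequence must eventually contain a 1.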
ML-randomness relative to θ is equivalent to θ-randomness defined in the main paper [2] using betting functions. This equivalence holds for any computable measure µ, but for the purposes here I focus only on computable measures of the form µ_p: given a computable sequence p = (p^0, p^1, ...) of probability measures over X, µ_p(N_σ) = ∏_{t=0}^{|σ|−1} p^t[σ_t] for any σ ∈ X^*. To state this equivalence, I first extend the notion of betting functions to measures of the form µ_p.
Definition 1.2. Let X be a finite set and let p = (p^0, p^1, ...) be a computable sequence of distributions over X such that p^t[x] > 0 for all x ∈ X and all t ∈ ℕ. A function B : X^* → ℝ_+ is a betting function for µ_p if for all σ ∈ X^*, B(σ) = ∑_{x∈X} p^{|σ|}[x] B(σx). For any oracle θ, a betting function B for µ_p is θ-effective if B can be θ-computably approximated from below. A betting function B succeeds over a sequence ξ ∈ X^ℕ if lim sup_{n→∞} B(ξ[n]) = ∞.
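For concreteness, the following minimal Python sketch (my own illustration, not part of the paper) exhibits a betting function for the i.i.d. uniform measure on X = {0,1} — it doubles its capital on outcome 1 and loses everything on outcome 0 — and checks the fairness condition of Definition 1.2 on a few strings.

from fractions import Fraction

def B(sigma: str) -> Fraction:
    """Capital after observing the finite string sigma (B of the empty string is 1)."""
    capital = Fraction(1)
    for bit in sigma:
        capital *= 2 if bit == "1" else 0    # double on 1, lose everything on 0
    return capital

# Fairness condition of Definition 1.2 with p^t = (1/2, 1/2) for every t:
# B(sigma) = (1/2) B(sigma + "0") + (1/2) B(sigma + "1").
for sigma in ["", "1", "11", "10"]:
    assert B(sigma) == Fraction(1, 2) * B(sigma + "0") + Fraction(1, 2) * B(sigma + "1")

# B succeeds over the all-ones sequence, since B("1" * n) = 2^n diverges; that
# sequence is therefore not random for the uniform measure.
print([int(B("1" * n)) for n in range(5)])   # [1, 2, 4, 8, 16]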
We have the following theorem.
Theorem 1.1. Let θ be an oracle. Let p be a computable sequence such that p^t[x] > 0 for all x ∈ X and all t ∈ ℕ. A sequence ξ ∈ X^ℕ is ML-random relative to θ for µ_p if and only if there exists no θ-effective betting function B for µ_p that succeeds over ξ.
Proof. (Sketch.) (⇒) Suppose that lim sup_{T→∞} B(ξ[T]) = ∞ and B is a θ-effective betting function for µ_p. Let V_t = {ξ : (∃s)(B(ξ[s]) > 2^t)}. Let A_0 = {σ ∈ X^* : B(σ) > 2^t}. Enumerate the elements of A_0 (without repetitions) as A_0 = {σ^1, σ^2, ..., σ^k, ...}. Define A_{k+1} inductively as follows: A_{k+1} = A_k if there is no other σ ∈ A_0 such that σ ≺ σ^k; otherwise, let A_{k+1} = A_k − {σ^k}. Let A = ⋂_{k=0}^∞ A_k. Then A is prefix-free and V_t = ⋃_{σ∈A} {ζ : σ ≺ ζ}. Because A is prefix-free, ∑_{σ∈A} ∏_{t=0}^{|σ|−1} p^t[σ_t] B(σ) ≤ B(⟨⟩). Therefore, µ_p(V_t) = ∑_{σ∈A} ∏_{t=0}^{|σ|−1} p^t[σ_t] ≤ 2^{−t} ∑_{σ∈A} ∏_{t=0}^{|σ|−1} p^t[σ_t] B(σ) ≤ 2^{−t} B(⟨⟩). It is routine to check that {V_t} is θ-effective because B is, and (after shifting the index t by a constant c with 2^c ≥ B(⟨⟩)) {V_t} is a µ_p-test relative to θ. Then ξ ∈ ⋂_{t=0}^∞ V_t, and hence ξ is not µ_p-random relative to θ.

(⇐) Suppose that ξ ∈ ⋂_{t=0}^∞ V_t for a µ_p-test {V_t} relative to θ. Construct a betting function B as follows. Let B^t(σ) = µ_p(V_t ∩ {ζ : σ ≺ ζ})/µ_p({ζ : σ ≺ ζ}) and let B = ∑_{t=0}^∞ B^t. Each B^t is a betting function for µ_p, and B(⟨⟩) = ∑_{t=0}^∞ µ_p(V_t) ≤ ∑_{t=0}^∞ 2^{−t} = 2, so B is well-defined. B is θ-effective because {V_t} is. Finally, ξ ∈ ⋂_{t=0}^∞ V_t implies that lim sup_{T→∞} B(ξ[T]) = ∞.
Evidently, the logic of the proof of Theorem 1.1 applies to all computable measures µ, with an appropriate definition of betting functions. Moreover, because the two definitions are equivalent, I will use the term θ-randomness to refer to both. Although the two definitions are equivalent, each is more convenient than the other for certain arguments. One example that illustrates the usefulness of ML-randomness is the following proposition, which states that, for any oracle θ and any computable probability measure µ, the set of θ-random sequences for µ has probability 1 with respect to µ.
Proposition 1.1. Suppose that X is a finite set and µ is a computable measure over X^ℕ. Then, for any oracle θ, µ({ξ ∈ X^ℕ : ξ is θ-random for µ}) = 1.
Proof. Because there are only countably many θ-computable functions, the collection of µ-tests relative to θ is also countable. Enumerate these tests as {{V_t^i}_t}_i. The set of µ-random sequences relative to θ, denoted MLR^θ_µ, is equal to X^ℕ − ⋃_i (⋂_t V_t^i). But µ(⋂_t V_t^i) = 0 for each i, and hence µ(⋃_i (⋂_t V_t^i)) = 0. Thus, µ(MLR^θ_µ) = 1.
Here I use Proposition 1.1 to show that mutual complexity (Definition 3.1 in the main paper [2]) is a generic property in the space of oracles. By Lemma 3.1 in [2], for any oracle θ ∈ {0,1}^ℕ, a sequence ξ ∈ {0,1}^ℕ is θ-random for µ_(1/2,1/2) if and only if it is θ-incompressible. For any two oracles θ^1, θ^2 ∈ {0,1}^ℕ, define θ^1 ⊗ θ^2 ∈ ({0,1} × {0,1})^ℕ by (θ^1 ⊗ θ^2)_t = (θ^1_t, θ^2_t) for all t ∈ ℕ. The following proposition shows that mutual complexity holds with probability 1 on the space ({0,1}^ℕ)^2 under the uniform distribution.
Proposition 1.2. Let MC = {θ^1 ⊗ θ^2 ∈ ({0,1} × {0,1})^ℕ : θ^1 and θ^2 are mutually complex}. Then MC has measure 1 under the uniform distribution.
Proof. By the van Lambalgen Theorem ([4], Theorem 3.4.6), θ^1 ⊗ θ^2 is ML-random for µ_(1/4,1/4,1/4,1/4) if and only if θ^i is θ^{−i}-random for µ_(1/2,1/2) for both i = 1, 2, that is, θ^i is θ^{−i}-incompressible for both i = 1, 2. Thus, µ_(1/4,1/4,1/4,1/4)(MC) = µ_(1/4,1/4,1/4,1/4)(MLR_{µ_(1/4,1/4,1/4,1/4)}) = 1 by Proposition 1.1.
On the other hand, randomness based on betting functions is more useful for establishing frequency results. Lemma 5.2 in [2] shows that any θ-random sequence for p has limit frequency p along any subsequence selected by a θ-computable selection function. Here I show that the same result holds for any θ-random sequence for µ_p with lim_t p^t = p.
Theorem 1.2. Let θ be an oracle. Suppose that ξ is θ-random for µ_p, where p^t[x] > 0 for all t ∈ ℕ and all x ∈ X, and lim_{t→∞} p^t = p. Then, for any θ-computable selection function r such that ξ^r is an infinite sequence,

lim_{T→∞} (1/T) ∑_{t=0}^{T−1} c_x(ξ^r_t) = p[x] for all x ∈ X.    (1)
Proof. Suppose, by way of contradiction, that there exist some ε > 0, some y ∈ X, and a sequence {T_k}_{k=0}^∞ such that for all k ∈ ℕ, (1/T_k) ∑_{t=0}^{T_k−1} c_y(ξ^r_t) ≥ p[y] + ε. I construct a θ-effective betting function B for µ_p that succeeds over ξ.

Let d > 0 be so small that d < (1/2) min{p^t[y], 1 − p^t[y]} for all t. Define B as follows: (a) B(⟨⟩) = 1; (b) if r(σ) = 1, then B(σy) = (1 + d(1 − p^{|σ|}[y]))B(σ) and B(σx) = (1 − d p^{|σ|}[y])B(σ) for all x ≠ y; (c) if r(σ) = 0, then B(σx) = B(σ) for all x ∈ X. By construction, B is θ-computable because r and p are. B is a betting function for µ_p: if r(σ) = 1, then

∑_{x∈X} p^{|σ|}[x] B(σx) = p^{|σ|}[y](1 + d(1 − p^{|σ|}[y]))B(σ) + ∑_{x≠y} p^{|σ|}[x](1 − d p^{|σ|}[y])B(σ) = B(σ);

if r(σ) = 0, then ∑_{x∈X} p^{|σ|}[x] B(σx) = ∑_{x∈X} p^{|σ|}[x] B(σ) = B(σ).
Now I show that lim sup_{T→∞} B(ξ[T]) = ∞. For each k ≥ 1, define

D_k = {t ≤ k − 1 : r(ξ[t]) = 1, ξ_{t+1} = y} and E_k = {t ≤ k − 1 : r(ξ[t]) = 1, ξ_{t+1} ≠ y}.

Then B(ξ[k]) = ∏_{t∈D_k} (1 + d(1 − p^{t+1}[y])) ∏_{t∈E_k} (1 − d p^{t+1}[y]). Let L_k be such that #{0 ≤ t ≤ L_k − 1 : r(ξ[t]) = 1} = T_k, i.e., (ξ[L_k])^r = ξ^r[T_k]. Because ξ^r is an infinite sequence, L_k is well defined for all k ∈ ℕ. Fix δ > 0; because lim_{t→∞} p^t = p, let T be so large that t ≥ T implies |p^t[y] − p[y]| < δ. Let K be the first k such that T_k > T. Then, for all k > K,

B(ξ[L_k]) = ∏_{t∈D_{L_k}} (1 + d(1 − p^{t+1}[y])) ∏_{t∈E_{L_k}} (1 − d p^{t+1}[y])
          ≥ A [(1 + d(1 − p[y] − δ))^{p[y]+ε} (1 − d p[y] − dδ)^{1−p[y]−ε}]^{T_k},

where A = ∏_{t∈D_{L_K}} (1 + d(1 − p^{t+1}[y])) ∏_{t∈E_{L_K}} (1 − d p^{t+1}[y]) / [(1 + d(1 − p[y] − δ))^{#D_{L_K}} (1 − d p[y] − dδ)^{#E_{L_K}}]. Notice that for each k, #D_{L_k} ≥ T_k p[y] + T_k ε. It is straightforward to verify that, for d and δ small enough, (1 + d(1 − p[y] − δ))^{p[y]+ε} (1 − d p[y] − dδ)^{1−p[y]−ε} > 1, and hence lim_{k→∞} B(ξ[L_k]) = ∞.
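To make the construction concrete, here is a small Python sketch (my own illustration, with hypothetical parameter values and a trivial selection rule, not code from the paper) of the betting function defined in this proof: whenever the selection function selects the current history, it places a proportional bet of size d on the outcome y, and its capital grows exponentially along any play whose selected empirical frequency of y stays above p[y].

X = ["y", "z"]                    # two outcomes; the strategy bets on y
p_y = 0.5                         # the limiting probability p[y]
d = 0.1                           # bet size; d < (1/2) min{p^t[y], 1 - p^t[y]} for all t

def p_t(t):
    """Period-t probability of y; converges to p_y, as assumed in Theorem 1.2."""
    return p_y + 0.2 / (t + 2)

def r(history):
    """A simple computable selection function: select every period."""
    return 1

def capital(play):
    """B(play[0] ... play[n-1]) for the strategy (a)-(c) in the proof."""
    B = 1.0
    for t, x in enumerate(play):
        if r(play[:t]) == 1:
            q = p_t(t)
            B *= (1 + d * (1 - q)) if x == "y" else (1 - d * q)
        # if r(...) == 0, the capital is left unchanged
    return B

# Along a play whose empirical frequency of y is 0.6 > p_y, the capital diverges;
# over 1000 periods it is already of order exp(constant * 1000).
play = (["y"] * 3 + ["z"] * 2) * 200
print(capital(play))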
2 Generating random sequences
Here I give a theorem showing that incompressible sequences can be used to compute µ_p-random sequences, for any computable sequence p = (p^0, p^1, ...) of probability measures over X with p^t[x] > 0 for all x ∈ X and all t ∈ ℕ. The theorem generalizes the main result in Zvonkin and Levin [5], which considers the case X = {0,1}.
Theorem 2.1. Let X be a finite set and let θ be an oracle. Suppose that η ∈ {0,1}^ℕ is a θ-incompressible sequence. Then there exists a θ-random sequence for µ_p that is η-computable.
Proof. First notice that, by Lemma 5.1 in [2], there is a λ_X-random sequence ξ′ ∈ X^ℕ relative to θ that is η-computable, where λ_X is the uniform distribution over X^ℕ. I construct a partial computable functional Φ : X^ℕ → X^ℕ such that if ξ′ is a λ_X-random sequence, then Φ(ξ′) is a µ_p-random sequence. Φ is constructed through a computable function φ : X^* → X^* by setting Φ(ξ)_t = φ(ξ[min{k : |φ(ξ[k])| ≥ t}])_t. I show that Φ satisfies the following properties:

1. Φ is well-defined over any sequence in X^ℕ that is not computable.

2. λ_X(Φ^{−1}(A)) = µ_p(A) for any measurable A.

3. If ξ′ is a λ_X-random sequence, then Φ(ξ′) is a µ_p-random sequence.
(Construction of φ) φ is constructed through the distribution function of µ_p. Define the mapping Γ : X^ℕ → [0, 1] by Γ(ζ) = ∑_{t=0}^∞ ι(ζ_t)/n^{t+1}, where X = {x_1, ..., x_n} and ι(x) = i − 1 if and only if x = x_i. Γ is onto but not one-to-one; however, the set {ζ ∈ X^ℕ : Γ(ζ) = Γ(ζ′) for some ζ′ ≠ ζ} is countable. Γ can be extended to X^* by setting Γ(σ) = ∑_{t=0}^{|σ|−1} ι(σ_t)/n^{t+1}. Given Γ, the distribution function of µ_p over [0, 1] is the function g : [0, 1] → [0, 1] defined by g(r) = µ_p({ζ : Γ(ζ) ≤ r}). Define h = g^{−1}; h exists because µ_p has no atoms and p^t[x] > 0 for all x and t, so that g is continuous and strictly increasing. Therefore, r ≤ g(s) if and only if h(r) ≤ s. Hence, µ_p(Γ^{−1}([0, r])) = g(r) = λ_X(Γ^{−1}([0, g(r)])) = λ_X(Γ^{−1}(h^{−1}([0, r]))).
Now I construct the function φ using the distribution function g. The idea of the construction is the following: because g is continuous, any open interval has an open pre-image. Because each finite sequence in X^* can be regarded as an interval, it can be mapped into another via g; as the length of the finite sequence increases, the interval shrinks, and in the limit the functional Φ obtains.
Let ⟨⟩ be the empty string. Define g^0(⟨⟩) = 0 and g^1(⟨⟩) = 1. For τ ∈ X^* − {⟨⟩}, define g^0(τ) = ∑{µ_p(N_σ) : Γ(σ) ≤ Γ(τ) − 1/n^{|τ|}, |σ| = |τ|} and g^1(τ) = ∑{µ_p(N_σ) : Γ(σ) ≤ Γ(τ), |σ| = |τ|}. For any ζ ∈ X^ℕ, Γ(ζ) ≤ Γ(τ) if and only if Γ(ζ[|τ|]) ≤ Γ(τ) − 1/n^{|τ|} or Γ(ζ) = Γ(τ), and Γ(ζ) ≤ Γ(τ) + 1/n^{|τ|} if and only if Γ(ζ[|τ|]) ≤ Γ(τ) or Γ(ζ) = Γ(τ) + 1/n^{|τ|}. Because µ_p has no atoms, g^0(τ) = g(Γ(τ)) and g^1(τ) = g(Γ(τ) + 1/n^{|τ|}). Therefore, for each t > 0, the class of intervals {[g^0(τ), g^1(τ)] : τ ∈ X^*, |τ| = t} forms a partition of [0, 1].

Construct φ as follows: given a string σ ∈ X^* (recall that |X| = n), let

a_σ = Γ(σ) and b_σ = Γ(σ) + 1/n^{|σ|}.

Let φ(σ) be the longest τ with |τ| ≤ |σ| such that [a_σ, b_σ] ⊂ [g^0(τ), g^1(τ)]. φ(σ) is well-defined, because the intervals [g^0(τ), g^1(τ)] with |τ| = t form a partition and [g^0(⟨⟩), g^1(⟨⟩)] = [0, 1]. φ is computable. Define Φ by Φ(ξ)_t = φ(ξ[min{k : |φ(ξ[k])| ≥ t}])_t.
(Φ satisfies property 1.) Φ(ξ) is well defined if lim_{t→∞} |φ(ξ[t])| = ∞, and it is a finite sequence otherwise (i.e., for large t's, Φ(ξ)_t is not defined). First I show that φ satisfies the following: for any σ, τ ∈ X^*, σ ⊂ τ implies φ(σ) ⊂ φ(τ). Suppose that σ ⊂ σ′ and τ = φ(σ), τ′ = φ(σ′). It is easy to check that a_σ ≤ a_{σ′} and b_{σ′} ≤ b_σ. Now, if Γ(τ′) ≥ Γ(τ) + 1/n^{|τ|}, then a_{σ′} ≥ g^0(τ′) = g(Γ(τ′)) ≥ g(Γ(τ) + 1/n^{|τ|}) = g^1(τ) ≥ b_σ ≥ b_{σ′}, a contradiction to a_{σ′} < b_{σ′}. Hence Γ(τ′) < Γ(τ) + 1/n^{|τ|}. By the construction of φ, |τ′| ≥ |τ|. If Γ(τ′) < Γ(τ), then Γ(τ′) ≤ Γ(τ) − 1/n^{|τ′|}, and hence b_{σ′} ≤ g^1(τ′) = g(Γ(τ′) + 1/n^{|τ′|}) ≤ g(Γ(τ)) = g^0(τ) ≤ a_σ ≤ a_{σ′}, a contradiction to a_{σ′} < b_{σ′}. Therefore Γ(τ) ≤ Γ(τ′) < Γ(τ) + 1/n^{|τ|}, and so τ ⊂ τ′.

Next I show that, for any sequence ζ such that h(Γ(ζ)) ≠ m/n^t for all m, t ∈ ℕ (recall that h = g^{−1}), lim_{t→∞} |φ(ζ[t])| = ∞. Consider any such ζ. For any given K, there exists some l ∈ ℕ such that h(Γ(ζ)) ∈ (l/n^K, (l+1)/n^K). Let ε = min{h(Γ(ζ)) − l/n^K, (l+1)/n^K − h(Γ(ζ))}. Because h is continuous, there is some T such that t ≥ T implies

max{|h(b_{ζ[t]}) − h(Γ(ζ))|, |h(Γ(ζ)) − h(a_{ζ[t]})|} ≤ ε/2, and so [h(a_{ζ[t]}), h(b_{ζ[t]})] ⊆ (l/n^K, (l+1)/n^K).

Thus, if t ≥ max{T, K}, then [a_{ζ[t]}, b_{ζ[t]}] ⊂ [g(l/n^K), g((l+1)/n^K)] = [g^0(τ), g^1(τ)], where τ is the string of length K with Γ(τ) = l/n^K, and so |φ(ζ[t])| ≥ K. Clearly, any sequence ζ that satisfies h(Γ(ζ)) = m/n^t for some m, t ∈ ℕ is computable, and so if Φ(ζ) is not well-defined, then ζ is computable.
(Φ satisfies property 2.) I first claim that if Φ is well-defined over ζ (the set of such ζ's is denoted by D(φ)), then Γ(Φ(ζ)) = h(Γ(ζ)). Let ε > 0 be given, and let K be so large that 1/n^{K−1} < ε. Since ζ ∈ D(φ), there exists T such that t ≥ T implies |φ(ζ[t])| ≥ K. Then, for all t ≥ T, h(Γ(ζ)) ∈ [h(a_{ζ[t]}), h(b_{ζ[t]})] ⊆ [a_{φ(ζ[t])}, b_{φ(ζ[t])}], and so 0 ≤ h(Γ(ζ)) − Γ(φ(ζ[t])) ≤ 1/n^K ≤ ε. Thus, Γ(Φ(ζ)) = lim_{t→∞} Γ(φ(ζ[t])) = h(Γ(ζ)). Moreover, for almost all r ∈ [0, 1] (all but countably many), there is a sequence ζ ∈ X^ℕ such that Γ(Φ(ζ)) = r, because h is strictly increasing and continuous. Also,

Γ(Φ(ζ)) ≥ Γ(Φ(ζ′)) ⟺ Γ(ζ) ≥ Γ(ζ′).    (2)

I show that λ_X^Φ = µ_p, where λ_X^Φ(A) = λ_X(Φ^{−1}(A)), by demonstrating that the two measures share the same distribution function g: for any ζ^*,

λ_X^Φ({ζ : Γ(ζ) ≤ Γ(Φ(ζ^*))}) = λ_X({ζ : Γ(Φ(ζ)) ≤ Γ(Φ(ζ^*))}) = λ_X({ζ : Γ(ζ) ≤ Γ(ζ^*)}) = Γ(ζ^*) = g(Γ(Φ(ζ^*))).

(Recall that, for all but countably many r ∈ [0, 1], there is a ζ^* such that Γ(Φ(ζ^*)) = r; the gaps may be filled by assigning arbitrary values to Φ where it is not well-defined. The first equality comes from the definition of λ_X^Φ and the second from equation (2).)
(Φ satisfies property 3.) Recall that Φ is well-defined over any incomputable sequence. Thus, if ξ′ is θ-random for λ_X, then ξ′ ∈ D(φ). Let ζ′ = Φ(ξ′). Now I show that ζ′ is θ-random for µ_p. Suppose not, and suppose that there is a µ_p-test {V_t}_{t=0}^∞ relative to θ such that ζ′ ∈ ⋂_{t=0}^∞ V_t. Let U_t = {ξ : (∃ζ ∈ V_t) ζ = Φ(ξ)}. Because φ is computable, {U_t}_{t=0}^∞ is θ-effective. Moreover, λ_X(U_t) = λ_X(Φ^{−1}(V_t)) = µ_p(V_t) ≤ 2^{−t}. Therefore, {U_t}_{t=0}^∞ is a λ_X-test relative to θ. But ξ′ ∈ ⋂_{t=0}^∞ U_t because ζ′ ∈ ⋂_{t=0}^∞ V_t, a contradiction. Since φ is computable, ζ′ is ξ′-computable and hence is η-computable.
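The construction of φ is, in effect, inverse-transform sampling carried out prefix by prefix. The following Python sketch (my own illustration for the i.i.d. case X = {0,1} with a hypothetical p[1] = 7/10; the paper's construction allows any computable period-dependent p^t with full support) computes φ directly. It uses the fact, implicit in the proof, that the intervals [g^0(τ), g^1(τ)] for strings of length k+1 refine those for strings of length k, so the longest admissible τ can be found digit by digit.

from fractions import Fraction
import random

P1 = Fraction(7, 10)                      # hypothetical p[1]; p[0] = 3/10

def gamma(s):                             # Γ(σ) = Σ_t ι(σ_t) / 2^{t+1} (here n = 2)
    return sum(Fraction(int(b), 2 ** (t + 1)) for t, b in enumerate(s))

def mu(s):                                # µ_p(N_σ) for the i.i.d. measure
    out = Fraction(1)
    for b in s:
        out *= P1 if b == "1" else 1 - P1
    return out

def phi(sigma):
    """The longest τ with |τ| ≤ |σ| such that [a_σ, b_σ] ⊂ [g^0(τ), g^1(τ)]."""
    a = gamma(sigma)
    b = a + Fraction(1, 2 ** len(sigma))
    tau, lo, hi = "", Fraction(0), Fraction(1)   # the cell of the empty string is [0, 1]
    while len(tau) < len(sigma):
        cut = lo + mu(tau + "0")          # boundary between the cells of τ0 and τ1
        if lo <= a and b <= cut:
            tau, hi = tau + "0", cut
        elif cut <= a and b <= hi:
            tau, lo = tau + "1", cut
        else:
            break                         # [a_σ, b_σ] straddles the cut: no longer τ works
    return tau

# Pushing uniform random bits through φ yields prefixes distributed (up to
# truncation) according to µ_p: the first output digit is 1 with frequency ≈ 0.7.
rng = random.Random(0)
samples = ["".join(rng.choice("01") for _ in range(12)) for _ in range(2000)]
print(sum(phi(s).startswith("1") for s in samples) / 2000)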
3 Law of the Iterated Logarithm

Here I give a general Law of the Iterated Logarithm that is satisfied by any ML-random sequence for µ_p.
Theorem 3.1. Suppose that ξ is an ML-random sequence for µ_p with p = (p^0, p^1, ..., p^t, ...). Then, for any x ∈ X,

lim sup_{T→∞} |∑_{t=0}^{T−1} (c_x(ξ_t) − p^t[x])| / √( 2(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) log log √(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) ) = 1.    (3)
Proof. The positive part of equation (3) is equivalent to the following two conditions:

(a) for all rational ε > 0,

(∃S)(∀T ≥ S)  ∑_{t=0}^{T−1} (c_x(ξ_t) − p^t[x]) ≤ √( 2(1 + ε)(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) log log √(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) );

(b) for all rational ε > 0,

(∀S)(∃T ≥ S)  ∑_{t=0}^{T−1} (c_x(ξ_t) − p^t[x]) ≥ √( 2(1 − ε)(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) log log √(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) ).
I show that (a) and (b) hold; the negative part is completely symmetric. Let

E^ε_T = { ζ : ∑_{t=0}^{T−1} (c_x(ζ_t) − p^t[x]) > √( 2(1 + ε)(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) log log √(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) ) }

and

F^ε_T = { ζ : ∑_{t=0}^{T−1} (c_x(ζ_t) − p^t[x]) < √( 2(1 − ε)(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) log log √(∑_{t=0}^{T−1} p^t[x](1 − p^t[x])) ) }.
Clearly, condition (a) is equivalent to ξ ∉ ⋂_{S=0}^∞ ⋃_{T=S}^∞ E^ε_T, and condition (b) is equivalent to ξ ∉ ⋃_{S=0}^∞ ⋂_{T=S}^∞ F^ε_T. By Theorem 7.5.1 in Chung [1],

µ_p(⋂_{S=0}^∞ ⋃_{T=S}^∞ E^ε_T) = 0 and µ_p(⋃_{S=0}^∞ ⋂_{T=S}^∞ F^ε_T) = 0.

It then follows that µ_p(⋂_{T=S}^∞ F^ε_T) = 0 for any S ∈ ℕ. Because F^ε_T is computable (uniformly in T), {F^ε_T}_{T=S}^∞ is a µ_p-test for any S (notice that µ_p(F^ε_T) is also computable). Therefore, ξ ∉ ⋃_{S=0}^∞ ⋂_{T=S}^∞ F^ε_T. This proves (b).
On the other hand, the set E^ε_T is computable (uniformly in T), and so the sequence {⋃_{T=S}^∞ E^ε_T}_{S∈ℕ} is effective. For {⋃_{T=S}^∞ E^ε_T}_{S=0}^∞ to be a test, we need to show that µ_p(⋃_{T=S}^∞ E^ε_T) has a computable upper bound for all S. From the proof of Theorem 7.5.1 in Chung [1], we know that there exist a constant A > 0 and a number k̄ > 0 such that for all k ≥ k̄ (with the provision that c²(1 + ε/2) < 1 + ε; cf. [1], p. 216),

µ_p(⋃_{T=T_k}^{T_{k+1}−1} E^ε_T) < A / (k log c)^{1+ε/2},

where T_k = max{T : √(∑_{t=0}^{T} p^t[x](1 − p^t[x])) ≤ c^k} and c = 1 + ε/10 (for ε small enough, c²(1 + ε/2) < 1 + ε).

Define G_0 = ⋃_{T=0}^{T_1−1} E^ε_T and G_k = ⋃_{T=T_k}^{T_{k+1}−1} E^ε_T for k > 0. Clearly,

⋂_{S=0}^∞ ⋃_{k=S}^∞ G_k = ⋂_{S=0}^∞ ⋃_{T=S}^∞ E^ε_T.

Now, because T_k is a computable function of k, {⋃_{k=S}^∞ G_k}_{S=0}^∞ is also an effective sequence of open sets. I now show that there is a computable mapping i ↦ S_i such that µ_p(⋃_{k=S_i}^∞ G_k) ≤ 1/2^i. It is easy to verify that

∑_{k=S}^∞ A/(k log c)^{1+ε/2} ≤ ∫_{a=S−1}^∞ A/(a log c)^{1+ε/2} da = (2A/ε)(S − 1)^{−ε/2} (log c)^{−1−ε/2}.

Let B ∈ ℕ be such that B > ((2A/ε)(log c)^{−1−ε/2})^{2/ε} and let N ∈ ℕ be such that N > 2/ε. Take S_i = B 2^{Ni} + 1; it follows that µ_p(⋃_{k=S_i}^∞ G_k) ≤ 1/2^i. This shows that {⋃_{k=S}^∞ G_k}_{S=0}^∞ is a µ_p-test, and so ξ ∉ ⋂_{S=0}^∞ ⋃_{k=S}^∞ G_k = ⋂_{S=0}^∞ ⋃_{T=S}^∞ E^ε_T. This proves (a).
As a corollary, for any p ∈ ∆(X) and any ML-random sequence ξ for µ_p,

lim sup_{T→∞} |∑_{t=0}^{T} c_x(ξ_t) − T p[x]| / √(2 p[x](1 − p[x]) T log log T) = 1.    (4)

Notice that (4) follows from (3) by taking p^t = p for all t ∈ ℕ. However, if ξ is ML-random for µ_p with lim_{t→∞} p^t = p, then (4) may not hold for ξ, even though Theorem 1.2 implies that the frequency condition (1) holds for ξ for any selection function r.
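To spell out the specialization of (3) to (4) noted above: with p^t ≡ p the variance sum is linear in T, and
\[
\sum_{t=0}^{T-1} p^t[x](1-p^t[x]) = T\,p[x](1-p[x]), \qquad
\frac{\log\log \sqrt{T\,p[x](1-p[x])}}{\log\log T} \longrightarrow 1 \quad (T \to \infty),
\]
so the denominator in (3) is asymptotically equivalent to √(2 p[x](1 − p[x]) T log log T), which is the normalization used in (4).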
Indeed, the following theorem shows that, for any θ-incompressible sequence η, there is an η-computable sequence ξ that satisfies (1) but fails (4).
Theorem 3.2. Suppose that η is θ-incompressible. For any non-degenerate p ∈ ∆(X), there exists an η-computable sequence ξ such that

(a) ξ satisfies the convergence requirement (1) for any θ-computable selection function r such that ξ^r is an infinite sequence;

(b) for some y ∈ X with p[y] ∈ (0, 1),

lim_{T→∞} |∑_{n=0}^{T} c_y(ξ_n) − T p[y]| / √(2 p[y](1 − p[y]) T log log T) = ∞.    (5)
Proof. Let y, y′ ∈ X with y ≠ y′ be such that p[y] ∈ (0, 1) and p[y′] ∈ (0, 1). For any real number s, let ⌊s⌋ be the largest integer no greater than s. Construct the sequence p = (p^0, p^1, ...) as follows, where t̄ is the smallest t such that ⌊t^{0.4}⌋ > 1/min{p[x] : p[x] > 0}:

(a) p^t[x] = p[x] if x ≠ y and x ≠ y′;

(b) p^t[y] = p[y] if t ≤ t̄, and p^t[y] = p[y] − 1/⌊t^{0.4}⌋ otherwise;

(c) p^t[y′] = p[y′] if t ≤ t̄, and p^t[y′] = p[y′] + 1/⌊t^{0.4}⌋ otherwise.

By construction, p^t[x] = 0 if and only if p[x] = 0, and lim_{t→∞} p^t = p; moreover, p is computable. By Theorem 2.1, there is a θ-random sequence ξ for µ_p that is η-computable. Now let X_0 = {x ∈ X : p[x] > 0}; the sequence ξ, being θ-random for µ_p, can be regarded as a sequence in X_0^ℕ. Theorem 1.2 implies that ξ satisfies the convergence requirement (1).
By Theorem 3.1,

lim sup_{T→∞} |∑_{t=0}^{T−1} (c_y(ξ_t) − p^t[y])| / √( 2(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) log log √(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) ) = 1.    (6)

For any T > t̄,

(∑_{t=0}^{T−1} c_y(ξ_t) − T p[y]) / √(2 T p[y](1 − p[y]) log log T)
 = (∑_{t=0}^{T−1} (c_y(ξ_t) − p^t[y])) / √(2 T p[y](1 − p[y]) log log T) − (∑_{t=t̄+1}^{T−1} 1/⌊t^{0.4}⌋) / √(2 T p[y](1 − p[y]) log log T).

I claim that

lim_{T→∞} (∑_{t=t̄+1}^{T−1} 1/⌊t^{0.4}⌋) / √(2 T p[y](1 − p[y]) log log T) = ∞,    (7)

and that there exists some B > 0 such that for all T large enough,

|∑_{t=0}^{T−1} (c_y(ξ_t) − p^t[y])| / √(2 T p[y](1 − p[y]) log log T) < B.    (8)

The theorem follows directly from (7) and (8), since the second term of the decomposition then dominates the first.
Now I prove the claim. For all t, ⌊t^{0.4}⌋ ≤ t^{0.4} and so 1/t^{0.4} ≤ 1/⌊t^{0.4}⌋. Then ∑_{t=1}^{T−1} 1/⌊t^{0.4}⌋ ≥ ∑_{t=1}^{T−1} 1/t^{0.4} ≥ ∫_{a=1}^{T−1} a^{−0.4} da − 1 ≥ (T − 1)^{0.6} − 2. Therefore, for T large enough,

(∑_{t=t̄+1}^{T−1} 1/⌊t^{0.4}⌋) / √(2 T p[y](1 − p[y]) log log T) ≥ 0.5 T^{0.6} / √(2 T p[y](1 − p[y]) log log T) = C T^{0.1} / √(log log T)    (9)

for some constant C > 0. Because lim_{T→∞} T^{0.1}/√(log log T) = ∞, (9) implies (7).
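As a quick numerical sanity check of (7) and (9) (my own illustration, with the hypothetical value p[y] = 1/2), the following Python snippet compares the drift ∑_{t≤T} 1/⌊t^{0.4}⌋ with the scale √(2 T p[y](1 − p[y]) log log T); the ratio grows slowly, roughly like T^{0.1}/√(log log T).

import math

p = 0.5                                    # hypothetical p[y]
for T in [10**3, 10**4, 10**5, 10**6]:
    drift = sum(1.0 / math.floor(t ** 0.4) for t in range(1, T))
    scale = math.sqrt(2 * T * p * (1 - p) * math.log(math.log(T)))
    print(T, round(drift / scale, 1))      # the ratio increases with T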
Because of (6), to prove (8) it suffices to show that, for T large enough,

√( 2(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) log log √(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) ) / √(2 T p[y](1 − p[y]) log log T)    (10)

is bounded. Now, for T > t̄,

∑_{t=0}^{T−1} p^t[y](1 − p^t[y]) = T p[y](1 − p[y]) + (2p[y] − 1) ∑_{t=t̄+1}^{T−1} 1/⌊t^{0.4}⌋ − ∑_{t=t̄+1}^{T−1} (1/⌊t^{0.4}⌋)².    (11)

Because ⌊t^{0.4}⌋ > t^{0.4}/2 for t large enough, there is a constant A > 0 such that ∑_{t=t̄+1}^{T−1} 1/⌊t^{0.4}⌋ < ∑_{t=t̄+1}^{T−1} 2/t^{0.4} + A < 4T^{0.6} + A. Similarly, there is a constant A′ > 0 such that ∑_{t=t̄+1}^{T−1} (1/⌊t^{0.4}⌋)² < ∑_{t=t̄+1}^{T−1} 4/t^{0.8} + A′ < 20T^{0.2} + A′. Hence,

(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) / (T p[y](1 − p[y])) < 1 + 4|2p[y] − 1| / (T^{0.4} p[y](1 − p[y])) + (|2p[y] − 1| A + A′) / (T p[y](1 − p[y])) + 20 / (T^{0.8} p[y](1 − p[y])),

and so, for T large enough, (∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) / (T p[y](1 − p[y])) < 2. Equation (11) also implies that, for T large enough,

∑_{t=0}^{T−1} p^t[y](1 − p^t[y]) ≤ (4|2p[y] − 1| + 21 + p[y](1 − p[y])) T = A″ T,

and hence

log log √(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) / log log T ≤ log(log T + log A″) / log log T ≤ (log 2 + log log T) / log log T.

So, for T large enough, log log √(∑_{t=0}^{T−1} p^t[y](1 − p^t[y])) / log log T ≤ 2. Thus, the expression in (10) is bounded by 2, and this proves (8).
4 Extensions to N-player games
Here I show that the existence result in [2] can be extended to N-player games. Let g = ⟨(X_1, ..., X_N), (h_1, ..., h_N)⟩ be a finite N-person normal-form game, where X_i is the set of actions and h_i is the payoff function for player i. In the repeated game with complexity constraints, each player i is endowed with an oracle θ^i ∈ {0,1}^ℕ to implement his strategy with an oracle program. Hence, a strategy is feasible for player i if and only if it is θ^i-computable. The set of all θ^i-computable total functions is denoted by C(θ^i).
Definition 4.1. Let g = ⟨(X_1, ..., X_N), (h_1, ..., h_N)⟩ be a finite game and let (θ^1, θ^2, ..., θ^N) be N oracles. The repeated game with oracles (θ^1, θ^2, ..., θ^N) based on g, denoted RG(g, θ^1, ..., θ^N), is a tuple ⟨(A_1, ..., A_N), (u_1, ..., u_N)⟩ such that

(a) A_i = {α_i : X_{−i}^* → X_i : α_i ∈ C(θ^i)} is the set of player i's strategies;

(b) u_i : A_1 × ... × A_N → ℝ is player i's payoff function, defined as

u_i(α_1, ..., α_N) = lim inf_{T→∞} (1/T) ∑_{t=0}^{T−1} h_i(ξ^{α,1}_t, ..., ξ^{α,N}_t),    (12)

where (ξ^{α,1}_t, ..., ξ^{α,N}_t) is the period-t outcome of the strategy profile α = (α_1, ..., α_N), defined by ξ^{α,j}_0 = α_j(⟨⟩) and, for any t ≥ 0, ξ^{α,j}_{t+1} = α_j(ξ^{α,−j}_0, ξ^{α,−j}_1, ..., ξ^{α,−j}_t) for all j = 1, ..., N.
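To illustrate the outcome recursion in Definition 4.1, here is a small Python sketch (my own illustration with a hypothetical two-player matching-pennies stage game and simple computable strategies; the lim inf in (12) is approximated by a finite-horizon average).

def h1(x1, x2):             # player 1's stage payoff (matching pennies)
    return 1.0 if x1 == x2 else -1.0

def h2(x1, x2):
    return -h1(x1, x2)

def alpha1(opp_history):    # a computable strategy: copy the opponent's last action
    return opp_history[-1] if opp_history else 0

def alpha2(opp_history):    # a computable strategy: play the opposite of the last action
    return 1 - opp_history[-1] if opp_history else 1

def play(T):
    """Generate the outcome path (ξ^{α,1}_t, ξ^{α,2}_t) for t = 0, ..., T-1."""
    xs1, xs2 = [], []
    for t in range(T):
        x1 = alpha1(xs2)    # player 1 sees only player 2's past actions
        x2 = alpha2(xs1)
        xs1.append(x1)
        xs2.append(x2)
    return xs1, xs2

xs1, xs2 = play(1000)
u1 = sum(h1(a, b) for a, b in zip(xs1, xs2)) / len(xs1)   # finite-horizon average
print(u1)                   # the path cycles, so the average settles quickly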
The notion of mutual complexity can be extended to the N-player case. For any finite collection of oracles (θ^1, ..., θ^N), define the product oracle ⊗_{i=1}^N θ^i = θ^1 ⊗ θ^2 ⊗ ... ⊗ θ^N by (⊗_{i=1}^N θ^i)_t = (θ^1_t, ..., θ^N_t) for all t ∈ ℕ. We say that the oracles (θ^1, ..., θ^N) are mutually complex if, for each i = 1, ..., N, θ^i is ⊗_{j≠i} θ^j-incompressible. As in the 2-player case, mutual complexity is a generic property in the N-player case as well. Theorem 2.1 shows that, under mutual complexity, for each i and any p ∈ ∆(X_i), there exists a θ^i-computable sequence ξ^i that is ⊗_{j≠i} θ^j-random for µ_p. However, equilibrium existence requires ⊗_{j≠i} ξ^j to be θ^i-random for µ_{⊗_{j≠i} p^j}; here, for a given collection of distributions (p^1, ..., p^N) ∈ ∆(X_1) × ... × ∆(X_N), ⊗_{i=1}^N p^i ∈ ∆(X_1 × ... × X_N) is given by (⊗_{i=1}^N p^i)[(x_1, ..., x_N)] = ∏_{i=1}^N p^i[x_i], and µ_{⊗_{i=1}^N p^i} is the i.i.d. measure it generates. To this end I give another theorem.
Theorem 4.1. Let X and Y be two finite sets and let θ ∈ {0,1}^ℕ be an oracle. Suppose that p ∈ ∆(X) and q ∈ ∆(Y).

(a) If ξ ⊗ ζ ∈ (X × Y)^ℕ is θ-random for µ_{p⊗q}, then ξ is ζ ⊗ θ-random for µ_p.

(b) If ξ ∈ X^ℕ is ζ ⊗ θ-random for µ_p and ζ is θ-random for µ_q, then ξ ⊗ ζ is θ-random for µ_{p⊗q}.
Proof. (a) Suppose that ξ is not ζ ⊗ θ-random for µ_p. Then ξ ∈ ⋂_{t=0}^∞ V_t for a ζ ⊗ θ-effective sequence of open sets {V_t}_{t=0}^∞ in X^ℕ such that µ_p(V_t) ≤ 1/2^t. By the Enumeration Theorem, there exists an effective enumeration of all oracle machines, {φ^{η⊗θ}_1, φ^{η⊗θ}_2, ..., φ^{η⊗θ}_k, ...}, such that (1) the function U(k, n) = φ^{η⊗θ}_k(n) is η ⊗ θ-computable for any η ∈ Y^ℕ; (2) for each k, the function φ^{·⊗θ}_k(·) can be thought of as a functional from (Y × {0,1})^ℕ to ℕ^ℕ, and it is computable in the sense that the function h(σ, n) = φ^σ_k(n) is computable, where φ^σ_k(n) = m if the k-th oracle machine halts within |σ| steps using only information contained in σ ∈ (Y × {0,1})^*, and h is undefined otherwise.

Now, there exists a number k such that f = φ^{ζ⊗θ}_k, where f is the ζ ⊗ θ-computable function generating {V_t}. For any η ∈ Y^ℕ, define U^η_{t,s} by taking ξ′ ∈ U^η_{t,s} if and only if, for some σ ≺ ξ′,

(∃n < s) φ^{(η⊗θ)[s]}_k(n) = (t, σ)    (13)

(notice that U^η_{t,s} depends only on η[s], the first s elements of η), and let U^η_t = ⋃{U^η_{t,s} : µ_p(U^η_{t,s}) ≤ 1/2^t}. For each s and t, the set U^η_{t,s} can be written as a union of finitely many basic sets, and those basic sets are uniformly computable in s and t. Thus, the sequence {U^η_t}_{t∈ℕ} is η ⊗ θ-effective. Let V̄_t = {ξ′ ⊗ η ∈ (X × Y)^ℕ : ξ′ ∈ U^η_t}. Then ξ′ ⊗ η ∈ V̄_t if and only if there are some σ ⊗ τ ≺ ξ′ ⊗ η and some s such that µ_p(U^τ_{t,s}) ≤ 1/2^t and (13) holds for σ and η[s] = τ[s]. Thus, {V̄_t}_{t=0}^∞ is θ-effective. Now I show that µ_{p⊗q}(V̄_t) ≤ 1/2^t:

µ_{p⊗q}(V̄_t) = ∫_{(X×Y)^ℕ} χ_{V̄_t}(ξ′ ⊗ η) dµ_{p⊗q}(ξ′ ⊗ η) = ∫_{Y^ℕ} ∫_{X^ℕ} χ_{U^η_t}(ξ′) dµ_p(ξ′) dµ_q(η) = ∫_{Y^ℕ} µ_p(U^η_t) dµ_q(η) ≤ 1/2^t.

Thus, {V̄_t}_{t=0}^∞ is a θ-effective µ_{p⊗q}-test. But ξ ⊗ ζ ∈ V̄_t for all t ∈ ℕ, and so ξ ⊗ ζ is not θ-random for µ_{p⊗q}.
(b) Suppose that ξ ⊗ ζ ∈ (X × Y)^ℕ is not θ-random for µ_{p⊗q}. Then ξ ⊗ ζ ∈ ⋂_{t=0}^∞ U_t for some θ-effective test {U_t} in (X × Y)^ℕ such that µ_{p⊗q}(U_t) ≤ 1/4^t. Suppose that the θ-effective function f generates this test: ξ′ ⊗ ζ′ ∈ U_t if and only if σ ≺ ξ′ ⊗ ζ′ for some σ ∈ (X × Y)^* and some n with f(n) = (t, σ). Let f = φ^θ_k. Define V^{ζ′}_t = {ξ′ ∈ X^ℕ : ξ′ ⊗ ζ′ ∈ U_t} and W_t = {ζ′ ∈ Y^ℕ : µ_p(V^{ζ′}_t) > 1/2^t}. A similar argument to that in (a) shows that, for each ζ′ ∈ Y^ℕ, {V^{ζ′}_t}_{t=0}^∞ is ζ′ ⊗ θ-effective and {W_t} is θ-effective. Now I show that µ_q(W_t) ≤ 1/2^t:

µ_{p⊗q}(U_t) = ∫_{(X×Y)^ℕ} χ_{U_t}(ξ′ ⊗ ζ′) dµ_{p⊗q}(ξ′ ⊗ ζ′) = ∫_{(X×Y)^ℕ} χ_{V^{ζ′}_t}(ξ′) dµ_{p⊗q}(ξ′ ⊗ ζ′) = ∫_{Y^ℕ} µ_p(V^{ζ′}_t) dµ_q(ζ′) ≥ ∫_{Y^ℕ} (1/2^t) χ_{W_t}(ζ′) dµ_q(ζ′) = (1/2^t) µ_q(W_t).

Thus, µ_q(W_t) ≤ 2^t µ_{p⊗q}(U_t) ≤ 1/2^t.

Because ζ is θ-random for µ_q, by Solovay's Theorem (Nies [4], Proposition 3.2.19), there is some L ∈ ℕ such that ζ ∉ W_t for all t ≥ L. Thus, by construction, for all t ≥ L, µ_p(V^ζ_t) ≤ 1/2^t. But ξ ∈ V^ζ_t for all t, and so ξ is not ζ ⊗ θ-random for µ_p.
The following theorem shows that mutual complexity implies equilibrium existence.
Theorem 4.2 (Existence). Suppose that the oracles (θ^1, ..., θ^N) satisfy mutual complexity. For any mixed equilibrium p = (p^1, ..., p^N) of g, there exists a Nash equilibrium of RG(g, θ^1, ..., θ^N), consisting of history-independent strategies (ξ^1, ..., ξ^N), such that the equilibrium payoff for player i is h_i(p) and the limit frequency of ξ^i is p^i.
Proof. By Theorem 2.1, for each i = 1, ..., N, there exists a sequence ξ^i ∈ X_i^ℕ that is θ^i-computable and ⊗_{j≠i} θ^j-random for µ_{p^i}. I now show that (ξ^1, ..., ξ^N) is a Nash equilibrium. It suffices to show that u_i(ξ^i; ξ^{−i}) = h_i(p) and that, for all α_i ∈ A_i, u_i(α_i; ξ^{−i}) ≤ h_i(p); that is, for all α_i ∈ A_i,

lim inf_{T→∞} (1/T) ∑_{t=0}^{T−1} h_i(ξ^i_t; ξ^{−i}_t) = h_i(p)    (14)

and

lim inf_{T→∞} (1/T) ∑_{t=0}^{T−1} h_i(α_i(ξ^{−i}[t]); ξ^{−i}_t) ≤ h_i(p).    (15)

Because θ^i is ⊗_{k≠j} θ^k-computable for any j ≠ i, ξ^j is θ^i-random for µ_{p^j} for each j ≠ i. By Theorem 4.1, it follows that for any j, k ≠ i, ξ^j ⊗ ξ^k is θ^i-random for µ_{p^j ⊗ p^k}. A simple induction argument shows that ⊗_{j≠i} ξ^j is θ^i-random for µ_{⊗_{j≠i} p^j} and that ⊗_{j=1,...,N} ξ^j is random for µ_{⊗_{j=1}^N p^j}. Then (14) follows from Theorem 1.2.
As for (15), let α_i ∈ A_i be given. For each y ∈ X_i, let r_y : X_{−i}^* → {0, 1} be the selection function for X_{−i} such that r_y(σ) = 1 if α_i(σ) = y, and r_y(σ) = 0 otherwise. Notice that r_y is θ^i-computable. Because ζ ≡ ⊗_{j≠i} ξ^j is θ^i-random for µ_{⊗_{j≠i} p^j}, Theorem 1.2 implies that

lim_{T→∞} (1/T) ∑_{t=0}^{T−1} c_{x_{−i}}(ζ^{r_y}_t) = ∏_{j≠i} p^j[x_j] for all x_{−i} ∈ X_{−i}    (16)

whenever ζ^{r_y} is an infinite sequence.

Define L_y(T) = #{t ∈ ℕ : 0 ≤ t ≤ T − 1, r_y(ζ[t]) = 1} and ζ^y = ζ^{r_y}. Let

E^1 = {y ∈ X_i : lim_{T→∞} L_y(T) = ∞} and E^2 = {y ∈ X_i : lim_{T→∞} L_y(T) < ∞}.

For each y ∈ E^2, let B_y = lim_{T→∞} L_y(T) and let C_y = ∑_{t=0}^{B_y−1} h_i(y; ζ^y_t). On the other hand, for any y ∈ E^1, because ζ satisfies (16),

lim_{T→∞} (1/T) ∑_{t=0}^{T−1} h_i(y; ζ^y_t) = lim_{T→∞} (1/T) ∑_{t=0}^{T−1} ∑_{x_{−i}∈X_{−i}} c_{x_{−i}}(ζ^y_t) h_i(y; x_{−i}) = h_i(y; p^{−i}) ≤ h_i(p).
I claim that for any ε > 0, there is some T′ such that T > T′ implies

(1/T) ∑_{t=0}^{T−1} h_i(α_i(ζ[t]); ζ_t) ≤ h_i(p) + ε.    (17)

Fix some ε > 0. Let T_1 be so large that T > T_1 implies that, for all y ∈ E^1, (1/T) ∑_{t=0}^{T−1} h_i(y; ζ^y_t) ≤ h_i(p) + ε/(2|X_1 × ... × X_N|) and, for all y ∈ E^2, L_y(T) = B_y and C_y/T < ε/(2|X_1 × ... × X_N|). Let T′ be so large that, for all y ∈ E^1, L_y(T′) > T_1 and h_i(p) ∑_{y∈E^1} L_y(T)/T ≤ h_i(p) + ε/2 for all T > T′. If T > T′, then

(1/T) ∑_{t=0}^{T−1} h_i(α_i(ζ[t]); ζ_t) = ∑_{y∈E^1} (L_y(T)/T) (1/L_y(T)) ∑_{t=0}^{L_y(T)−1} h_i(y; ζ^y_t) + ∑_{y∈E^2} (1/T) ∑_{t=0}^{L_y(T)−1} h_i(y; ζ^y_t)
 ≤ ∑_{y∈E^1} (L_y(T)/T) (h_i(p) + ε/(2|X_1 × ... × X_N|)) + ∑_{y∈E^2} ε/(2|X_1 × ... × X_N|) ≤ h_i(p) + ε.

Notice that L_y is weakly increasing and L_y(T) ≤ T for all T; thus T > T′ implies that L_y(T) ≥ L_y(T′) > T_1 for each y ∈ E^1, and so T > T_1. This proves (17), which implies (15).
References

[1] Chung, K.-L. (1968). A Course in Probability Theory. Harcourt, Brace and World, Inc.

[2] Hu, T.-W. (2012). "Complexity and Mixed Strategy Equilibria." Working paper.

[3] Martin-Löf, P. (1966). "The Definition of Random Sequences." Information and Control, vol. 9, pp. 602-619.

[4] Nies, A. (2009). Computability and Randomness. Oxford University Press.

[5] Zvonkin, A. K. and L. A. Levin (1970). "The Complexity of Finite Objects and the Basing of the Concepts of Information and Randomness on the Theory of Algorithms." Uspehi Mat. Nauk, vol. 25, pp. 85-127.