Best response dynamic in a multitask environment
Ryoji Sawa∗1 and Dai Zusai†2

1 Center for Cultural Research and Studies, University of Aizu
2 Department of Economics, Temple University

April 15, 2015

Preliminary1
Abstract. We formulate the best response dynamic in a multitasking environment: while agents engage in two separate games concurrently, an agent can switch her action in only one of them upon receipt of a revision opportunity. The choice of the game in which to revise makes the multitasking dynamic behave significantly differently from the two separate dynamics, so the transition of the action distribution in each game may look inconsistent with incentives if the endogenous choice of which game to focus on is ignored. Despite such complexity in the transitory phase of the dynamic, we verify that, in a wide class of games, global stability of equilibria can be predicted from stability under the separate dynamics.

Keywords: Evolution; Multitasking; Bounded rationality; Cognitive load; Best response dynamics; Evolutionarily stable state

JEL Classification Numbers: C72, C73, D03
∗ Address: Tsuruga, Ikki-machi, Aizu-Wakamatsu City, Fukushima 965-8580, Japan; telephone: +81-242-37-2500; e-mail: [email protected].
† Address: 1301 Cecil B. Moore Ave., RA 873 (004-04), Philadelphia, PA 19122, U.S.A.; telephone: +1-215-204-1762; e-mail: [email protected].
1 For the most recent version, visit http://sites.temple.edu/zusai/research/multiBRD/
1 Introduction
Multitasking has become prevalent nowadays. In the ethnographic field study of González and Mark (2005), an average office worker deals with 12 distinct tasks on an average day and switches tasks roughly every 10 minutes. Lindbeck and Snower (2000) document the structural change of workplace organizations from Tayloristic organizations to Holistic organizations, finding causes not only in the development of information technology but also in the need for flexibility in job assignments.2 On the other hand, cognitive scientists warn of a decrease in individual productivity due to loss of attention: in multitasking environments, individual responses become slower and involve more errors.3
From the game-theoretic perspective, this leads to the question of whether the cognitive limitation on prompt decisions in multitasking eventually results in long-run outcomes different from those of an array of separate tasks. To answer this question, we formulate the best response dynamic (BRD) in multitasking environments. In the BRD, an agent occasionally receives an opportunity to revise his strategy so as to myopically maximize his payoff. But in our dynamic, a revising agent can switch his action in only one task, while concurrently working on two tasks (games).4 He can optimize his action only in this task, leaving his action in the other task unchanged. We allow an agent to choose in which task to optimize his action; the agent changes his action in the task where the optimal revision yields a greater payoff improvement than optimization in the other task.
Because an agent's decision at a revision opportunity involves not only a (partial) revision of his strategy but also a choice of task, the dynamic of the strategy distribution in the multitasking game cannot be decomposed into the dynamics of the action distributions in the individual tasks, even when the payoff in the multitasking game is just the sum of the payoffs in the two tasks and the payoff in each task is determined independently of the other task.
Despite this complexity in dynamic decisions, we prove that, in a wide class of games, stability of a Nash equilibrium under the multitasking dynamic is wholly identified by examining stability under the dynamic of actions in each task, as if agents engaged in only a single task. The applicable class of games includes potential games such as congestion games and any binary games, contractive games such as two-player zero-sum games and wars of attrition, and games with (Taylor's regular) evolutionarily stable states.
2 Boucekkine and Crifo (2008) also conclude that multitasking is a steady trend in developed countries, and further develop a macroeconomic model to relate organizational change with human capital accumulation.
3 See Appelbaum, Marchionni, and Fernandez (2008) and Spink, Cole, and Waller (2008) for extensive surveys on multitasking, cognitive ability, and work performance in cognitive science, psychology, and organization science.
4 It is straightforward to extend our model and results to the multitasking of any finitely many games.
But in the short run, the aggregate transition of the action distribution in each task may look inconsistent with incentives: when we focus on one task, the decay of a worse suboptimal action may be slower than that of a better suboptimal action. So the dynamic in the multitasking environment may appear irrational if the other task is ignored.
In sum, we are looking at cognitive limitations on rational behavior when individuals deal with distinctly different tasks, motivated by organization science and occupational psychology. There is another strand of game-theoretic study on the concurrent play of multiple games, which examines the spillover of similar choices across different but similar games through analogy between games. Mengel (2012) formalizes a learning dynamic in which an agent categorizes different games by similarity.5 Several experimental studies test behavioral spillovers across games: see Grimm and Mengel (2012) and Bednar, Chen, Liu, and Page (2012). Our paper contrasts with this research by treating different games as "substitutes" in the allocation of cognitive resources, while they see them as "complements" in rational decision making.
In the context of biological multi-gene evolution, Cressman, Gaunersdorfer, and Wen (2000) and Hashimoto (2006) study the replicator dynamic over two distinctly different games. Unlike in ours, an agent (a species) revises two actions (genes) at once upon receiving a single revision opportunity and then replicates a better pair of genes. The replicator dynamic is defined over pairs of the two genes; as the total payoff affects the rate of reproduction of each pair, the dynamic is not separable even when the payoff of each gene is independent of the other gene. As a result, both papers report interesting counter-intuitive examples in which a stable equilibrium in a separate environment becomes unstable in the multiple-game environment, even if the payoff in one game is infinitesimally small compared to that in the other.
The paper proceeds as follows. In the next section, we formulate a multitasking game and the multitasking best response dynamic. In Section 3, we verify basic properties of the dynamic, such as stationarity of Nash equilibria and positive correlation between the change in each strategy's mass of players and the strategy's payoff. The section continues with global stability of Nash equilibria in potential games and contractive games and local stability of regular ESSs. Finally, we present an example of counter-intuitive transitory aggregate behavior and argue that payoff monotonicity may not hold under the multitasking dynamic. Lengthy proofs are provided in the Appendix.
5 See also Bednar and Page (2007).
2 Model

2.1 Players, games and payoffs
We consider a population of anonymous agents playing two strategic form games,
Games 1 and 2. The population is a unit mass of agents with the same action set and
the same payoff function. Let $\mathcal{A}^i = \{1, \dots, A^i\}$ denote the set of actions in Game $i$. The set of action profiles over the two games is given by $\mathcal{A} = \mathcal{A}^1 \times \mathcal{A}^2$. Let $A = A^1 A^2$. Denote by $x^i$ an arbitrary distribution of actions in Game $i$:
$$x^i \in X^i \equiv \Big\{ y^i \in \mathbb{R}_+^{A^i} \;\Big|\; \sum_{a^i \in \mathcal{A}^i} y^i_{a^i} = 1 \Big\}.$$
Let $F^i : X^i \to \mathbb{R}^{A^i}$ denote the payoff function of Game $i$, and $F^i_a(x^i)$ the payoff from action $a$ given $x^i$. We assume that $F^i$ is continuously differentiable.
We assume that each agent plays one action in each game; an agent chooses some $a = (a^1, a^2) \in \mathcal{A}$. The state of the population is described by the distribution of action profiles, i.e.,
$$x \in X \equiv \Big\{ y \in \mathbb{R}_+^{A} \;\Big|\; \sum_{a \in \mathcal{A}} y_a = 1 \Big\}.$$
$TX$ denotes the tangent space of $X$, i.e., $TX = \{y \in \mathbb{R}^A \mid \sum_{a \in \mathcal{A}} y_a = 0\}$. $TX^i$ is defined similarly for $i \in \{1,2\}$.
Let $\mathrm{I}^i$ denote the linear mapping $\Delta\mathcal{A} \to \Delta\mathcal{A}^i$ such that $\mathrm{I}^i x$ is the aggregate strategy distribution in Game $i$ given $x$. The $a^i$-th coordinate of $\mathrm{I}^i x \in \mathbb{R}^{A^i}$, i.e., $(\mathrm{I}^i x)_{a^i} = \sum_{\hat a^j \in \mathcal{A}^j} x_{a^i \hat a^j}$, represents the mass of agents who take action $a^i$ in Game $i$.
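For concreteness, in the smallest case $A^1 = A^2 = 2$, writing $x = (x_{11}, x_{12}, x_{21}, x_{22})$, the marginal operator of Game 1 can be written as the matrix
$$\mathrm{I}^1 = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}, \qquad (\mathrm{I}^1 x)_{a^1} = x_{a^1 1} + x_{a^1 2}.$$
Note that $(\mathrm{I}^1, \mathrm{I}^2)$ is many-to-one: distinct joint distributions sharing the same marginals are projected to the same pair $(x^1, x^2)$, a fact that becomes important in Section 3.4.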
An agent's payoff from action profile $a = (a^1, a^2)$ is a weighted sum of the payoffs from the two games:
$$F_{a^1 a^2}(x) \equiv w^1 F^1_{a^1}(\mathrm{I}^1 x) + w^2 F^2_{a^2}(\mathrm{I}^2 x),$$
where $w^i \in \mathbb{R}_+$ for $i = 1,2$. In what follows, we let $w^1 = w^2 = 1$. Note that this does not pose any restriction: if $w^i \neq 1$, we simply redefine the payoff functions as $\hat F^1(x^1) = w^1 F^1(x^1)$ and $\hat F^2(x^2) = w^2 F^2(x^2)$.
The following derivative formula is immediate from additive separability of the payoffs.

Lemma 2.1. $DF(x) = (\mathrm{I}^1)' DF^1(\mathrm{I}^1 x)\,\mathrm{I}^1 + (\mathrm{I}^2)' DF^2(\mathrm{I}^2 x)\,\mathrm{I}^2$.
Define the sets of pure-strategy best responses and mixed-strategy best responses to population state $x^i$ in Game $i = 1,2$ as
$$b^i(x^i) \equiv \operatorname*{argmax}_{a^i \in \mathcal{A}^i} F^i_{a^i}(x^i), \qquad B^i(x^i) \equiv \operatorname*{argmax}_{y^i \in X^i} F^i(x^i) \cdot y^i.$$
Similarly, we define the sets of pure-strategy and mixed-strategy best responses to $x$ in the combined game $F$ as
$$b(x) \equiv \operatorname*{argmax}_{a \in \mathcal{A}} F_a(x), \qquad B(x) \equiv \operatorname*{argmax}_{y \in X} F(x) \cdot y.$$
In a Nash equilibrium, almost all agents take a best response to the current population state. That is, the sets of Nash equilibria in Game $i$, $F^i$, and in the combined game are defined as
$$NE(F^i) \equiv \big\{ x^i = (x^i_{a^i})_{a^i \in \mathcal{A}^i} \in X^i \;\big|\; x^i_{a^i} > 0 \Rightarrow a^i \in b^i(x^i) \big\} \quad \text{for each } i = 1,2,$$
$$NE(F) \equiv \{ x = (x_a)_{a \in \mathcal{A}} \in X \mid x_a > 0 \Rightarrow a \in b(x) \}.$$
Notice that $a^i \in b^i(x^i)$ is equivalent to $\breve F^i_{a^i}(x^i) = 0$, where $\breve F^i_{a^i}$ is the payoff deficit of action $a^i$ relative to a best response (formally defined in Section 2.2); $a = (a^1, a^2) \in b(x)$ is equivalent to $\breve F^1_{a^1}(\mathrm{I}^1 x) + \breve F^2_{a^2}(\mathrm{I}^2 x) = 0$, and thus equivalent to $\breve F^1_{a^1}(\mathrm{I}^1 x) = \breve F^2_{a^2}(\mathrm{I}^2 x) = 0$. Hence,
$$a = (a^1, a^2) \in b(x) \iff a^i \in b^i(\mathrm{I}^i x) \text{ for all } i = 1,2. \tag{1}$$
This observation leads to the following theorem. See the Appendix for the complete proof.

Theorem 2.2. $x \in NE(F)$ if and only if $\mathrm{I}^i x \in NE(F^i)$ for all $i = 1,2$.
2.2 Dynamic

We consider a dynamic learning process in which an agent who receives a revision opportunity switches her action in only one game. In other words, she cannot switch her actions in the two games simultaneously.
Define
$$\breve F^i_{a^i}(x^i) = \max_{b^i \in \mathcal{A}^i} F^i_{b^i}(x^i) - F^i_{a^i}(x^i) \quad \text{for } a^i \in \mathcal{A}^i.$$
$\breve F^i_{a^i}(x^i)$ is the payoff 'deficit' of action $a^i$ compared to the best response in Game $i$. The payoff deficits play an important role when an agent can switch in only one game at a single revision opportunity, as the agent should compare the payoff deficits in the two games and use the revision opportunity to eliminate the greater one. For example, if $\breve F^1_{a^1}(\mathrm{I}^1 x) > \breve F^2_{a^2}(\mathrm{I}^2 x)$, a player playing $(a^1, a^2)$ will switch to $(b^1, a^2)$ with $b^1 \in b^1(\mathrm{I}^1 x)$, leaving his action in Game 2 unchanged.
The action profile $a = (a^1, a^2)$ cannot switch directly to a profile in $b(x)$ at once; the switch to the best response can happen in only one of the two games. This leads us to define the modified best responses as follows:
$$\hat b_{(a^1,a^2)}(x^1, x^2) \equiv \begin{cases} b^i(x^i) \times \{e^{-i}_{a^{-i}}\} & \text{if } \breve F^i_{a^i}(x^i) > \breve F^{-i}_{a^{-i}}(x^{-i}), \\[2pt] \big(b^1(x^1) \times \{e^2_{a^2}\}\big) \cup \big(\{e^1_{a^1}\} \times b^2(x^2)\big) & \text{if } \breve F^1_{a^1}(x^1) = \breve F^2_{a^2}(x^2); \end{cases}$$
$$\hat B_a(x^1, x^2) \equiv \begin{cases} B^i(x^i) \times \{e^{-i}_{a^{-i}}\} & \text{if } \breve F^i_{a^i}(x^i) > \breve F^{-i}_{a^{-i}}(x^{-i}), \\[2pt] \operatorname{conv}\big(B^1(x^1) \times \{e^2_{a^2}\},\ \{e^1_{a^1}\} \times B^2(x^2)\big) & \text{if } \breve F^1_{a^1}(x^1) = \breve F^2_{a^2}(x^2). \end{cases}$$
Here $e^i_a \in \mathbb{R}^{A^i}$ denotes the unit vector whose $a$-th element is one. As our dynamic shows, one of our difficulties is that the dynamic depends not only on the payoffs in the two games but also on the difference in the payoff deficits between the two games.
The dynamic is expressed as
$$\dot x \in \sum_{a = (a^1, a^2) \in \mathcal{A}} x_a\, \hat B_a(\mathrm{I}^1 x, \mathrm{I}^2 x) - x \equiv V_M(x).$$
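As a computational companion to this definition, the following is a minimal Euler-discretized sketch of the dynamic, assuming random matching in two symmetric games (so that $F^i(x^i) = A_i x^i$ for payoff matrices $A_1, A_2$). The function name, the choice of a single pure best response per game, and the tie-breaking toward Game 1 are our illustrative assumptions; the differential inclusion above permits any selection from $\hat B_a$ at ties.

    import numpy as np

    def multitask_brd_step(x, A1, A2, dt=0.01):
        """One Euler step of xdot = sum_a x_a * bhat_a(I1 x, I2 x) - x,
        where x is an (n1, n2) array of masses over profiles (a1, a2)."""
        x1, x2 = x.sum(axis=1), x.sum(axis=0)        # marginals I1 x, I2 x
        F1, F2 = A1 @ x1, A2 @ x2                    # payoff vectors F1, F2
        d1, d2 = F1.max() - F1, F2.max() - F2        # payoff deficits
        b1, b2 = int(F1.argmax()), int(F2.argmax())  # one best response per game
        xdot = -x.copy()                             # outflow from every profile
        for a1 in range(x.shape[0]):
            for a2 in range(x.shape[1]):
                if d1[a1] >= d2[a2]:                 # greater deficit in Game 1
                    xdot[b1, a2] += x[a1, a2]        # revise only the Game-1 action
                else:                                # greater deficit in Game 2
                    xdot[a1, b2] += x[a1, a2]        # revise only the Game-2 action
        return x + dt * xdot

Iterating this step from any interior state traces a discretized solution path; note that mass is conserved, since every profile's outflow reappears at exactly one adjusted profile.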
3 Results

3.1 Fundamental properties
In the standard BRD for a single task, a suboptimal strategy loses its players at a constant rate, regardless of the payoff deficit.6 So does a suboptimal pair of actions $a \notin b(x)$ in the multitasking BRD. But, in each individual game $i = 1,2$, the mass of players of a suboptimal action $a^i \notin b^i(\mathrm{I}^i x)$ may not decrease, though it can never increase.

Lemma 3.1 (Weak best response property). At any $x \in X$, $a^{i\prime} \notin b^i(\mathrm{I}^i x)$ implies
$$0 \ge (\mathrm{I}^i \dot x)_{a^{i\prime}} \ge -(\mathrm{I}^i x)_{a^{i\prime}} \quad \text{for any } \dot x \in V_M(x).$$

Note that $(\mathrm{I}^i \dot x)_{a^{i\prime}}$ can be zero, especially if the payoff deficit in the other game is greater for all the players of this action $a^{i\prime}$.
6 Zusai (2014) proposes the tempered best response dynamic, in which the rate of decay of a suboptimal strategy increases with the payoff deficit of the strategy.
Yet, we can retain the stationarity of Nash equilibria and a positive correlation between the change in the mass of each multitasking strategy and the strategy's payoff.

Theorem 3.2 (Nash stationarity). $x \in NE(F) \iff 0 \in V_M(x)$.

Theorem 3.3 (Positive correlation). For all $z \in V_M(x)$, we have
$$F(x) \cdot z \ge 0; \qquad F(x) \cdot z > 0 \iff x \notin NE(F).$$

3.2 Potential games
Potential games
A population game F : X → R A is called a potential game if the domain is extended
to R A (or a neighborhood of X ) and there is a scalar-valued continuously differentiable
function f : R A → R whose gradient vector always coincides with the payoff vector: for
all x ∈ X , f satisfies
∂f
(x) = Fa (x) for all a ∈ A ,
∂x a
The class of potential games includes random matching in symmetric games, binary
choice games and standard congestion games. The potential function f works as a Lyapunov function in a wide range of evolutionary dynamics: replicator, BRD, etc.: see Sandholm (2001).
The next lemma shows that the multitask game $F$ is a potential game if and only if the underlying games $F^1$ and $F^2$ are potential games.

Lemma 3.4. $F$ is a potential game if and only if both $F^1$ and $F^2$ are potential games.

Theorem 3.5. Consider the multitask best response dynamic in potential games $F^i : X^i \to \mathbb{R}^{A^i}$ ($i = 1,2$) with potential functions $f^i$. Then the set of Nash equilibria $NE(F)$ is globally attracting. Moreover, each local maximizer of $f$ is Lyapunov stable under the multitask BRD.
3.3 Contractive games

In this section, we consider contractive games, which are known as a class of games whose Nash equilibria form a single convex set. Several important classes of games, for example zero-sum games and wars of attrition, fall into this category.

Definition 3.6. A population game $F : X \to \mathbb{R}^A$ is a contractive game if7
$$(y - x)'\big(F(y) - F(x)\big) \le 0 \quad \forall x, y \in X.$$
If the inequality holds strictly whenever $x \neq y$, then we say that $F$ is strictly contractive.

7 See Sandholm (2013). Hofbauer and Sandholm (2009) call it a stable game. It is also called a negative semidefinite game.
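A standard example, not specific to our model: under random matching in a symmetric zero-sum game, $F(x) = Ax$ with $A' = -A$, so for all $x, y \in X$,
$$(y - x)'\big(F(y) - F(x)\big) = (y - x)' A (y - x) = 0,$$
since $z'Az = 0$ for any antisymmetric matrix $A$; hence such a game is (weakly) contractive.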
Hofbauer and Sandholm (2009) show that contractive games with differentiable payoffs are exactly the games satisfying self-defeating externalities:

Theorem 3.7 (Hofbauer and Sandholm (2009)). Suppose that $F$ is $C^1$. $F$ is a contractive game if and only if $z' DF(x) z \le 0$ for all $x \in X$ and $z \in TX$.
Like a potential game, the multitask game $F$ is contractive if and only if each component game $F^i$ is contractive.

Lemma 3.8. Suppose that $F^i : X^i \to \mathbb{R}^{A^i}$ ($i = 1,2$) is $C^1$. $F$ is a contractive game if and only if $F^i$ is a contractive game for each $i \in \{1,2\}$.
It is known that the sum of the payoff gains that revising agents obtain from their revisions works as a Lyapunov function in a contractive game under evolutionary dynamics based on myopic optimization (see Hofbauer and Sandholm (2009) for the BRD and perturbed BRD, and Zusai (2014) for the tempered BRD).

We start building the Lyapunov function for the multitasking BRD by applying the same idea to this dynamic. When an agent switches from action profile $a = (a^1, a^2) \in \mathcal{A}$ to $\tilde a \in \mathcal{A}$ in social state $x$, the payoff changes by $F_{\tilde a}(x) - F_a(x) =: g_{a\tilde a}(x)$. In the multitask BRD, the new action profile is chosen only from the subset $\mathcal{A}_a := \{a^1\} \times \mathcal{A}^2 \cup \mathcal{A}^1 \times \{a^2\} \subset \mathcal{A}$ and should achieve the largest non-negative payoff difference; let
$$g^*_a(x) := \max\{ g_{a\tilde a}(x) \mid \tilde a \in \mathcal{A}_a \}.$$
The set of maximizers is $\hat b_a(x)$. Notice that $\sum_{i=1,2} \breve F^i_{a^i}(x) \ge g^*_a(x) \ge \breve F^i_{a^i}(x)$ for each $i = 1,2$; indeed, $g^*_a(x) = \max\{\breve F^1_{a^1}(x), \breve F^2_{a^2}(x)\}$, since a one-game switch can at best eliminate the deficit in that game. The payoff increase function $g^*_a$ has some good properties, listed below.
Lemma 3.9. For all $a \in \mathcal{A}$, $g^*_a(\cdot)$ satisfies the following properties.

(P1) Consider a Carathéodory solution $\{x_t\}$. Then $g^*_a(x_t)$ is Lipschitz continuous in $t$, and for almost all $t \in [0, \infty)$ we have $\dot g^*_a(x_t) = \dot g_{ab}(x_t)$ for any $b \in \hat b_a(x_t)$.

(P2) $g^*_a(x) = (y_a - e_a)' F(x)$ for any $y_a \in \hat B_a(x)$ and $x \in X$.

(P3) If $b \in \hat b_a(x)$, then $g^*_b(x) = \max_{\tilde a \in \mathcal{A}} F_{\tilde a}(x) - \max\{F_{\tilde a}(x) \mid \tilde a \in \mathcal{A}_a\} \le g^*_a(x)$.

(P3′) Let $b \in \hat b_a(x)$. If $\mathrm{I}^i x \in B^i(\mathrm{I}^i x)$ and $\mathrm{I}^j x \notin B^j(\mathrm{I}^j x)$ for $i \neq j$, then $g^*_b(x) < g^*_a(x)$ for $a \notin b(x)$.
The sum of payoff increases is
$$G(x) := \sum_{a \in \mathcal{A}} x_a g^*_a(x),$$
which is indeed equal to $z' F(x)$ for any $z \in V_M(x)$ by Lemma 3.9 (P2). Lemma 3.9 (P1) guarantees that $G(x_t)$ is Lipschitz continuous in $t$. The next lemma verifies that $G$ is a Lyapunov function of the multitask BRD in a contractive game. As an upper bound for $\dot G$, we define the function
$$\tilde G(x) := \max_{\tilde a \in \mathcal{A}} F_{\tilde a}(x) + x' F(x) - 2 \sum_{c \in \mathcal{A}} x_c \max\{F_{\tilde a}(x) \mid \tilde a \in \mathcal{A}_c\}$$
and show that it is nonpositive.
Lemma 3.10. At any $x \in X$, the following hold.

i) 1) $G(x) \ge 0$; 2) $G(x) = 0 \iff x \in NE(F)$, i.e., $G^{-1}(0) = NE(F)$.

ii) 1) $\sum_{a \in \mathcal{A}} z_a g^*_a(x) = \tilde G(x)$ for any $z = (z_a)_{a \in \mathcal{A}} \in V_M(x)$. Further, 2) $\tilde G(x) \le 0$; 3) $x \in NE(F) \Rightarrow \big[\forall c \in \mathcal{A}:\ x_c > 0 \Rightarrow g^*_c(x) = g^*_b(x)\ \forall b \in \hat b_c(x)\big] \iff \tilde G(x) = 0$.

Let $\{x_t\}$ be a Carathéodory solution under the multitasking BRD. Then it satisfies the following.

iii) For almost all $t \ge 0$, we have
$$\dot G(x_t) = \tilde G(x_t) + \dot x_t \cdot DF(x_t) \dot x_t.$$
Theorem 3.11. Consider the multitask best response dynamic in contractive games $F^i : X^i \to \mathbb{R}^{A^i}$ ($i = 1,2$). Then the set of Nash equilibria $NE(F)$ is Lyapunov stable.

Proof. By Lemma 3.8, the combined game $F$ is also a contractive game. So the equation in part iii) of Lemma 3.10 reduces to
$$\dot G(x) = \tilde G(x) + \dot x \cdot DF(x) \dot x \le \tilde G(x).$$
Therefore, combining this with parts i) and ii) of Lemma 3.10, we can apply a Lyapunov stability theorem8 such as Theorem A.3 of Hofbauer and Sandholm (2009) to establish Lyapunov stability of Nash equilibria in a contractive game.

8 See also Theorem 7.B.2 of Sandholm (2010b).
For asymptotic stability, we need $\tilde G^{-1}(0) = NE(F)$; see the theorem below.9 But $\tilde G$ can be zero outside $NE(F)$, especially if the payoff deficits in the two individual games are equal to each other for almost all agents; then, even after revising an action in either one task, the payoff increase $g$ remains at the same level. According to part ii-3) of Lemma 3.10, this implies $\tilde G = 0$. Further, if the multitasking game is only weakly contractive and there exists a direction $z \in TX$ such that $z \cdot DF(x) z = 0$ at such a state $x$, then $\dot G(x) = 0$ if $\dot x$ points in this direction.
Theorem 3.12 (Zusai, 2014: Theorem 7). Let $A$ be a closed subset of a compact space $X$ and $A'$ be a neighborhood of $A$. Suppose two continuous functions $W : X \to \mathbb{R}$ and $\tilde W : X \to \mathbb{R}$ satisfy (i) $W(x) \ge 0$ and $\tilde W(x) \le 0$ for all $x \in X$, and (ii) $W^{-1}(0) = \tilde W^{-1}(0) = A$. In addition, assume $W$ is Lipschitz continuous in $x \in X$ with Lipschitz constant $K \in (0, \infty)$. If any Carathéodory solution $\{x_t\}$ starting from $A'$ satisfies
$$\dot W(x_t) \le \tilde W(x_t) \quad \text{for almost all } t \in [0, \infty), \tag{2}$$
then $A$ is asymptotically stable and $A'$ is its basin of attraction.

9 The assumption in this theorem is weaker than that of Theorem A.3 of Hofbauer and Sandholm (2009).
To obtain asymptotic stability, the whole multitasking game needs to be strictly contractive. Then we can refine $\tilde G$ to $\bar G : X \to \mathbb{R}$ as below so as to obtain $\bar G^{-1}(0) = NE(F)$ and apply the above Lyapunov stability theorem:10
$$\bar G(x) = \tilde G(x) + \max_{z \in V_M(x)} z \cdot DF(x) z.$$

10 $V_M(x)$ is closed since $B^i(x^i)$ is closed for $i \in \{1,2\}$.

Theorem 3.13. Consider a strictly contractive game $F : X \to \mathbb{R}^A$. Then the set of Nash equilibria $NE(F)$ is asymptotically stable under the multitask best response dynamic.
3.4 Regular ESS

Let $x^{i*}$ be a regular (Taylor) evolutionarily stable state in Game $i \in \{1,2\}$: that is, i) $x^{i*}$ is a quasi-strict equilibrium:
$$F^i_{b^i}(x^{i*}) = F^i_*(x^{i*}) > F^i_{a^i}(x^{i*}) \quad \text{whenever } x^{i*}_{b^i} > 0 = x^{i*}_{a^i};$$
and ii) $DF^i(x^{i*})$ is negative definite with respect to $TX^i \cap \mathbb{R}^{A^i}_{S^i}$:
$$z^i \cdot DF^i(x^{i*}) z^i < 0 \quad \text{for all } z^i \in (TX^i \cap \mathbb{R}^{A^i}_{S^i}) \setminus \{0\}.$$
Here $S^i$ is the support of $x^{i*}$, i.e., $S^i := \{b^i \in \mathcal{A}^i \mid x^{i*}_{b^i} > 0\}$, and $\mathbb{R}^{A^i}_{S^i} := \{z^i \in \mathbb{R}^{A^i} \mid z^i_{b^i} > 0 \Rightarrow b^i \in S^i\}$. Notice that a regular ESS is an isolated Nash equilibrium (Bomze and Weibull, 1995) in the sense that $x^{i*}$ is the only Nash equilibrium of $F^i$ in a small enough neighborhood $O^i_0 \subset X^i$ of it. Let $U^i := \mathcal{A}^i \setminus S^i = \{a^i \in \mathcal{A}^i \mid x^{i*}_{a^i} = 0\}$.

The negative definiteness condition means that the game $F^i$ is strictly contractive locally around the quasi-strict equilibrium $x^{i*}$ in the reduced state space where any action unused at $x^{i*}$ is kept unused. Sandholm (2010a) proves local stability of a regular ESS under major evolutionary dynamics.

As $(\mathrm{I}^1, \mathrm{I}^2)$ is not a one-to-one mapping, there are continuously many multitasking states $x \in X$ that are projected onto the regular ESS pair $(x^{1*}, x^{2*})$. Denote by $X^* \subset X$ the set of such multitasking states and call it the regular ESS component:
$$X^* := \{x^* \in X \mid \mathrm{I}^1 x^* = x^{1*} \text{ and } \mathrm{I}^2 x^* = x^{2*}\}.$$
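To see the multiplicity concretely, consider the smallest case $A^1 = A^2 = 2$ with marginals $x^{1*} = (p, 1-p)$ and $x^{2*} = (q, 1-q)$ (our illustrative parametrization): the states in $X^*$ form the one-parameter family
$$x_{11} = pq + \rho, \quad x_{12} = p(1-q) - \rho, \quad x_{21} = (1-p)q - \rho, \quad x_{22} = (1-p)(1-q) + \rho,$$
indexed by any correlation parameter $\rho$ that keeps all four coordinates nonnegative.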
Lemma 3.14. There is a neighborhood $\tilde O \subset X$ of $X^*$ and constants $C^1, C^2 \in \mathbb{R}_{++}$ such that, at any $x \in \tilde O$ and for each $i = 1,2$:

i) $a^i \in U^i \ \Rightarrow\ a^i \notin b^i(\mathrm{I}^i x)$;

ii) $(a^i, b^{-i}) \in U^i \times S^{-i} \ \Rightarrow\ \hat b_{(a^i, b^{-i})}(\mathrm{I}^1 x, \mathrm{I}^2 x) \subset S^i \times \{b^{-i}\}$;

iii) $x \in NE(F)$ if and only if $x \in X^*$;

iv) $DF^i(\mathrm{I}^i x)$ is negative definite with respect to $TX^i \cap \mathbb{R}^{A^i}_{S^i}$;

v) $(\mathrm{I}^i \dot x) \cdot DF^i(\mathrm{I}^i x) \mathrm{I}^i \dot x \le (\Phi^i \dot x) \cdot DF^i(\mathrm{I}^i x) \Phi^i \dot x + C^i \sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i}$ for any $\dot x \in V_M(x)$, where $\Phi^i$ is the orthogonal projection matrix of $\mathbb{R}^A$ onto $TX^i \cap \mathbb{R}^{A^i}_{S^i}$;11

vi) $\sum_{a^i \in U^i} (\mathrm{I}^i \dot x)_{a^i} + \sum_{a \in U^1 \times U^2} \dot x_a \le -\sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i}$ for any $\dot x \in V_M(x)$.
For local stability of the regular ESS component $X^*$, we modify the functions $G, \tilde G$ to functions $G^*, \bar G^* : \tilde O \to \mathbb{R}$ defined as
$$G^*(x) := G(x) + \sum_{i=1}^{2} (C^i + 1) \Big( \sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i} + \sum_{a \in U^1 \times U^2} x_a \Big),$$
$$\bar G^*(x) := \tilde G(x) + \max_{z \in V_M(x)} \sum_{i=1}^{2} (\Phi^i z) \cdot DF^i(\mathrm{I}^i x) \Phi^i z - \sum_{i=1}^{2} \sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i}$$
for each $x \in \tilde O$. We verify that $G^*$ is a local strict Lyapunov function with $\bar G^*$ being an upper bound on $\dot G^*$, so that we can apply Theorem 3.12 to establish asymptotic stability of $X^*$.

11 That is, $z \in \mathbb{R}^A$ is projected to $\Phi^i z \in TX^i \subset \mathbb{R}^{A^i}$ such that $(\Phi^i z)_{b^i} = (\mathrm{I}^i z)_{b^i} - \big(\sum_{b^{i\prime} \in S^i} (\mathrm{I}^i z)_{b^{i\prime}}\big)/\#S^i$ for each $b^i \in S^i$ and $(\Phi^i z)_{a^i} = 0$ for each $a^i \in \mathcal{A}^i \setminus S^i = U^i$.

Theorem 3.15. Suppose that $X^*$ is a regular ESS component. Then $X^*$ is (locally) asymptotically stable under the multitask BRD in $F$.
4 Discussion

4.1 Comparison to single-task environments

In this section, we illustrate the differences between our model and single-task environments by means of examples.
It is well known that many evolutionary dynamics, including the replicator dynamic and perturbed best response dynamics, satisfy payoff monotonicity.12 It is defined as follows.

Definition 4.1. A dynamic is said to satisfy payoff monotonicity if any solution path satisfies
$$F_a(x_t) < F_b(x_t) \implies \frac{\dot x^t_a}{x^t_a} < \frac{\dot x^t_b}{x^t_b}.$$
We say a dynamic satisfies weak payoff monotonicity if the consequent holds with a weak inequality.
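For instance, the replicator dynamic $\dot x_a = x_a(F_a(x) - \bar F(x))$, with average payoff $\bar F(x) = x \cdot F(x)$, satisfies payoff monotonicity, since
$$\frac{\dot x_a}{x_a} - \frac{\dot x_b}{x_b} = F_a(x) - F_b(x).$$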
If we focus on one of the two games in a multitask environment, then we may not necessarily observe payoff monotonicity. The next example illustrates this.

Example 1. Suppose agents play a multitask environment with the two symmetric normal form games in Table 1, under random matching in each game. Suppose that $x_{a^1 a^2} = .5$, $x_{b^1 b^2} = .25$ and $x_{c^1 c^2} = .25$. Note that, given this population state, $F^1_{a^1} = F^2_{a^2} = 2.5$, $F^1_{b^1} = .75$, $F^2_{b^2} = 1$, $F^1_{c^1} = .5$ and $F^2_{c^2} = .25$.

Agents playing $(b^1, b^2)$ would switch their strategy in Game I if they were to receive a revision opportunity, since their payoff deficit in Game I, $2.5 - .75 = 1.75$, exceeds their deficit in Game II, $2.5 - 1 = 1.5$. So the fraction of agents playing $b^1$ will decrease. Meanwhile, agents playing $(c^1, c^2)$ would switch theirs in Game II if they were to revise, since their deficit in Game I, $2.5 - .5 = 2$, falls short of their deficit in Game II, $2.5 - .25 = 2.25$. The fraction of agents playing $c^1$ will thus be unchanged.

As a consequence, we will observe that
$$\frac{\dot x^t_{b^1}}{x^t_{b^1}} < 0 = \frac{\dot x^t_{c^1}}{x^t_{c^1}},$$
while we have $F^1_{b^1}(x_t) > F^1_{c^1}(x_t)$.

12 In the BRD, the numbers of agents playing suboptimal actions decrease at the same rate. So it satisfies weak payoff monotonicity.
Game I (P1 chooses a row, P2 a column):

         a1       b1       c1
  a1    5, 5     0, 0     0, 0
  b1    0, 0     3, 3     0, 0
  c1    0, 0     0, 0     2, 2

Game II (P1 chooses a row, P2 a column):

         a2       b2       c2
  a2    5, 5     0, 0     0, 0
  b2    0, 0     4, 4     0, 0
  c2    0, 0     0, 0     1, 1

Table 1: Two symmetric normal form games
The above example emphasizes the importance of taking multitask environments into account. In other words, even if we observe people seemingly failing to follow an evolutionary dynamic, this does not necessarily imply that we should reject the evolutionary dynamic: the people might be working in a multitask environment, in which case their behavior may be well represented by the dynamic in our multitask environment.
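The deficit comparison in Example 1 is easy to verify computationally. The snippet below is only an illustrative check under random matching, mirroring the numbers above; the variable names are ours.

    import numpy as np

    # Marginal action distributions of the state in Example 1.
    x1 = x2 = np.array([.5, .25, .25])
    F1 = np.diag([5., 3., 2.]) @ x1            # Game I payoffs: [2.5, .75, .5]
    F2 = np.diag([5., 4., 1.]) @ x2            # Game II payoffs: [2.5, 1., .25]
    d1, d2 = F1.max() - F1, F2.max() - F2      # payoff deficits in each game
    print(d1[1], d2[1])   # (b1, b2): 1.75 > 1.5  -> revise in Game I, b1 decays
    print(d1[2], d2[2])   # (c1, c2): 2.0  < 2.25 -> revise in Game II, c1 stays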
A Appendix

A.1 Proofs of fundamental theorems on the multitasking game

A.1.1 Proof of Theorem 2.2
Proof. Suppose that $x \in NE(F)$. For each $i = 1,2$,
$$(\mathrm{I}^i x)_{a^i} > 0 \;\Leftrightarrow\; \exists a^{-i} \in \mathcal{A}^{-i}:\ x_{a^i a^{-i}} > 0 \qquad \Big(\because\ (\mathrm{I}^i x)_{a^i} = \sum_{a^{-i} \in \mathcal{A}^{-i}} x_{a^i a^{-i}}\Big) \tag{3}$$
$$\Rightarrow\; (a^i, a^{-i}) \in b(x) \quad (\because\ x \in NE(F))$$
$$\Rightarrow\; a^i \in b^i(\mathrm{I}^i x) \quad \text{(by (1))}.$$
This shows that $\mathrm{I}^i x \in NE(F^i)$ for each $i = 1,2$.

For the converse, suppose that $x \notin NE(F)$. Then there must exist some $a = (a^1, a^2) \in \mathcal{A}$ such that $x_a > 0$ and $a \notin b(x)$. The latter implies $a^1 \notin b^1(\mathrm{I}^1 x)$ or $a^2 \notin b^2(\mathrm{I}^2 x)$ by (1). Assume that $a^1 \notin b^1(\mathrm{I}^1 x)$. $x_a > 0$ implies $(\mathrm{I}^1 x)_{a^1} > 0$ by (3). Hence $\mathrm{I}^1 x \notin NE(F^1)$. Therefore we have $x \in NE(F)$ only if $\mathrm{I}^i x \in NE(F^i)$ for all $i = 1,2$.
A.1.2 Proof of Lemma 2.1

Proof. Fix $x \in X$ arbitrarily. Henceforth we omit the arguments of $DF(x)$, $DF^1(\mathrm{I}^1 x)$ and $DF^2(\mathrm{I}^2 x)$. For notational simplicity, let us denote the $(a, b)$ entry of $DF^i(\mathrm{I}^i x)$ by $\partial_b F^i_a$, i.e.,
$$\partial_b F^i_a = \frac{\partial F^i_a}{\partial x^i_b}(\mathrm{I}^i x).$$
Recall that
$$F_{a^1 a^2}(x) = F^1_{a^1}(\mathrm{I}^1 x) + F^2_{a^2}(\mathrm{I}^2 x).$$
This implies that
$$\frac{\partial F_{a^1 a^2}}{\partial x_{b^1 b^2}}(x) = \frac{\partial F^1_{a^1}}{\partial x^1_{b^1}}(\mathrm{I}^1 x)\,\frac{\partial x^1_{b^1}}{\partial x_{b^1 b^2}}(x) + \frac{\partial F^2_{a^2}}{\partial x^2_{b^2}}(\mathrm{I}^2 x)\,\frac{\partial x^2_{b^2}}{\partial x_{b^1 b^2}}(x) = \partial_{b^1} F^1_{a^1} + \partial_{b^2} F^2_{a^2}.$$
The above equation implies that it suffices to show that (i) $\partial_{b^1} F^1_{a^1}$ appears in the $(a^1 \hat a^2, b^1 \hat b^2)$ entries of $(\mathrm{I}^1)' DF^1 \mathrm{I}^1$ for all $\hat a^2, \hat b^2 \in \mathcal{A}^2$, and that (ii) $\partial_{b^2} F^2_{a^2}$ appears in the $(\hat a^1 a^2, \hat b^1 b^2)$ entries of $(\mathrm{I}^2)' DF^2 \mathrm{I}^2$ for all $\hat a^1, \hat b^1 \in \mathcal{A}^1$. The above equation together with (i) and (ii) will prove the claim.

For (i), observe that $\partial_{b^1} F^1_{a^1}$ will be mapped to the $(a^1 \hat a^2, b^1)$ entry of $(\mathrm{I}^1)' DF^1$ for all $\hat a^2 \in \mathcal{A}^2$.13 Fix $\hat a^2 \in \mathcal{A}^2$. Observe that the $(a^1 \hat a^2, b^1)$ entry of $(\mathrm{I}^1)' DF^1$ will be mapped to the $(a^1 \hat a^2, b^1 \hat b^2)$ entry of $(\mathrm{I}^1)' DF^1 \mathrm{I}^1$ for all $\hat b^2 \in \mathcal{A}^2$.

A similar discussion applies to (ii). The claim follows.

13 To see this, recall that every column of $\mathrm{I}^1$ has all zeros except one entry equal to one. Every entry of $(\mathrm{I}^1)' DF^1$, e.g. the $(a^1 \hat a^2, b^1)$ entry, must be identical to one entry of $DF^1$, e.g. the $(a^1, b^1)$ entry.
A.2 Proofs of fundamental theorems on the multitask BRD

A.2.1 Proof of Lemma 3.1
Proof. Without loss of generality, let $i = 1$. Pick $\dot x \in V_M(x)$ arbitrarily; it consists of $\{y_c\}_{c \in \mathcal{A}}$ as
$$\dot x = \sum_{c \in \mathcal{A}} x_c y_c - x \qquad \text{with } y_c \in \hat B_c(\mathrm{I}^1 x, \mathrm{I}^2 x) \text{ for each } c \in \mathcal{A}.$$
$y_c = (y_{ca})_{a \in \mathcal{A}}$ assigns positive probability only to $\hat b_c(\mathrm{I}^1 x, \mathrm{I}^2 x)$. As $a^{1\prime} \notin b^1(\mathrm{I}^1 x)$, $y_{ca} > 0$ for a profile $a$ with $a^1 = a^{1\prime}$ implies $c^1 = a^{1\prime}$ and $a^2 \in b^2(\mathrm{I}^2 x)$. Therefore, $(\mathrm{I}^1 y_c)_{a^{1\prime}} = \sum_{a^2 \in \mathcal{A}^2} y_{c(a^{1\prime}, a^2)}$ is positive only if $c^1 = a^{1\prime}$.

As $y_c$ is a probability vector, each of its entries is non-negative and at most one. So we have
$$-(\mathrm{I}^1 x)_{a^{1\prime}} = \sum_{c \in \mathcal{A}} x_c \cdot 0 - (\mathrm{I}^1 x)_{a^{1\prime}} \le (\mathrm{I}^1 \dot x)_{a^{1\prime}} = \sum_{c \in \mathcal{A}} x_c (\mathrm{I}^1 y_c)_{a^{1\prime}} - (\mathrm{I}^1 x)_{a^{1\prime}} \le \sum_{c \in \{a^{1\prime}\} \times \mathcal{A}^2} x_c \cdot 1 - (\mathrm{I}^1 x)_{a^{1\prime}} = 0. \tag{4}$$

A.2.2 Proof of Nash stationarity (Theorem 3.2)
Proof. We first show that $0 \in V_M(x)$ if $x \in NE(F)$. Observe that, for all $a = (a^1, a^2) \in \mathcal{A}$,
$$x_{a^1 a^2} > 0 \;\Rightarrow\; (\mathrm{I}^i x)_{a^i} \ge x_{a^1 a^2} > 0 \quad \text{for each } i = 1,2$$
$$\Rightarrow\; a^i \in b^i(\mathrm{I}^i x) \quad \text{for each } i = 1,2 \quad (\because x \in NE(F))$$
$$\Rightarrow\; e_{a^1 a^2} \in \hat B_{a^1 a^2}(\mathrm{I}^1 x, \mathrm{I}^2 x).$$
Since the above holds for every $a$ with $x_{a^1 a^2} > 0$, it implies that
$$x = \sum_{a \in \mathcal{A}} x_a e_a \in \sum_{a \in \mathcal{A}} x_a \hat B_a(\mathrm{I}^1 x, \mathrm{I}^2 x).$$
It follows that
$$0 \in \sum_{a' \in \mathcal{A}} x_{a'} \hat B_{a'}(\mathrm{I}^1 x, \mathrm{I}^2 x) - x = V_M(x).$$

For the converse, suppose that $x \notin NE(F)$ while $0 \in V_M(x)$. There must exist some $a = (a^1, a^2)$ such that $x_a > 0$ and $a^i \notin b^i(\mathrm{I}^i x)$ for some $i = 1,2$. Choose such an $a$ that attains
$$a \in \operatorname*{argmax}_{a' \in \mathcal{A}} \max\big\{\breve F^1_{a^{1\prime}}(\mathrm{I}^1 x),\ \breve F^2_{a^{2\prime}}(\mathrm{I}^2 x)\big\}.$$
This implies that $a \notin \hat b_{a'}(\mathrm{I}^1 x, \mathrm{I}^2 x)$ for all $a' \in \mathcal{A}$. Then it must be that $\dot x_a = -x_a < 0$. Contradiction.
A.2.3 Proof of positive correlation (Theorem 3.3)

Proof. $z \in V_M(x)$ implies that
$$z = \sum_{a \in \mathcal{A}} x_a (y_a - e_a) \qquad \text{for some } y_a \in \hat B_a(\mathrm{I}^1 x, \mathrm{I}^2 x).$$
Let $a = (a^1, a^2) \in \mathcal{A}$. If $\breve F^1_{a^1}(\mathrm{I}^1 x) > \breve F^2_{a^2}(\mathrm{I}^2 x)$, then
$$F(x) \cdot y_a = F^1_{b^1}(\mathrm{I}^1 x) + F^2_{a^2}(\mathrm{I}^2 x), \qquad F(x) \cdot e_a = F^1_{a^1}(\mathrm{I}^1 x) + F^2_{a^2}(\mathrm{I}^2 x),$$
where $b^1 \in b^1(\mathrm{I}^1 x)$. This implies that
$$F(x) \cdot (y_a - e_a) = \breve F^1_{a^1}(\mathrm{I}^1 x) > 0.$$
Similarly, we can show that, if $\breve F^1_{a^1}(\mathrm{I}^1 x) < \breve F^2_{a^2}(\mathrm{I}^2 x)$, then
$$F(x) \cdot (y_a - e_a) = \breve F^2_{a^2}(\mathrm{I}^2 x) > 0.$$
If $\breve F^1_{a^1}(\mathrm{I}^1 x) = \breve F^2_{a^2}(\mathrm{I}^2 x)$, then, by definition, $y_a$ can be decomposed as
$$y_a = t \hat y^1_a + (1-t) \hat y^2_a,$$
where $t \in [0,1]$ and $\hat y^i_a \in B^i(\mathrm{I}^i x) \times \{e^j_{a^j}\}$ for $j \neq i$. Observe that
$$F(x) \cdot (\hat y^1_a - e_a) = \breve F^1_{a^1}(\mathrm{I}^1 x), \qquad F(x) \cdot (\hat y^2_a - e_a) = \breve F^2_{a^2}(\mathrm{I}^2 x).$$
Thus we have
$$F(x) \cdot (y_a - e_a) = t \breve F^1_{a^1}(\mathrm{I}^1 x) + (1-t) \breve F^2_{a^2}(\mathrm{I}^2 x) = \breve F^1_{a^1}(\mathrm{I}^1 x) \ge 0.$$
Note that $F(x) \cdot (y_a - e_a) > 0$ if and only if $\breve F^1_{a^1}(\mathrm{I}^1 x) > 0$.

Aggregating $F(x) \cdot (y_a - e_a)$ over $a \in \mathcal{A}$, we have
$$F(x) \cdot z = \sum_{a \in \mathcal{A}} x_a\, F(x) \cdot (y_a - e_a) \ge 0.$$
Finally, suppose that $x \notin NE(F)$. Then we have $\breve F^1_{a^1}(\mathrm{I}^1 x) > 0$ or $\breve F^2_{a^2}(\mathrm{I}^2 x) > 0$ for some $a$ with $x_a > 0$. The observations above imply that $F(x) \cdot z > 0$.
A.3 Proofs of theorems on potential games

A.3.1 Proof of Lemma 3.4
Proof of the 'if' part. Suppose that $f^i$ is a potential of $F^i$. We prove that the function $f : \mathbb{R}^A \to \mathbb{R}$ given by
$$f(x) = f^1(\mathrm{I}^1 x) + f^2(\mathrm{I}^2 x)$$
is a potential of $F$. For $a^i \in \mathcal{A}^i$, let
$$x^i_{a^i} = (\mathrm{I}^i x)_{a^i} = \sum_{\hat a^j \in \mathcal{A}^j} x_{a^i \hat a^j}.$$
In words, $x^i_{a^i}$ is the fraction of agents playing action $a^i$ in Game $i$. Note that, for all $(a^i, a^j) \in \mathcal{A}$ and $x \in \mathbb{R}^A$,
$$\frac{\partial x^i_{a^i}}{\partial x_{a^i a^j}}(x) = 1.$$
Observe that
$$\frac{\partial f}{\partial x_{a^1 a^2}}(x) = \frac{\partial f^1}{\partial x^1_{a^1}}(\mathrm{I}^1 x)\,\frac{\partial x^1_{a^1}}{\partial x_{a^1 a^2}}(x) + \frac{\partial f^2}{\partial x^2_{a^2}}(\mathrm{I}^2 x)\,\frac{\partial x^2_{a^2}}{\partial x_{a^1 a^2}}(x) = F^1_{a^1}(\mathrm{I}^1 x) + F^2_{a^2}(\mathrm{I}^2 x) = F_a(x).$$
This proves the 'if' part of the claim.
Proof of the 'only-if' part. Suppose $f : \mathbb{R}^A \to \mathbb{R}$ is a potential of $F$. For each $i = 1,2$ and $j \neq i$, define the function $f^i : \mathbb{R}^{A^i} \to \mathbb{R}$ as
$$f^i(x^i) := f(\mathbf{x}) - \sum_{a^i \in \mathcal{A}^i} \sum_{a^j \in \mathcal{A}^j} \mathbf{x}_{a^i a^j} F^j_{a^j}(\mathbf{x}^j) \qquad \text{for each } x^i = (x^i_{a^i})_{a^i \in \mathcal{A}^i} \in \mathbb{R}^{A^i},$$
where $\mathbf{x} \in \mathbb{R}^A$ is the vector with $\mathbf{x}_{a^i a^j} := x^i_{a^i}/A^j$ and $\mathbf{x}^j := \mathrm{I}^j \mathbf{x} = (1/A^j, \dots, 1/A^j) \in \mathbb{R}^{A^j}$. Note that $\mathrm{I}^i \mathbf{x} = x^i$, that $\mathbf{x}$ is a linear function of $x^i$, and that $\partial \mathbf{x}_{a^i a^j}/\partial x^i_{a^i}(x^i) = 1/A^j$ for each $x^i \in \mathbb{R}^{A^i}$.

The function $f^i$ is a potential of $F^i$:
$$\frac{\partial f^i}{\partial x^i_{a^i}}(x^i) = \sum_{a^j \in \mathcal{A}^j} \frac{\partial f}{\partial x_{a^i a^j}}(\mathbf{x})\,\frac{\partial \mathbf{x}_{a^i a^j}}{\partial x^i_{a^i}}(x^i) - \sum_{a^j \in \mathcal{A}^j} \frac{\partial \mathbf{x}_{a^i a^j}}{\partial x^i_{a^i}}(x^i)\, F^j_{a^j}(\mathbf{x}^j)$$
$$= \sum_{a^j \in \mathcal{A}^j} \big\{F^i_{a^i}(\mathrm{I}^i \mathbf{x}) + F^j_{a^j}(\mathrm{I}^j \mathbf{x})\big\}/A^j - \sum_{a^j \in \mathcal{A}^j} F^j_{a^j}(\mathbf{x}^j)/A^j = F^i_{a^i}(x^i).$$
A.3.2 Proof of Nash stability (Theorem 3.5)

Proof. Lemma 3.4 implies that the multitask game $F$ has a potential function $f : X \to \mathbb{R}$. From the definition of a potential function and the fact that $\mathbf{1} \cdot \dot x = 0$, positive correlation (Theorem 3.3) implies $\dot f(x) = \nabla f(x) \cdot \dot x = F(x) \cdot \dot x \ge 0$, with $\dot f(x) = 0$ if and only if $x \in NE(F)$. So $f$ is a strict Lyapunov function. Then each local maximizer of $f$ is Lyapunov stable and the set of stationary points, i.e. $NE(F)$, is globally attracting (Sandholm, 2010b, Theorems 7.B.2 and 7.B.4).
A.4 Proofs of theorems on contractive games

A.4.1 Proof of Lemma 3.8
Proof of the 'if' part. First observe that, for $i \in \{1,2\}$,
$$\sum_{a^i \in \mathcal{A}^i} (\mathrm{I}^i z)_{a^i} = \sum_{a^i \in \mathcal{A}^i} \sum_{a^j \in \mathcal{A}^j} z_{a^i a^j} = 0 \qquad \forall z \in TX.$$
This implies that $\mathrm{I}^i z \in TX^i$ if $z \in TX$.

Since $F^i$ is a contractive game for $i \in \{1,2\}$, Theorem 3.7 tells us that, for all $x \in X$ and $z \in TX$,
$$(\mathrm{I}^1 z)' DF^1(\mathrm{I}^1 x) \mathrm{I}^1 z \le 0, \qquad (\mathrm{I}^2 z)' DF^2(\mathrm{I}^2 x) \mathrm{I}^2 z \le 0.$$
Then observe that, for all $x \in X$ and $z \in TX$,
$$z' DF(x) z = z' \big[(\mathrm{I}^1)' DF^1(\mathrm{I}^1 x) \mathrm{I}^1 + (\mathrm{I}^2)' DF^2(\mathrm{I}^2 x) \mathrm{I}^2\big] z = (\mathrm{I}^1 z)' DF^1(\mathrm{I}^1 x) \mathrm{I}^1 z + (\mathrm{I}^2 z)' DF^2(\mathrm{I}^2 x) \mathrm{I}^2 z \le 0.$$
The first equality comes from Lemma 2.1. The last inequality, with Theorem 3.7, proves the claim.
Proof of the 'only-if' part. Fix $x^1 \in X^1$ and $z^1 \in TX^1$ arbitrarily. Consider the state $x \in X$ in the multitasking game such that $x_{a^1 a^2} = x^1_{a^1}/A^2$; then $\mathrm{I}^1 x$ coincides with $x^1$. Define $z \in TX$ by $z_{a^1 a^2} = z^1_{a^1}/A^2$. Then $\mathrm{I}^1 z$ coincides with $z^1$ and $\mathrm{I}^2 z = 0$, since
$$(\mathrm{I}^1 z)_{a^1} = \sum_{a^2 \in \mathcal{A}^2} z_{a^1 a^2} = \frac{z^1_{a^1}}{A^2} A^2 = z^1_{a^1}, \qquad (\mathrm{I}^2 z)_{a^2} = \sum_{a^1 \in \mathcal{A}^1} z_{a^1 a^2} = \sum_{a^1 \in \mathcal{A}^1} \frac{z^1_{a^1}}{A^2} = 0.$$
According to Theorem 3.7, $F$ being a contractive game implies
$$0 \ge z' DF(x) z = (\mathrm{I}^1 z)' DF^1(\mathrm{I}^1 x) \mathrm{I}^1 z + (\mathrm{I}^2 z)' DF^2(\mathrm{I}^2 x) \mathrm{I}^2 z \quad \text{(by Lemma 2.1)}$$
$$= z^{1\prime} DF^1(x^1) z^1 + 0' DF^2(\mathrm{I}^2 x) 0 = z^{1\prime} DF^1(x^1) z^1,$$
and thus $F^1$ is a contractive game as well. Similarly we can verify that $F^2$ is also a contractive game.
A.4.2 Proof of Lemma 3.9

For the first part, we use a version of Danskin's envelope theorem.

Theorem A.1 (Hofbauer and Sandholm, 2009: Theorem A.4). For each element $z$ in a set $Z$, let $g_z : [0, \infty) \to \mathbb{R}$ be Lipschitz continuous. Let
$$g^*(t) = \max_{z \in Z} g_z(t) \quad \text{and} \quad Z^*(t) = \operatorname*{argmax}_{z \in Z} g_z(t).$$
Then $g^* : [0, \infty) \to \mathbb{R}$ is Lipschitz continuous, and for almost all $t \in [0, \infty)$ we have $\dot g^*(t) = \dot g_z(t)$ for each $z \in Z^*(t)$.

Proof. The first part (P1) comes immediately from the above theorem. For the second part (P2), recall that $y_a \in \hat B_a(x)$ is a mixture of pure-strategy multitask best responses from $a$, i.e. of $\hat b_a(x)$, each of which yields the same payoff increase $g^*_a(x)$.

Let $b = (b^1, b^2) \in \hat b_a(x)$. Note that either $b^1$ or $b^2$ is an optimal strategy in the corresponding game at the current state: $b^i \in b^i(x)$. Thus the multitask best response from $b$ is a profile of optimal strategies in both games, i.e. $\hat b_b(x) \subset b^1(x) \times b^2(x)$; hence it achieves $\max_{\tilde a \in \mathcal{A}} F_{\tilde a}(x)$. With the fact that $F_b(x) = \max\{F_{\tilde a}(x) \mid \tilde a \in \mathcal{A}_a\}$, this implies the first equality in (P3).

If $a \in b(x)$, then $b \in b(x)$ too and $g^*_a(x) = g^*_b(x) = 0$. Suppose $a = (a^1, a^2) \notin b(x)$; then $a \notin \hat b_a(x)$ and thus $a \neq b$. Say $a^1 \neq b^1$ and $a^2 = b^2$. Then $g^*_a(x) = \breve F^1_{a^1}(x) \ge \breve F^2_{a^2}(x)$.14 Since $b^1 \in b^1(x)$, $g^*_b(x) = \breve F^2_{b^2}(x) = \breve F^2_{a^2}(x)$. Hence we have $g^*_b(x) \le g^*_a(x)$.15

For (P3′), observe that under its hypothesis $g^*_a(x) = \breve F^j_{a^j}(x) > 0$ and $g^*_b(x) = 0$. The claim (P3′) follows.

14 If $\breve F^1_{a^1}(x) < \breve F^2_{a^2}(x)$, agents playing $a$ would have switched their action in Game 2.

15 But $a \notin b(x)$ does not necessarily imply $g^*_b(x) < g^*_a(x)$: consider the case $\breve F^1_{a^1}(\mathrm{I}^1 x) = \breve F^2_{a^2}(\mathrm{I}^2 x) > 0$. This prevents us from getting $\tilde G^{-1}(0) = NE(F)$.
A.4.3 Proof of Lemma 3.10

Proof. i) The first claim is immediate from $g^*_a \ge 0$. For the second claim, observe that $G(x) = 0$ is equivalent to $[x_a > 0 \Rightarrow g^*_a(x) = 0]$. Since $g^*_a(x) \ge \breve F^i_{a^i}(x)$ for each $i = 1,2$, $g^*_a(x) = 0$ is equivalent to $\breve F^1_{a^1}(x) = \breve F^2_{a^2}(x) = 0$, i.e., $a = (a^1, a^2) \in b(x)$. So $G(x) = 0$ is equivalent to $[x_a > 0 \Rightarrow a \in b(x)]$, i.e., $x \in NE(F)$.

ii) Let $z = (z_a)_{a \in \mathcal{A}} \in V_M(x)$; then $z$ can be represented as $z = \sum_c x_c (y_c - e_c)$ and thus $z_a = \sum_{c \in \mathcal{A}} x_c y_{ca} - x_a$, with $y_c = (y_{ca})_{a \in \mathcal{A}} \in \hat B_c(x)$ for each $c \in \mathcal{A}$. Thus,
$$\sum_{a \in \mathcal{A}} z_a g^*_a(x) = \sum_{a \in \mathcal{A}} \Big(\sum_{c \in \mathcal{A}} x_c y_{ca} - x_a\Big) g^*_a(x) = \sum_{c \in \mathcal{A}} x_c \Big(\sum_{a \in \mathcal{A}} y_{ca} g^*_a(x) - g^*_c(x)\Big)$$
$$= \max_{\tilde a \in \mathcal{A}} F_{\tilde a}(x) - 2 \sum_{c \in \mathcal{A}} x_c \max\{F_{\tilde a}(x) \mid \tilde a \in \mathcal{A}_c\} + \sum_{c \in \mathcal{A}} x_c F_c(x) = \tilde G(x).$$
The second-to-last equality comes from Lemma 3.9 (P3), which implies $\sum_{a \in \mathcal{A}} y_{ca} g^*_a(x) = \max_{\tilde a \in \mathcal{A}} F_{\tilde a}(x) - \max\{F_{\tilde a}(x) \mid \tilde a \in \mathcal{A}_c\} \le g^*_c(x)$. Since the second expression shows that $\tilde G(x)$ is a convex combination of the nonpositive terms $\sum_a y_{ca} g^*_a(x) - g^*_c(x)$, the lemma also implies $\tilde G(x) \le 0$.

From that expression, we find that $\tilde G(x) = 0$ is equivalent to $[x_c > 0 \Rightarrow g^*_c(x) = \sum_{a \in \mathcal{A}} y_{ca} g^*_a(x)]$, where $y_c \in \hat B_c(x)$. By Lemma 3.9 (P3), the latter condition is equivalent to $g^*_c(x) = g^*_b(x)$ for all $b \in \hat b_c(x)$. So the equivalence in the last statement of ii) is proven. In particular, if $x \in NE(F)$, the argument in the proof of part i) shows $g^*_c(x) = 0$ whenever $x_c > 0$. By Lemma 3.9 (P3) and $g^*_a(\cdot) \ge 0$, it follows that $g^*_c(x) = g^*_b(x) = 0$ for all $b \in \hat b_c(x)$.

iii) Observe that, by Lemma 3.9 (P1), for almost all $t$ we have
$$\dot g^*_a(x) = \dot g_{ab}(x) = (DF_b(x) - DF_a(x))\dot x = (e_b - e_a)' DF(x) \dot x = (y_a - e_a)' DF(x) \dot x$$
for any $b \in \hat b_a(x)$ and any $y_a \in \hat B_a(x)$. Thus, since $\dot x \in V_M(x)$ can be represented as $\dot x = \sum_{a \in \mathcal{A}} x_a (y_a - e_a)$ with $y_a \in \hat B_a(x)$ for each $a \in \mathcal{A}$, we have
$$\sum_{a \in \mathcal{A}} x_a \dot g^*_a(x) = \sum_{a \in \mathcal{A}} x_a (y_a - e_a)' DF(x) \dot x = \dot x' DF(x) \dot x.$$
Since $\dot x \in V_M(x)$, we have $\tilde G(x) = \sum_{a \in \mathcal{A}} \dot x_a g^*_a(x)$. Combining these, we obtain
$$\dot G(x) = \sum_{a \in \mathcal{A}} \{\dot x_a g^*_a(x) + x_a \dot g^*_a(x)\} = \tilde G(x) + \dot x \cdot DF(x) \dot x.$$
A.4.4 Proof of Theorem 3.13

Proof. Because $F$ is strictly contractive, any $z \in TX$ yields $z \cdot DF(x) z \le 0$ and, in particular, $z \cdot DF(x) z < 0$ if $z \neq 0$. Together with Lemma 3.10-ii), this implies $\bar G(x) \le \tilde G(x) \le 0$.

If $x \notin NE(F)$, then $x \notin B(x)$ and thus $0 \notin V_M(x)$. So any $z \in V_M(x)$ yields $z \cdot DF(x) z < 0$. As $V_M(x)$ is compact, the maximal value of $z \cdot DF(x) z$ over $V_M(x)$ exists and is negative; hence $\bar G(x) < 0$ in this case. Conversely, with Lemma 3.10-ii) again, the same argument proves $\bar G(x) = 0$ if $x \in NE(F)$. Therefore $\bar G^{-1}(0) = NE(F)$.

Since $\dot x \in V_M(x)$, Lemma 3.10-iii) implies $\dot G(x) \le \bar G(x)$. With Lemma 3.10-i), these imply that $G$ is a strict Lyapunov function with $\bar G$ an upper bound function; therefore Theorem 3.12 guarantees global asymptotic stability of $NE(F)$.
A.5 Proofs of theorems on regular ESSs

A.5.1 Proof of Lemma 3.14
Proof. For each $i \in \{1,2\}$, $a^i \in U^i$ implies $F^i_{a^i}(x^{i*}) < F^i_*(x^{i*})$ because $x^{i*}$ is a quasi-strict equilibrium. Continuity of $F^i$ implies the existence of a neighborhood $O^i_1 \subset X^i$ of $x^{i*}$ in which any $a^i \in U^i$ is suboptimal: $F^i_{a^i}(x^i) < F^i_*(x^i)$ and thus $a^i \notin b^i(x^i)$ at any $x^i \in O^i_1$. Since $\mathrm{I}^i : X \to X^i$ is a linear map and thus continuous, the preimage $\tilde O^i_1 := \{x \in X \mid \mathrm{I}^i x \in O^i_1\}$ is open in $X$; and $X^* \subset \tilde O^i_1$, because any $x^* \in X^*$ satisfies $\mathrm{I}^i x^* = x^{i*} \in O^i_1$. This yields part i).

$(a^i, b^{-i}) \in U^i \times S^{-i}$ implies $\breve F^i_{a^i}(x^{i*}) > 0 = \breve F^{-i}_{b^{-i}}(x^{-i*})$ because $x^{i*}$ is a quasi-strict equilibrium. Since $F$ is continuous, there exists a neighborhood $O_2 \subset X^1 \times X^2$ of $(x^{i*}, x^{-i*})$ such that $\breve F^i_{a^i}(x^i) - \breve F^{-i}_{b^{-i}}(x^{-i}) > 0$ for each $i = 1,2$, any $(x^i, x^{-i}) \in O_2$, and any $(a^i, b^{-i}) \in U^i \times S^{-i}$. Since $(\mathrm{I}^i, \mathrm{I}^{-i}) : X \to X^i \times X^{-i}$ is a linear map and thus continuous, and any $x^* \in X^*$ satisfies $(\mathrm{I}^i x^*, \mathrm{I}^{-i} x^*) = (x^{i*}, x^{-i*}) \in O_2$, the preimage $\tilde O_2 := \{x \in X \mid (\mathrm{I}^i x, \mathrm{I}^{-i} x) \in O_2\}$ is a neighborhood of $X^*$ in $X$. This yields part ii).

Likewise, the preimage $\tilde O_0 := \{x \in X \mid (\mathrm{I}^1 x, \mathrm{I}^2 x) \in O^1_0 \times O^2_0\}$ is a neighborhood of $X^*$ in $X$. Let $x \in \tilde O_0$. Then, since $x^{i*}$ is the only Nash equilibrium of $F^i$ in $O^i_0$, $\mathrm{I}^i x \in NE(F^i)$ is equivalent to $\mathrm{I}^i x = x^{i*}$. Statement iii) follows by Theorem 2.2 and the definition of $X^*$.

Negative definiteness of $DF^i(x^{i*})$ with respect to $TX^i \cap \mathbb{R}^{A^i}_{S^i}$ extends to $DF^i(x^i)$ for any $x^i$ in a sufficiently small neighborhood of $x^{i*}$. As above, this implies statement iv) for some neighborhood of $X^*$. Finally, let $\tilde O$ be the intersection of all these neighborhoods of $X^*$. Then the statements in parts i)–iv) hold in $\tilde O$.

Now let $x \in \tilde O$ and $\dot x \in V_M(x)$ and fix them henceforth. Then, from part i) of this lemma and Lemma 3.1, any $a^i \in U^i$ satisfies $a^i \notin b^i(\mathrm{I}^i x)$ and thus $0 \ge (\mathrm{I}^i \dot x)_{a^i} \ge -(\mathrm{I}^i x)_{a^i}$. Therefore,
$$\sqrt{\sum_{a^i \in U^i} |(\mathrm{I}^i \dot x)_{a^i}|^2} \le \sum_{a^i \in U^i} |(\mathrm{I}^i \dot x)_{a^i}| = -\sum_{a^i \in U^i} (\mathrm{I}^i \dot x)_{a^i} \le \sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i}.$$
By the same token as in Sandholm (2010a, pp. 43–44), this implies statement v).

Further, assume that $c \in U^i \times S^{-i}$. Then, by ii), any $y_c = (y_{ca})_{a \in \mathcal{A}} \in \hat B_c(\mathrm{I}^1 x, \mathrm{I}^2 x)$ satisfies $y_{ca} > 0$ only if $a^i \in S^i$ and $a^{-i} = c^{-i}$. That is, if $y_{ca} > 0$ with $a^j \in U^j$, then $c \notin U^j \times S^{-j}$, and thus $c^j \in S^j$ or $c^{-j} \in U^{-j}$; this in turn implies $c^j = a^j$. Hence, if $(\mathrm{I}^j y_c)_{a^j} = \sum_{a^{-j} \in \mathcal{A}^{-j}} y_{c(a^j, a^{-j})} > 0$ with $a^j \in U^j$, then $c^j = a^j$ and $c^{-j} \in U^{-j}$. This implies that
$$\sum_{a^j \in U^j} \sum_{c \in \mathcal{A}} x_c (\mathrm{I}^j y_c)_{a^j} \le \sum_{a^j \in U^j} \sum_{c^{-j} \in U^{-j}} x_{(a^j, c^{-j})} (\mathrm{I}^j y_c)_{a^j} \le \sum_{c \in U^j \times U^{-j}} x_c.$$
$$\therefore \quad \sum_{a^j \in U^j} (\mathrm{I}^j \dot x)_{a^j} = \sum_{a^j \in U^j} \Big(\sum_{c \in \mathcal{A}} x_c (\mathrm{I}^j y_c)_{a^j} - (\mathrm{I}^j x)_{a^j}\Big) \le \sum_{c \in U^j \times U^{-j}} x_c - \sum_{a^j \in U^j} (\mathrm{I}^j x)_{a^j}.$$
If $a \in U^1 \times U^2$, part i) implies that $a \notin \hat b_c(\mathrm{I}^1 x, \mathrm{I}^2 x)$ for any $c \in \mathcal{A}$ and thus $\dot x_a = -x_a$. Therefore,
$$\sum_{a \in U^1 \times U^2} \dot x_a = -\sum_{a \in U^1 \times U^2} x_a.$$
Combining this with the above inequality, we verify part vi).
A.5.2 Proof of Theorem 3.15

Proof. Part i-1) of Lemma 3.10 immediately implies $G^*(x) \ge 0$ for any $x$. Observe that $G^*(x) = 0$ is equivalent to a) $G(x) = 0$, b) $(\mathrm{I}^i x)_{a^i} = 0$ for all $a^i \in U^i$, and b′) $x_a = 0$ for all $a \in U^1 \times U^2$. $G(x) = 0$ is equivalent to $x \in NE(F)$ by part i-2) of Lemma 3.10; under the restriction to $\tilde O$, this is further equivalent to $x \in X^*$ by part iii) of Lemma 3.14. In turn, this implies the other two conditions b) and b′) by the definition of $U^i$. Therefore $(G^*)^{-1}(0) = X^*$.

For $x \in \tilde O$, $\bar G^*(x) \le 0$ follows from part ii) of Lemma 3.10 and part iv) of Lemma 3.14. (Notice that $\Phi^i z \in TX^i \cap \mathbb{R}^{A^i}_{S^i}$ for any $z \in \mathbb{R}^A$.) If $x \in X^*$, then $x \in NE(F)$ and thus $0 \in V_M(x)$. This implies that the second term of $\bar G^*(x)$ is zero, as is the first term $\tilde G(x) = 0$ by part ii) of Lemma 3.10 and the third term by the definition of $U^i$. So we obtain $\bar G^*(x) = 0$ when $x \in X^*$.

In turn, we prove that $\bar G^*(x) = 0$ only if $x \in X^*$. To prove this by contradiction, assume that there exists $x \in \tilde O$ such that $x \notin X^*$ but $\bar G^*(x) = 0$. By part iii) of Lemma 3.14, $x \notin X^*$ implies $x \notin NE(F)$. So there exists $c \in \mathcal{A}$ such that $x_c > 0$ but $c \notin b(x)$. Notice that $\bar G^*(x) = 0$ requires each of the three components in its definition to be zero, i.e.,
$$\tilde G(x) = 0; \tag{5}$$
$$\exists z \in V_M(x):\ (\Phi^i z) \cdot DF^i(\mathrm{I}^i x) \Phi^i z = 0 \quad \text{for each } i = 1,2; \text{ and} \tag{6}$$
$$(\mathrm{I}^i x)_{a^i} = 0 \quad \text{for each } i = 1,2,\ a^i \in U^i. \tag{7}$$
By applying part ii) of Lemma 3.10 to (5) for the above strategy $c$, we have $g^*_c(x) = g^*_b(x)$ for all $b \in \hat b_c(x)$. By Lemma 3.9 (P3′), one of the following two cases must hold:
$$[\mathrm{I}^i x \in B^i(\mathrm{I}^i x) \text{ for both } i = 1,2] \quad \text{or} \quad [\mathrm{I}^i x \notin B^i(\mathrm{I}^i x) \text{ for both } i = 1,2].$$
The first case means $x \in NE(F)$, according to Theorem 2.2. So the second case must hold. Since (7) implies that $\mathrm{I}^i x$ assigns positive mass only to (possibly a part of) $S^i$, this implies that $S^i \neq b^i(\mathrm{I}^i x)$ for each $i = 1,2$. In particular, we have $b^i(\mathrm{I}^i x) \subsetneq S^i$, since $b^i(\mathrm{I}^i x) \subset S^i$ by part i) of Lemma 3.14. So, in each Game $i = 1,2$, there is a strategy $s^i \in S^i$ such that $s^i \notin b^i(\mathrm{I}^i x)$.

By part iv) of Lemma 3.14, (6) implies the existence of $z \in V_M(x)$ such that $\Phi^i z = 0$ for each $i = 1,2$. By the definition of $\Phi^i$, this means that $(\mathrm{I}^i z)_{a^i} = (\mathrm{I}^i z)_{b^i}$ for any $a^i, b^i \in S^i$. Since $z \in V_M(x)$, it must be the case that $(\mathrm{I}^i z)_{a^i} \le 0$ for any $a^i \notin b^i(\mathrm{I}^i x)$ by Lemma 3.1; in particular, $(\mathrm{I}^i z)_{s^i} \le 0$. Since $s^i$ belongs to $S^i$, this implies $(\mathrm{I}^i z)_{b^i} \le 0$ for any $b^i \in S^i$. With $\sum_{a^i \in \mathcal{A}^i} (\mathrm{I}^i z)_{a^i} = 0$ and $\mathcal{A}^i = S^i \cup U^i$, this requires $\mathrm{I}^i z = 0$. As this is true for both $i = 1,2$, such $z$ must be $z = 0$. So (6) eventually reduces to $0 \in V_M(x)$, i.e., $x \in NE(F)$. But this contradicts the hypothesis $x \notin X^*$. Therefore $\bar G^*(x) = 0$ holds only if $x \in X^*$. In sum, we have $(\bar G^*)^{-1}(0) = X^*$.

Finally, by combining part iii) of Lemma 3.10, Lemma 2.1, and parts v), vi) of Lemma 3.14, we obtain
$$\dot G^*(x) \le \tilde G(x) + \sum_{i=1}^{2} (\Phi^i \dot x) \cdot DF^i(\mathrm{I}^i x) \Phi^i \dot x - \sum_{i=1}^{2} \sum_{a^i \in U^i} (\mathrm{I}^i x)_{a^i}.$$
Because $\dot x \in V_M(x)$, the second term on the right-hand side cannot exceed the maximal value of the second term in the definition of $\bar G^*$; the other two terms are identical. Hence we have $\dot G^*(x) \le \bar G^*(x)$.

Applying Theorem 3.12 to this pair of functions $G^*$ and $\bar G^*$, we verify asymptotic stability of $X^*$ with $\tilde O$ being a basin of attraction.
References

Appelbaum, S. H., A. Marchionni, and A. Fernandez (2008): "The multi-tasking paradox: Perceptions, problems and strategies," Management Decision, 46(9), 1313–1325.

Bednar, J., Y. Chen, T. X. Liu, and S. Page (2012): "Behavioral spillovers and cognitive load in multiple games: An experimental study," Games and Economic Behavior, 74(1), 12–31.

Bednar, J., and S. Page (2007): "Can game(s) theory explain culture? The emergence of cultural behavior within multiple games," Rationality and Society, 19(1), 65–97.

Boucekkine, R., and P. Crifo (2008): "Human Capital Accumulation and the Transition from Specialization to Multitasking," Macroeconomic Dynamics, 12(3), 320–344.

Cressman, R., A. Gaunersdorfer, and J.-F. Wen (2000): "Evolutionary and dynamic stability in symmetric evolutionary games with two independent decisions," International Game Theory Review, 2(1), 67–81.

González, V. M., and G. Mark (2005): "Managing currents of work: multi-tasking among multiple collaborations," in ECSCW 2005, pp. 143–162. Springer.

Grimm, V., and F. Mengel (2012): "An experiment on learning in a multiple games environment," Journal of Economic Theory, 147(6), 2220–2259.

Hashimoto, K. (2006): "Unpredictability induced by unfocused games in evolutionary game dynamics," Journal of Theoretical Biology, 241(3), 669–675.

Hofbauer, J., and W. H. Sandholm (2009): "Stable games and their dynamics," Journal of Economic Theory, 144(4), 1665–1693.

Lindbeck, A., and D. J. Snower (2000): "Multitask Learning and the Reorganization of Work: From Tayloristic to Holistic Organization," Journal of Labor Economics, 18(3), 353–376.

Mengel, F. (2012): "Learning across games," Games and Economic Behavior, 74(2), 601–619.

Sandholm, W. H. (2001): "Potential Games with Continuous Player Sets," Journal of Economic Theory, 97(1), 81–108.

Sandholm, W. H. (2010a): "Local stability under evolutionary game dynamics," Theoretical Economics, 5(1), 27–50.

Sandholm, W. H. (2010b): Population Games and Evolutionary Dynamics. MIT Press.

Sandholm, W. H. (2013): "Population Games and Deterministic Evolutionary Dynamics," in Handbook of Game Theory and Economic Applications, ed. by H. P. Young and S. Zamir, vol. 4. Elsevier, forthcoming.

Spink, A., C. Cole, and M. Waller (2008): "Multitasking behavior," Annual Review of Information Science and Technology, 42(1), 93–118.

Zusai, D. (2014): "Tempered best response dynamics," mimeo, Temple University.