Evolutionarily stable strategies in nonlinear and
multiplayer games
Mark Broom
City University London
16th ISDG Symposium
University of Amsterdam
9-12 July 2014
Mark Broom (City University London)
Amsterdam, July 12
Credits
This talk is based upon the book:
Broom, M. and Rychtar, J. (2013) Game-Theoretical Models in Biology,
Chapman and Hall/ CRC Press.
The numerous references to other works that are in the book above are
omitted from the talk for the sake of brevity.
Introduction to evolutionary games
Outline
1. Introduction to evolutionary games
   What is a game?
   Two approaches to game analysis
   Some classic games
2. Nonlinear games
   Overview and general theory
   Playing the field
   Nonlinearity due to non-constant interaction rates
   Nonlinearity in the strategy of the focal player
3. Multi-player games
   Multi-player matrix games
   The multi-player war of attrition
What is a game?
The key game components
In game theory, a game is comprised of three key elements.
1. The players: often there are two players, who sometimes can be viewed as
distinct, so it matters which is “player 1” and which “player 2”. Often in
evolutionary games this distinction is not important.
2. The strategies: these are the choices of the players. Strategies can be pure
or mixed, as described below.
3. The payoffs: these are the rewards to the players, and are a function of the
strategies chosen.
For an evolutionary game we also need a population, and a means for a
population to evolve, an evolutionary dynamics.
Games can be represented in either normal form or in some cases extensive
form. All of the games we consider here are of the former type.
Pure strategies
A pure strategy is a single choice of what strategy to play.
There can be a finite or infinite number of pure strategies. In the Prisoner’s
dilemma game (see later) the pure strategies are “play Cooperate” and “play
Defect.”
If the game is modified so that the players play the Prisoner's dilemma game
over a number of rounds, the number of pure strategies can get very large.
A pure strategy in such a case specifies what to play in every round,
conditional on every possible sequence played previously.
Biology plays an important role in trimming the set of pure strategies.
For example, if players have a short-term memory, a strategy can be a rule like
“start with Cooperate and then play whatever the opponent played in the
previous round”.
Mixed strategies I
If the pure strategies comprise a finite set {S1 , S2 , . . . , Sn }, then a mixed
strategy is defined as a probability vector p = (p1 , p2 , . . . , pn ) where pi is the
probability that the player will choose pure strategy Si .
For example, in the Prisoner’s dilemma, a player may choose to play each of
Cooperate and Defect half of the time, which would be represented by the
vector (1/2, 1/2).
The support of p, S(p), is defined by S(p) = {i : pi > 0}, so that it is the set
of pure strategies which have a non-zero chance of being played by a p-player.
For example, the support of the above strategy (1/2, 1/2) is {1, 2}.
Mixed strategies II
A pure strategy can be seen as a special case of a mixed strategy; Si can be
identified with a “mixed strategy” (0, . . . , 0, 1, 0, . . . , 0) with 1 at the ith place.
The set of all mixed strategies can be represented as a simplex in Rn with
vertices at {S1 , S2 , . . . , Sn }.
We can see a mixed strategy as a convex combination of pure strategies,

p = (p1 , p2 , . . . , pn ) = Σ_{i=1}^n pi Si.   (1)
The notion of a mixed strategy is naturally extended even to cases where the
set of pure strategies is infinite, as we see with the “war of attrition” game.
Simplex representation of mixed strategies
Figure: Two ways to visualize pure and mixed strategies: as points p = (p1 , p2 ) or p = (p1 , p2 , p3 ) in the simplex with vertices at the pure strategies S1 , S2 (and S3 ), or via the coordinates p1 , p2 , p3 directly. (Diagrams omitted.)
Payoff matrices I
In general, payoffs for a game played by two players with each having only
finitely many pure strategies can be represented by two matrices.
For example, if player 1 has available the strategy set S = {S1 , . . . , Sn } and
player 2 has the strategy set T = {T1 , . . . , Tm }, then the payoffs in this game
are completely determined by the pair of matrices
A = (aij )i=1,...,n;j=1,...,m , B = (bij )i=1,...,m;j=1,...,n ,
(2)
where aij and bji represent rewards to players 1 and 2 respectively after player
1 chooses pure strategy Si and player 2 chooses pure strategy Tj .
We thus have all of the possible rewards in a game given by a pair of n × m
matrices A and B^T, which is known as a bimatrix representation.
Entries are often given as a pair of values in a single matrix.
Payoff matrices II
Sometimes, as in the Prisoner’s Dilemma, the choice of which player is player
1 and which player 2 is arbitrary, and thus the strategies that they have
available to them are identical, i.e. n = m and (after possible renumbering)
Si = Ti for all i.
Further, since the ordering of players is arbitrary, we can switch the two
without changing their rewards, so that their payoff matrices satisfy bij = aij ,
i.e. A = B.
We thus now have all of the possible rewards in a game given by a single
n × n matrix
A = (aij )i,j=1,...,n ,
(3)
where in this case, aij is a reward to the player that played strategy Si while its
opponent played strategy Sj .
Payoffs as a matrix product
Consider a game whose payoffs are given by a matrix A.
If player 1 plays p and player 2 plays q, then the proportion of games that
involve player 1 playing Si and player 2 playing Sj is simply pi qj .
The reward to player 1 in this case is aij . The expected reward to player 1,
which we shall write as E[p, q], is thus obtained by averaging, i.e.
E[p, q] = Σ_{i,j} aij pi qj = pAq^T.   (4)
When comparing payoffs, we can ignore difficult cases involving equalities by
assuming our games are generic. In most of the following we will make this
assumption.
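The matrix-product form (4) is easy to check numerically; the following is a minimal Python sketch (NumPy assumed, with an illustrative 2 × 2 payoff matrix):

```python
import numpy as np

# Illustrative 2x2 payoff matrix (values chosen for demonstration only).
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])

def expected_payoff(p, q, A):
    """E[p, q] = p A q^T, the expected reward to a p-player against a q-player."""
    p, q = np.asarray(p), np.asarray(q)
    return p @ A @ q

p = [0.5, 0.5]
q = [0.25, 0.75]

# Direct averaging over pure-strategy pairs, sum_{i,j} a_ij p_i q_j,
# agrees with the matrix-product form.
direct = sum(A[i, j] * p[i] * q[j] for i in range(2) for j in range(2))
assert np.isclose(expected_payoff(p, q, A), direct)
```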
Games in biological settings
So far we have considered a single game contest between individuals.
However, individuals are not usually involved in a single contest only.
More often, they are repeatedly involved in the same game (either with the
same or different opponents).
Each round of the game contributes a relatively small portion of the total
fitness reward.
Of ultimate interest is the function E[σ; Π] describing the fitness of a given
individual using a strategy σ in a given population represented by Π.
We shall represent by δp a population where the probability of a randomly
selected player being a p-player is 1.
Fitness functions
We note that E[σ; Π] can be observed by biologists only if there are players
using σ in the population.
However, from the mathematical point of view, even expressions like E[q; δp ]
for q ≠ p are considered.
In an infinite population there may be a sub-population (e.g. a single
individual) which makes up a proportion 0 of the population, and so such
payoffs are logical.
For any strategy p, as described above, we let δp denote the population where
a randomly selected player plays strategy p with probability 1.
Let δi denote the population consisting of individuals playing strategy Si (with
probability 1). Similarly, Σi pi δi means the population where the proportion
of individuals playing strategy Si is pi.
Two approaches to game analysis
Evolutionary dynamics
The evolution of a population can be modelled using evolutionary dynamics,
where the number/ proportion of individuals playing a given strategy changes
according to their fitness. We shall assume a population consisting only of
pure strategists.
Consider a population described by pT = Σi pi δi, i.e. the frequency of
individuals playing strategy Si is pi.
To simplify notation, let fi (p) denote the fitness of individuals playing Si in
the population in this section.
Further, for the purpose of deriving the equation of the dynamics, assume that
the population has N individuals and Ni = pi N of those are using strategy Si
(this is convenient for the immediate derivations below, but often we shall
assume infinite populations and only the frequencies matter).
The (continuous) replicator dynamics
If the population is very large, has overlapping generations and asexual
reproduction, we may consider Ni and pi = Ni /N to be continuous variables.
Population growth is given by the differential equation

d/dt Ni = Ni fi(p(t)),   (5)

and we get the continuous replicator dynamics,

d/dt pi = pi ( fi(p(t)) − f̄(p(t)) ),   (6)

where f̄(p) = Σi pi fi(p) is the mean fitness. For matrix games, the dynamics (6) becomes

d/dt pi = pi ( (A p(t)^T)_i − p(t) A p(t)^T ).   (7)
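The dynamics (7) can be integrated with a simple Euler scheme; a minimal Python sketch (NumPy assumed), using the Hawk-Dove payoffs discussed later in the talk with illustrative values V = 2, C = 4:

```python
import numpy as np

# Replicator dynamics d/dt p_i = p_i ((A p^T)_i - p A p^T), Euler-integrated.
# Hawk-Dove payoffs with illustrative V = 2, C = 4 (interior rest point V/C).
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0, V / 2]])

def replicator_step(p, A, dt=0.01):
    fitness = A @ p              # (A p^T)_i, fitness of each pure strategy
    mean_fitness = p @ fitness   # p A p^T, the population mean fitness
    return p + dt * p * (fitness - mean_fitness)

p = np.array([0.9, 0.1])         # start with 90% Hawks
for _ in range(20000):           # integrate to T = 200
    p = replicator_step(p, A)

# For V < C the population converges to the interior rest point p_Hawk = V/C.
assert abs(p[0] - V / C) < 1e-3
```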
The static approach
Static analysis does not consider how the population reached a particular
point in the strategy space.
Instead, we assume that the population is at that point, and ask, is there any
incentive for any members of the population to change their strategies?
A strategy S is a best reply (alternatively a best response) to strategy T if

f(S′, T) ≤ f(S, T) for all strategies S′,   (8)

where f(S, T) denotes the payoff to a player using S against a player using T.
A strategy S is a Nash equilibrium if it is a best reply to itself, i.e. if

f(S′, S) ≤ f(S, S) for all strategies S′.   (9)
Evolutionarily Stable Strategies
Any “good” strategy must be a best reply to itself. However, a strategy being
a best reply to itself does not prevent invasion.
Consider a population consisting of individuals, the vast majority of whom
adopt a strategy S, while a very small number of “mutants” adopt a strategy M.
The strategies S and M thus compete in the population (1 − ε)δS + εδM for
some small ε > 0 (rather than in the population δS ), and it is against such a
population that S must outcompete M.
We say that a strategy S is evolutionarily stable against strategy M if there is
εM > 0 so that for all ε < εM we have
E[S; (1 − ε)δS + εδM ] > E[M; (1 − ε)δS + εδM ].
(10)
S is an evolutionarily stable strategy (ESS) if it is evolutionarily stable against
M for every other strategy M ≠ S.
ESSs for matrix games I
In the case of matrix games, the linearity of payoffs gives
E[p; (1 − ε)δp + εδq ] = E[p, (1 − ε)p + εq]   (11)
= pA((1 − ε)p + εq)^T = (1 − ε)pAp^T + εpAq^T.   (12)

A strategy p is an ESS if for every other strategy q ≠ p there is εp > 0 such
that for all ε < εp we have

E[p, (1 − ε)p + εq] > E[q, (1 − ε)p + εq].   (13)
ESSs for matrix games II
From payoff linearity, we have the equivalent definition for matrix games.
Theorem 1
A (pure or mixed) strategy p is an Evolutionarily Stable Strategy (ESS) for a
matrix game, if and only if for any mixed strategy q ≠ p
E[p, p] ≥ E[q, p],
(14)
If E[p, p] = E[q, p], then E[p, q] > E[q, q].
(15)
The proof is straightforward, and we will not show it here.
If (15) does not hold, then p may be invaded by a mutant that does equally
well against the majority of individuals in the population (that adopts p) but is
getting a (tiny) advantage against them by doing better in the (rare) contests
with like mutants.
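Conditions (14) and (15) can be checked numerically against a grid of mutants; a minimal Python sketch for two-strategy games (NumPy assumed; the coordination-game payoff matrix below is purely illustrative):

```python
import numpy as np

def is_ess_candidate(p, A, tol=1e-9, n_grid=1001):
    """Numerically test conditions (14)-(15) of Theorem 1 against sampled mutants q != p."""
    E = lambda x, y: x @ A @ y
    for q1 in np.linspace(0.0, 1.0, n_grid):
        q = np.array([q1, 1.0 - q1])
        if np.allclose(q, p):
            continue
        if E(q, p) > E(p, p) + tol:          # (14) fails: q invades outright
            return False
        if abs(E(q, p) - E(p, p)) <= tol:    # equality case: (15) is needed
            if E(p, q) <= E(q, q) + tol:
                return False
    return True

# Illustrative coordination game: both pure strategies are ESSs, but the
# mixed Nash equilibrium (1/3, 2/3) is not (it fails condition (15)).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
assert is_ess_candidate(np.array([1.0, 0.0]), A)
assert is_ess_candidate(np.array([0.0, 1.0]), A)
assert not is_ess_candidate(np.array([1/3, 2/3]), A)
```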
ESSs for matrix games III
Alternatively there is the possibility that the mutant and the residents do
equally well against the mutants too.
In this latter case invasion can occur by “drift”; both types do equally well, so
in the absence of selective advantage random chance decides whether the
frequency of mutants increases or decreases.
Thus, both conditions (14) and (15) are needed for p to resist invasion by a
mutant q.
If the conditions hold for any q ≠ p, then p can resist invasion by any rare
mutant and so p is an evolutionarily stable strategy.
Dynamics versus statics
The focus of this talk is mainly on the static analysis.
Since it may be argued (with some justification) that the dynamics approach
may model the biology in a more realistic way, we point out the similarities
and difference between the two approaches in this section.
Dynamic and static analyses are mainly complementary, however the
relationship between the two is not straightforward, and there is some
apparent inconsistency between the theories.
As the concept of an ESS as an uninvadable strategy is partially based on the
same idea as that of replicator dynamics, we look at replicator dynamics.
Comparing ESS analysis and replicator dynamics, we see that the information
required for each type of analysis is different.
Contrasting populations
To determine whether p is an ESS, we need the minimum of a function
q → E[p; (1 − ε)δp + εδq ] − E[q; (1 − ε)δp + εδq ]
(16)
to be attained for q = p for all sufficiently small ε > 0.
To understand the replicator dynamics, we require E[Si ; pT ] for all i and all p.
Thus a major difference between static analysis and replicator dynamics is
that the static analysis is concerned with monomorphic populations δp while
the replicator dynamics studies mixed populations pT = Σi pi δi.
Both analyses can produce the same (or at least similar) results only if there is
an identification between δp and pT such as in the case of matrix games.
Note that most of the comparative analysis between ESSs and replicator
dynamics assumes matrix games.
ESSs and replicator dynamics in matrix games
Theorem 2 (Folk theorem of evolutionary game theory)
In the following, we consider a matrix game with payoffs given by matrix A,
evolving under the replicator dynamics (7).
1) If p is a Nash equilibrium of the game, in particular if p is an ESS of a
matrix game, then pT is a rest point of the dynamics, i.e. the population
does not evolve further from the state pT = Σi pi δi.
2) If p is a strict Nash equilibrium, then p is locally asymptotically stable,
i.e. the population converges to the state pT = Σi pi δi if it starts
sufficiently close to it.
3) If the rest point p∗ of the dynamics is also the limit of an interior orbit
(the limit of p(t) as t → ∞ with p(0) ∈ int(S)), then it is a Nash
equilibrium.
4) If the rest point p is Lyapunov stable (i.e. if all solutions that start out
near p stay near p forever), then p is a Nash equilibrium.
Long-term dynamic behaviour
Theorem 3
Any ESS is an attractor of the replicator dynamics (i.e. has some set of initial
points in the space that lead to the population reaching that ESS). Moreover,
the population converges to the ESS for every strategy sufficiently close to it;
and if p is an internal ESS, then global convergence to p is assured.
Also, if the replicator dynamics has a unique internal rest point p∗, under
certain conditions we have

lim_{T→∞} (1/T) ∫_0^T pi(t) dt = p∗i.   (17)
Thus the long-term average strategy is given by this rest point, even if at any
time there is considerable variation.
Different invaders I
The results above are very helpful because they mean that, for matrix games,
identifying the ESSs and Nash equilibria of a game gives a lot (sometimes
practically all) of the important information about the dynamics.
For example, if p is an internal ESS, and there is no other Nash equilibrium,
then global convergence to p is assured.
Yet, there are cases when an ESS analysis does not provide a complete
picture. In particular, there are attractors that are not ESSs.
To see this, consider the matrix

    (  0    1   −1 )
    ( −2    0    2 ).   (18)
    (  2   −1    0 )
Different invaders II
We note that the above matrix is a variant of the Rock-Scissors-Paper game.
We can show that the replicator dynamics for this game has a unique internal
attractor. However, this internal attractor is not an ESS.
This occurs because we can find an invading strategy for p where the payoffs
to the different components change in such a way under the dynamics that it is
inevitably forced into a combination that no longer invades.
Thus if the invader is comprised of a combination of pure strategists it is
beaten, but if it is comprised of mixed strategists it is not.
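A small simulation illustrates this (a Python sketch, NumPy assumed). Solving (Ap)_1 = (Ap)_2 = (Ap)_3 for the matrix (18) gives the interior rest point p∗ = (5, 8, 6)/19; the replicator dynamics converges to it even though the stability condition (15) fails against, for example, the pure strategy S1:

```python
import numpy as np

# The Rock-Scissors-Paper variant (18); its interior rest point (5, 8, 6)/19
# attracts the replicator dynamics even though it is not an ESS.
A = np.array([[0.0, 1.0, -1.0],
              [-2.0, 0.0, 2.0],
              [2.0, -1.0, 0.0]])
E = lambda x, y: x @ A @ y

p_star = np.array([5.0, 8.0, 6.0]) / 19.0
# Rest point check: every pure strategy earns the same payoff against p_star.
assert np.allclose(A @ p_star, E(p_star, p_star) * np.ones(3))

# Not an ESS: against the pure strategy S1, condition (14) holds only with
# equality, and the stability condition (15) then fails.
q = np.array([1.0, 0.0, 0.0])
assert np.isclose(E(q, p_star), E(p_star, p_star))
assert E(p_star, q) < E(q, q)

# Yet the dynamics converges to p_star from an interior starting point.
p = np.array([0.4, 0.3, 0.3])
dt = 0.01
for _ in range(50000):           # Euler integration to T = 500
    f = A @ p
    p = p + dt * p * (f - p @ f)
assert np.allclose(p, p_star, atol=1e-3)
```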
Some classic games
Hawk-Dove game I
Each individual plays against other randomly chosen individuals.
A contest contains two equally able players competing for a reward (e.g. a
territory) of value V > 0. Each of the contestants may play one of two pure
strategies, Hawk (H) and Dove (D).
If two Doves meet, they each display, and each will gain the reward with
probability 1/2, giving an expected reward of E[D, D] = V/2.
If a Hawk meets a Dove, the Hawk will escalate, the Dove will retreat
(without injury) and so the Hawk will gain the reward V, and the Dove will
receive 0. Hence, E[H, D] = V and E[D, H] = 0.
If two Hawks meet, they will escalate until one receives an injury, which is a
cost C > 0 (the equivalent of a reward of −C). The injured animal retreats,
leaving the other with the reward. The expected reward for each individual is
thus E[H, H] = (V − C)/2.
Hawk-Dove game II
We have a matrix game with two pure strategies, where the payoff matrix is

              Hawk        Dove
    Hawk   (V − C)/2       V
    Dove       0          V/2      (19)
We write a mixed strategy as p = (p, 1 − p), meaning that a player plays Hawk
with probability p and Dove otherwise.
Since E[H, D] > E[D, D], Dove is never an ESS.
Hawk (or the strategy (1, 0)) is a pure ESS if E[H, H] > E[D, H] i.e. if
V > C. In fact Hawk is a pure ESS if V ≥ C.
Hawk-Dove game III
For a mixed strategy p = (p, 1 − p) to be an ESS we require that

E[H, p] = E[D, p]  ⇒  p (V − C)/2 + (1 − p)V = (1 − p) V/2,   (20)
It is easy to show that the required stability condition E[p, q] > E[q, q] holds
for all q ≠ p.
Thus we get that p = (V/C, 1 − V/C) is the unique ESS when V < C.
This means that there is a unique ESS in the Hawk-Dove game, irrespective of
the values of V and C.
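The equilibrium condition (20) and the stability condition can be verified numerically; a minimal Python sketch (NumPy assumed; V = 2, C = 5 are illustrative values with V < C):

```python
import numpy as np

# Hawk-Dove with illustrative values V = 2, C = 5 (so V < C).
V, C = 2.0, 5.0
A = np.array([[(V - C) / 2, V],
              [0.0, V / 2]])
E = lambda x, y: x @ A @ y

p = np.array([V / C, 1 - V / C])          # candidate mixed ESS (V/C, 1 - V/C)
H, D = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Equilibrium condition (20): Hawk and Dove do equally well against p.
assert np.isclose(E(H, p), E(D, p))

# Stability condition: E[p, q] > E[q, q] for every mutant q != p.
for q1 in np.linspace(0.0, 1.0, 101):
    q = np.array([q1, 1 - q1])
    if not np.isclose(q1, V / C):
        assert E(p, q) > E(q, q)
```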
Prisoner’s dilemma I
In the Prisoner’s dilemma, a pair of individuals can either cooperate (play C)
or try to obtain an advantage by defecting and exploiting the other (play D).
We obtain a two player game where the payoffs are given by the payoff matrix

                 Cooperate   Defect
    Cooperate        R         S
    Defect           T         P       (21)
where R, S, T and P are termed the Reward, Sucker’s payoff, Temptation and
Punishment respectively.
Whilst the individual numbers are not important, for the classical dilemma we
need T > R > P > S.
It turns out that the additional condition 2R > S + T is also necessary for the
evolution of cooperation.
Prisoner’s dilemma II
One prisoner sits in their cell and thinks about what to do. If the other
cooperates, then the reward for playing Defect is bigger than that for playing
Cooperate, so they should play Defect.
But if the other plays Defect it is also the case that Defect is best. Thus
irrespective of what the other will do, Defect is best.
The other prisoner comes to the same conclusion and so both players defect,
and receive a payoff of P.
The outcome seems paradoxical because R > P, i.e. they would both be better
off if they cooperated.
Each player following their own self interest in a rational manner has led to
the worst possible collective result, with a total payoff of 2P.
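The dominance argument amounts to a few inequalities; a small Python sketch with the classic illustrative values T = 5, R = 3, P = 1, S = 0 (the temptation payoff is named T_ here to avoid clashing with transpose notation):

```python
# Illustrative Prisoner's dilemma payoffs satisfying T > R > P > S and 2R > S + T.
T_, R, P, S = 5, 3, 1, 0

# Defect is the best reply regardless of the opponent's choice:
assert T_ > R   # against a cooperator, Defect (T) beats Cooperate (R)
assert P > S    # against a defector, Defect (P) beats Cooperate (S)

# ...yet mutual defection gives the worst collective total, and the extra
# condition 2R > S + T means mutual cooperation beats alternating exploitation.
assert 2 * R > 2 * P
assert 2 * R > S + T_
```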
The war of attrition I
The two games above are classic examples of a matrix game, where there are
a finite number of pure strategies. We now introduce a game with an infinite
(and uncountable) number of pure strategies.
Consider a situation that arises in the Hawk-Dove game type conflict, namely
two individuals compete for a reward of value V and both choose to display.
Both individuals keep displaying for some time, and the first to leave does not
receive anything, the other gaining the whole reward.
For simplicity, we assume that individuals have no active strategy, i.e. the
time they are prepared to wait is determined before the conflict begins.
Although there is no real harm done to the individuals during their displays,
each individual pays a cost related to the length of the contest.
The war of attrition II
A pure strategy of an individual is to be prepared to wait for a predetermined
time t ≥ 0.
We will denote such a strategy by St . It is clear that there are uncountably
many pure strategies.
A mixed strategy of an individual is a measure p on [0, ∞).
The measure p determines that an individual chooses a strategy from a set A
with probability p(A).
One can often represent the mixed strategy p by a density function p(x).
The war of attrition III
The reward for the winner of the contest is given by V. The one that is
prepared to display the longer gets the reward, but the cost of the contest of
length x is C(x) = cx for some given constant c.
For pure strategies Sx and Sy we get

E[Sx , Sy ] = { V − cy      if x > y,
              { V/2 − cx    if x = y,      (22)
              { −cx         if x < y.

For mixed strategies p, q we get

E[p, q] = ∬_{(x,y)∈[0,∞)²} E[Sx , Sy ] dp(x) dq(y).   (23)
The war of attrition IV
It is immediately clear that there is no pure ESS.
If p with a density function p is an ESS, then for almost all t ≥ 0 for which
p(t) > 0 we need
E[St , p] = E[p, p],
(24)
which, by (23), yields

∫_0^t (V − cx) p(x) dx + ∫_t^∞ (−ct) p(x) dx = E[p, p].   (25)
The war of attrition V
Differentiating (25) with respect to t (assuming that such a derivative exists)
gives

(V − ct) p(t) − c ∫_t^∞ p(x) dx + ct p(t) = 0.   (26)
Solving this equation with appropriate boundary conditions yields
p(t) = (c/V) exp(−ct/V).
We can verify that this is indeed an ESS by proving stability, but we will not
do this here.
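The indifference property behind (24) can be checked numerically for this density: E[St , p] should take the same value (in fact zero) for every pure strategy St. A minimal Python sketch (NumPy assumed; the values of V and c are illustrative):

```python
import numpy as np

# Candidate ESS density p(t) = (c/V) exp(-ct/V), with illustrative V, c.
V, c = 2.0, 1.0
lam = c / V

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids version-specific NumPy helpers)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def E_St_p(t):
    """E[S_t, p] = int_0^t (V - cx) p(x) dx - ct int_t^inf p(x) dx, as in (25)."""
    if t == 0.0:
        return 0.0                   # both terms vanish at t = 0
    x = np.linspace(0.0, t, 20001)
    dens = lam * np.exp(-lam * x)
    win_part = trapezoid((V - c * x) * dens, x)
    tail = np.exp(-lam * t)          # exact tail integral of the exponential
    return win_part - c * t * tail

# Indifference: every pure strategy earns the same payoff (zero) against p.
for t in [0.0, 0.5, 1.0, 3.0, 10.0]:
    assert abs(E_St_p(t)) < 1e-4
```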
The sex ratio game I
Why is the sex ratio in most animals close to a half? A naive look at this
problem would contend that, for the good of the species, there should be far
fewer males than females.
Indeed it is often the case that most offspring are sired by a relatively small
number of males and most males make no contribution at all.
To make the problem more specific, let us suppose that in a given population,
any individual will have a fixed number of offspring, but that it can choose the
proportion which are male and female.
We also assume that all females (males) are equally likely to be the mother
(father) of any particular child in the next generation.
Our task is to provide a reasonable argument to answer the question of what is
the best choice of the sex ratio.
The sex ratio game II
A strategy of an individual will be a choice of the proportion of male
offspring. We consider a small proportion of the population (fraction ε) of
individuals playing a (potentially) different strategy to the rest.
Let p denote the strategy of our invading group, and m denote the strategy
played by the rest of the population.
Since every individual has the same number of offspring, we cannot use this
number as the payoff, and we consider the number of grandchildren.
Assume that the total number of individuals in the next generation is N1 , and
the total number in the following generation is N2 .
In fact, as we shall see, these numbers are irrelevant, but it is convenient to
consider them at this stage.
The sex ratio game III
The proportion of males in the generation of offspring is m1 = (1 − ε)m + εp.
Given m, what is the best reply p to m? A strategy p then will be an ESS if p is
the best reply to itself and uninvadable by any other strategy.
The expected number of offspring of one of our focal individual's offspring is

E[p; δm ] = p × N2/(m1 N1) + (1 − p) × N2/((1 − m1)N1)
          = (N2/N1) ( p/m1 + (1 − p)/(1 − m1) )
          ≈ (N2/N1) ( p/m + (1 − p)/(1 − m) ).   (27)
To find the best choice of p, we maximise this function E[p; δm ]. This gives
p = 1 if m < 1/2 and p = 0 if m > 1/2. For m = 1/2 we need ε > 0, and
then m = 1/2 performs strictly better than any mutant.
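The maximisation of (27) can be illustrated directly (a Python sketch, NumPy assumed; the constant factor N2/N1 is dropped since it does not affect the argmax):

```python
import numpy as np

# Fitness (27) up to the constant N2/N1, in a resident population playing m.
def grandchildren(p, m):
    return p / m + (1 - p) / (1 - m)

ps = np.linspace(0.0, 1.0, 101)

# Best reply: all sons if males are scarce (m < 1/2), all daughters if m > 1/2.
assert ps[np.argmax(grandchildren(ps, 0.3))] == 1.0
assert ps[np.argmax(grandchildren(ps, 0.7))] == 0.0

# At m = 1/2 every p does equally well against the residents.
assert np.allclose(grandchildren(ps, 0.5), 2.0)
```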
Nonlinear games
Overview and general theory
Payoffs for matrix games
Previously we considered matrix games, where
E[p; qT ] = E[p, q] = pAq^T = Σi pi (Aq^T)_i = Σj (pA)_j qj = Σ_{i,j} aij pi qj.   (28)
We can see from the above that payoffs are linear in both the strategy of the
focal individual and the strategy of the population, yielding a quadratic form
as the payoff function E[p; qT ].
This has many nice static and dynamic properties as we have seen.
Linearity on the left
We say that E is linear in the focal player strategy (or linear on the left) if

E[ Σi αi pi ; Π ] = Σi αi E[pi ; Π]   (29)

for every population Π, every m-tuple of individual strategies p1 , . . . , pm and
every collection of constants αi ≥ 0 such that Σi αi = 1.
Linearity on the right
Also, we say that E is linear in the population strategy (or linear on the right)
if
#
"
X
X
αi E[p; δqi ]
(30)
E p;
αi δqi =
i
i
for every individual strategy p, every m−tuple
q1 , . . . , qm and every
P
collection of αi ’s from [0, 1] such that i αi = 1.
Recall that for matrix games, the payoff to an individual is the same whether
it faces opponents playing a polymorphic mixture of pure strategies or a
monomorphic population.
Polymorphic-monomorphic equivalence
We say that a game has polymorphic-monomorphic equivalence if for every
strategy p, any finite collection of strategies {qi }, i = 1, . . . , m, and any
corresponding collection of constants αi ≥ 0 such that Σi αi = 1 we have

E[ p ; Σi αi δqi ] = E[ p ; δ_{Σi αi qi} ].   (31)
We note that polymorphic-monomorphic equivalence holds only in respect of
the static notion of ESSs, and there is no such equivalence in terms of
dynamics.
Examples of nonlinearity I
Quite often, the payoff is linear in the focal player strategy because by its very
definition E[p; Π] is set to equal the expected payoff of the focal individual
playing a pure strategy Si with probability pi for all i.
This is the case for matrix games but also for example in the sex ratio game.
It is common, however, that the payoff is nonlinear in the population strategy.
This occurs whenever the game does not involve pairwise contests against
opponents playing pure strategies (or equivalent mixed combinations).
Alternatively, the payoff will also be nonlinear in the population strategy if it
does involve such pairwise contests, but that these are not independent
contests against randomly chosen opponents.
Examples of nonlinearity II
A third situation occurs when a strategy is a pure strategy drawn from a
continuum, such as a level of defence or a volume of sperm, but that the
payoff is nonlinear as a function of this pure strategy.
This will lead to games which are nonlinear in the focal player strategy.
It can be argued that nearly all real situations feature nonlinearity of at least
one of the types described above.
When models of real behaviours are developed, the payoffs involved are
indeed almost always nonlinear in some way.
Some results for linear games can be generalized and reformulated even for
nonlinear games.
ESSs for nonlinear games I
Theorem 1 can be generalized as follows:
Theorem 4
For games with generic payoffs, if the incentive function

h_{p,q,u} = E[p; (1 − u)δp + uδq] − E[q; (1 − u)δp + uδq]  (32)

is differentiable (from the right) at u = 0 for every p and q, then p is an ESS if and only if for every q ≠ p:
1. E[p; δp] ≥ E[q; δp], and
2. if E[p; δp] = E[q; δp], then ∂/∂u h_{p,q,u} |_{u=0} > 0.
ESSs for nonlinear games II
Proof
Recall that a strategy p is an ESS if and only if for every q there is uq > 0
such that for every u ∈ (0, uq )
hp,q,u > 0.
(33)
So, if p is an ESS, then by the continuity of hp,q,u in u (it is differentiable so is
clearly continuous) as we take a limit of (33) for u → 0+, we get that
E[p; δp ] ≥ E[q; δp ]. If E[p; δp ] = E[q; δp ], then differentiating (33) at u = 0
yields ∂/∂u h_{p,q,u} |_{u=0} ≥ 0.
Since we assume the payoffs are generic, we cannot have ∂/∂u h_{p,q,u} |_{u=0} = 0, and the statement of Theorem 4 thus follows.
We omit the proof of the reverse implication.
A quadratic example I
As an example, consider a two strategy game which satisfies
polymorphic-monomorphic equivalence, where the payoff to an individual
playing pure strategy S1 with probability p in a population where the
probability of an individual playing S1 is r is given by
E[p; δr] = p(a1 r^2 + a2 r + a3).  (34)
Find conditions on the ESSs of the game in terms of the parameters a1 , a2 and
a3 .
The pure strategy S2 (or p = 0) is an ESS if a3 < 0 (a3 = 0 is a non-generic
case).
Similarly the pure strategy S1 (or p = 1) is an ESS if a1 + a2 + a3 > 0.
A quadratic example II
An internal strategy p (with 0 < p < 1) can be an ESS only if a1 p^2 + a2 p + a3 = 0.
Depending upon the values of a1, a2 and a3 there may be 0, 1 or 2 values of p which satisfy this condition.
It is clear that the first condition of Theorem 4 is always satisfied with equality, so that we need

∂/∂u h_{p,q,u} |_{u=0} > 0,  (35)

where

h_{p,q,u} = (p − q)(a1 (p + u(q − p))^2 + a2 (p + u(q − p)) + a3).  (36)

This gives the required condition as 2 a1 p + a2 < 0.
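As a quick numerical check of these conditions, the sketch below uses illustrative coefficients a1, a2, a3 of our own choosing (not taken from the text), finds the internal roots of a1 p^2 + a2 p + a3 = 0, and tests both the analytic ESS condition 2 a1 p + a2 < 0 and the sign of the right derivative of h_{p,q,u} at u = 0.

```python
import math

# Illustrative coefficients (an assumption, not from the slides).
a1, a2, a3 = -1.0, 1.0, -0.2

def E(p, r):
    """Payoff (34) to a p-player in a population playing r."""
    return p * (a1 * r**2 + a2 * r + a3)

def h(p, q, u):
    """Incentive function (32) for this game."""
    x = p + u * (q - p)          # population mean after a u-fraction invades
    return E(p, x) - E(q, x)

# Internal ESS candidates: roots of a1*p^2 + a2*p + a3 = 0.
disc = math.sqrt(a2**2 - 4 * a1 * a3)
for p in sorted(((-a2 - disc) / (2 * a1), (-a2 + disc) / (2 * a1))):
    analytic = 2 * a1 * p + a2 < 0                # ESS condition from the text
    q, du = p / 2 + 0.05, 1e-7                    # an arbitrary mutant strategy
    numeric = (h(p, q, du) - h(p, q, 0.0)) / du > 0
    print(f"p = {p:.4f}: ESS by 2*a1*p + a2 < 0: {analytic}, "
          f"by dh/du > 0: {numeric}")
```

For these coefficients the two roots are approximately 0.276 and 0.724, and only the larger one satisfies 2 a1 p + a2 < 0; the numerical derivative of h agrees.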
The differentiability of the payoff function
Under most biologically reasonable conditions, the function
u ↦ E[r; (1 − u)δp + uδq]  (37)
is differentiable for every u ∈ [0, 1] and thus Theorem 4 can still be used.
Later, when considering playing the field games, we will see one of the rare instances where the function h_{p,q,u} is not differentiable for all p; yet it is still differentiable for all important values of p.
Below we investigate different scenarios where the payoffs are nonlinear.
Linearity on the left: an equilibrium condition I
Theorem 5
Let E be linear in the focal player strategy, i.e. (29) holds, and let the function h_{p,q,u} be differentiable w.r.t. u at u = 0.
Let p = (pi ) be an ESS. Then E[p; δp ] = E[Si ; δp ] for any pure strategy Si
such that i ∈ S(p) = {j; pj > 0}.
We note that it is enough to assume hp,q,u to be continuous.
Linearity on the left: an equilibrium condition II
Proof
By Theorem 4, p is a best response to itself and thus
max_{j∈S(p)} E[Sj; δp] = Σ_{i∈S(p)} pi max_{j∈S(p)} E[Sj; δp] ≥ Σ_{i∈S(p)} pi E[Si; δp]  (38)
= E[p; δp] ≥ max_{j∈S(p)} E[Sj; δp].  (39)
Hence, we must have equalities everywhere in the above and the statement
follows.
Convexity on the left
Note that if the payoff is not linear but strictly convex, i.e.

Σ_i pi E[Si; δq] > E[p; δq]  (40)

for all q and all p with at least two elements in S(p), then the inequality in (39) would be strict for any ESS p that is not pure.
Thus, strict convexity of payoffs in the focal player strategy forces the ESSs to
be pure.
Population games I
Lemma 1 below shows that the payoffs of games that are linear in the focal
player strategy and satisfy polymorphic monomorphic equivalence (31) must
be of a special form. Such games are called population games, or playing the
field games.
Lemma 1
If the payoffs of the game are linear in the focal player strategy (i.e. satisfy (29)) and satisfy polymorphic-monomorphic equivalence (31), then for every x, y, z and every ε ∈ [0, 1]

E[x; (1 − ε)δy + εδz] = Σ_i xi fi((1 − ε)y + εz),  (41)

where fi(q) = E[Si; δq].
Population games II
Proof
From the assumptions of Lemma 1, it follows that for every x, y, z and every ε ∈ [0, 1]

E[x; (1 − ε)δy + εδz] = Σ_i xi E[Si; (1 − ε)δy + εδz]  (42)
= Σ_i xi E[Si; δ_{(1−ε)y+εz}]  (43)
= Σ_i xi fi((1 − ε)y + εz).  (44)
ESSs in population games I
If we write that payoffs are such that E[p; δq] = Σ_i pi fi(q) for some functions fi, we mean that the payoffs are linear in the focal player strategy and satisfy polymorphic-monomorphic equivalence.
Theorem 6
Let the payoffs be such that E[p; δq] = Σ_i pi fi(q) for some continuous functions fi.
Then p is an ESS if and only if there exists εp > 0 such that for all q ≠ p and all ε ∈ (0, εp) we have

E[p; (1 − ε)δp + εδq] > E[q; (1 − ε)δp + εδq].  (45)
ESSs in population games II
Proof
h_{p,q,ε} = E[p; (1 − ε)δp + εδq] − E[q; (1 − ε)δp + εδq]  (46)
= Σ_i (pi − qi) fi((1 − ε)p + εq).  (47)

Thus, if p is an ESS and q is any strategy, there is ε_{p,q} ∈ (0, 1] such that h_{p,q,ε} > 0 for all ε ∈ (0, ε_{p,q}); we can pick ε_{p,q} to be the maximum possible.
Since the strategy simplex is compact and the fi's are continuous, the fi's are uniformly continuous. Thus h_{p,q′,ε} > 0 for all q′ close enough to q. Hence, for every q, there exists a neighbourhood U(q) of q such that ε_{p,q′} ≥ ε_{p,q} for every q′ ∈ U(q). Since the strategy simplex is compact, there exists q0 such that ε_{p,q0} ≤ ε_{p,q} for every q; then εp = ε_{p,q0} > 0 is the required uniform invasion barrier.
ESSs and local superiority I
Theorem 7
Let the payoffs be such that E[p; δq] = Σ_i pi fi(q) for some continuous functions fi.
Then the strategy p is an ESS if and only if it is locally superior, i.e. there is a neighbourhood U(p) of p such that

E[p; δq] > E[q; δq], for all q (≠ p) ∈ U(p).  (48)
ESSs and local superiority II
Proof
Let d denote the shortest distance from strategy p to a face of the strategy simplex not containing p.
If p is an ESS, then by Theorem 6 there exists a uniform invasion barrier εp.
Now, if x is such that |x − p| < d εp and x ≠ p, then there is q such that x = (1 − ε)p + εq for some ε ∈ (0, εp).
ESSs and local superiority III
Since the following are equivalent:

E[p; (1 − ε)δp + εδq] > E[q; (1 − ε)δp + εδq],  (49)
ε Σ_i pi fi(x) > ε Σ_i qi fi(x),  (50)
Σ_i pi fi(x) > Σ_i ((1 − ε)pi + εqi) fi(x),  (51)
E[p; δx] > E[x; δx],  (52)
we get that p is locally superior.
Conversely, if p is locally superior and q is an invading strategy, take ε small
enough so that x = (1 − ε)p + εq ∈ U(p) and thus (52) holds. Thus, (49)
holds and so p is an ESS.
Nonlinear games
Playing the field
Playing the field
In this section we consider payoff functions of the form
E[p; Π] = Σ_i pi fi(Π)  (53)
where the fi ’s are in general nonlinear functions, and Π represents the strategy
played by the population.
Playing the field games are perhaps the most straightforward way of
incorporating nonlinearity into a game model, as the fitness function of the
individuals involved automatically includes the population frequencies of the
different strategies.
Example: the sex ratio game
An example is the sex ratio game that we considered earlier, where a strategy
was effectively mixed, with two pure strategies “male” and “female”. The
fitness of an individual with strategy p was given by (27) as
E[p; δm] = p/m + (1 − p)/(1 − m),  (54)

so that in the notation of equation (53) we have

f1(m) = 1/m,  (55)
f2(m) = 1/(1 − m).  (56)
Optimal foraging
Consider a single species foraging on N patches, with resources ri > 0 for
i = 1, . . . , N which are shared equally by all who choose the patch.
There are N pure strategies for this game, each corresponding to foraging on
one patch only, a mixed strategy x = (xi ) meaning to forage at patch i with
probability xi .
The general payoff to an individual using strategy x = (xi ) against a
population playing y = (yi ) is
E[x; δy] = ∞, if xi > 0 for some i such that yi = 0,
E[x; δy] = Σ_{i: xi>0} xi ri/yi, otherwise,  (57)

where ri is a constant corresponding to the quality of patch i.
Optimal foraging II
It is obvious from (57) that if there is an ESS p, it must be internal, i.e. we
must have pi > 0 for all i = 1, . . . , N.
Thus any problematic cases with potentially infinite payoffs do not arise in
any analysis.
In particular Theorem 6 still holds even though the fitness functions are not
continuous everywhere, since they are continuous in the vicinity of the ESS.
Note that the sex ratio game is a special case with N = 2 and r1 = r2 .
Optimal foraging: the ESS I
Now, let us show that p = (pi) given by pi = ri / Σ_{j=1}^N rj is an ESS.
Clearly, E[q; δp] = E[p; δp] for all q. Moreover, since this game satisfies polymorphic-monomorphic equivalence (31), then

E[x; (1 − u)δy + uδz] = E[x; δ_{(1−u)y+uz}]  (58)

and thus

h_{p,q,u} = E[p; (1 − u)δp + uδq] − E[q; (1 − u)δp + uδq]  (59)
= Σ_{i=1}^N (pi − qi) ri / (pi + u(qi − pi))  (60)
= Σ_{i=1}^N ri ((pi − qi)/pi) (1 − u (qi − pi)/pi + . . .),  (61)
Optimal foraging: the ESS II
This implies that

∂/∂u h_{p,q,u} |_{u=0} = Σ_{i=1}^N ri ((pi − qi)/pi)^2 > 0.  (62)

Thus p is an ESS, using Theorem 4.
Also, note (and this is called Parker's matching principle) that at the ESS strategy p, we have

pi/pj = ri/rj.  (63)
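The matching principle is easy to verify numerically. The sketch below uses hypothetical patch qualities r = (3, 1, 2) (our choice, not from the text) and checks that, with pi proportional to ri, every patch yields the same intake, that any mutant q does exactly as well in an unperturbed ESS population, and that it does strictly worse once invaders are present at frequency u.

```python
r = [3.0, 1.0, 2.0]                 # hypothetical patch qualities (assumed)
total = sum(r)
p = [ri / total for ri in r]        # candidate ESS: p_i proportional to r_i

def payoff(x, y):
    """E[x; delta_y] from (57): intake summed over the patches x uses."""
    return sum(xi * ri / yi for xi, ri, yi in zip(x, r, y) if xi > 0)

# Parker's matching principle (63): every patch gives the same intake sum(r).
assert all(abs(ri / pi - total) < 1e-9 for ri, pi in zip(r, p))

# Against the ESS population, every mutant q does exactly as well as p ...
q = [0.5, 0.3, 0.2]
assert abs(payoff(q, p) - payoff(p, p)) < 1e-9

# ... but strictly worse once mutants form a fraction u of the population.
u = 0.1
mix = [(1 - u) * pi + u * qi for pi, qi in zip(p, q)]
assert payoff(p, mix) > payoff(q, mix)
print("matching principle and ESS checks passed")
```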
Nonlinear games
Nonlinearity due to non-constant interaction rates
Non-constant interaction rates I
The second scenario where games can be nonlinear is where the strategies
employed by the players affect the frequency of their interactions.
The actual pairwise interactions themselves can be very simple, but if the
strategy affects the interaction rate, then the overall payoff function can be
complicated.
The simplest non-trivial scenario to consider where interaction rates are not
constant is a two player contest with two pure strategies S1 and S2 , with
payoffs given by a standard payoff matrix
a b
c d ,  (64)
but where the three types of interaction happen with probabilities not simply
proportional to their frequencies.
Non-constant interaction rates II
Assume that each pair of S1 individuals meet at rate r11 , each pair of S1 and S2
individuals meet at rate r12 and each pair of S2 individuals meet at rate r22 .
Thus the frequency of interactions of an S1 individual with other S1
individuals is r11 (N − 1)p1 and the frequency with S2 individuals is r12 Np2 ,
for population size N and pi the proportion of Si -individuals in the population.
For N large enough, we get (N − 1)/N ≈ 1 and thus this yields the following
nonlinear payoff function
E[S1; p^T] = (a r11 p1 + b r12 p2) / (r11 p1 + r12 p2),  (65)
E[S2; p^T] = (c r12 p1 + d r22 p2) / (r12 p1 + r22 p2).  (66)

This reduces to the standard payoffs for a matrix game when r11 = r12 = r22.
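A small numerical sketch (with payoff values a = 1, b = 4, c = 3, d = 2 of our own choosing, so that a < c and b > d) shows how non-uniform meeting rates move the mixed equilibrium, which solves E[S1; p^T] = E[S2; p^T]:

```python
a, b, c, d = 1.0, 4.0, 3.0, 2.0     # illustrative payoffs with a < c, b > d

def payoffs(p1, r11=1.0, r12=1.0, r22=1.0):
    """Average per-interaction payoffs (65)-(66); p1 = frequency of S1."""
    p2 = 1.0 - p1
    E1 = (a * r11 * p1 + b * r12 * p2) / (r11 * p1 + r12 * p2)
    E2 = (c * r12 * p1 + d * r22 * p2) / (r12 * p1 + r22 * p2)
    return E1, E2

def mixed_equilibrium(**rates):
    """Bisect on E1 - E2, which decreases in p1 for these parameters."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        E1, E2 = payoffs(mid, **rates)
        lo, hi = (mid, hi) if E1 > E2 else (lo, mid)
    return 0.5 * (lo + hi)

print(mixed_equilibrium())                  # uniform rates: (b-d)/(b-d+c-a) = 1/2
print(mixed_equilibrium(r11=2.0, r22=0.5))  # non-uniform rates shift the mixture
```

With uniform rates the equilibrium is the usual matrix-game value 1/2; doubling the S1–S1 meeting rate and halving the S2–S2 rate moves it to 1/3.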
Non-constant interaction rates: the ESSs I
How do these non-uniform interaction rates affect the game?
In particular, when are there differences between this case and the simple two
player matrix game?
In the simple game (uniform interaction rates) if a < c and b > d there is a
mixed ESS, and this is not altered by the use of non-uniform interaction rates,
although the ESS proportions of the strategies do change.
If a > c and b < d then there are two ESSs in the simple case, and this is also
always true for non-uniform interactions, although the location of the unstable
equilibrium between the pure strategies changes, which affects the dynamics.
Non-constant interaction rates: the ESSs II
For non-uniform interaction rates, the cases with c > a > d > b or d > b > c > a yield a solution that does not occur for matrix games.
Under some circumstances there are two ESSs rather than just one: a pure (0, 1) ESS, but also a mixed ESS.
For c > a > d > b, setting r12 = 1 without loss of generality, this occurs if

r11 r22 > ((√((a − b)(c − d)) + √((a − c)(b − d))) / (d − a))^2.  (67)

The Prisoner's Dilemma is an example where c > a > d > b. Setting r11 = r22 (= r), as r → ∞ the proportion of cooperators in the mixture tends to 1, and the basin of attraction of the proportion of cooperators p in the replicator dynamics increases, tending to p ∈ (0, 1].
Nonlinear games
Nonlinearity in the strategy of the focal player
Nonlinearity on the left
Here we consider the third case, involving games where the strategy of an individual is described by a single number (or a vector) which does not give the probability of playing a given pure strategy but instead describes a unique behaviour, such as the intensity of a signal.
We note that this is also the scenario generally considered in Adaptive
Dynamics, though in practice stronger assumptions are generally made than
we use here.
For such a strategy set, some regularity properties of the payoffs as a function
of the strategy are generally assumed.
It is often assumed that the function is continuous and even differentiable
everywhere, or almost everywhere.
In practice, almost everywhere will usually mean everywhere but a small
number of critical points.
A model of tree growth
Consider the following game-theoretical model of tree growth.
We assume that a tree has to grow large enough in order to get sunlight and
not get overshadowed by neighbours; yet the more the tree grows the more of
its energy has to be devoted to “standing” rather than photosynthesis.
Let h ∈ [0, 1] be the normalized height of the tree; here 1 means the maximum
possible height of a tree.
We define the fitness of a tree of height h in the forest where all other trees are
of height H by
E[h; δH] = (1 − h^3) · (1 + exp(H − h))^(−1),  (68)

where f(h) = 1 − h^3 represents the proportion of leaf tissue of a tree of height h and g(h − H) = (1 + exp(H − h))^(−1) represents the advantage or disadvantage of being bigger/smaller than one's neighbour.
A model of tree growth: the best response
Now we will determine the ESSs for the tree, i.e. what are the evolutionarily
stable heights.
First, in accordance with Theorem 4 we will determine the best response h to
a general tree height H.
We get

∂/∂h E[h; δH] = [−3h^2 (1 + exp(H − h)) + (1 − h^3) exp(H − h)] / (1 + exp(H − h))^2.  (69)

Thus, the derivative is positive for h = 0 and negative for h = 1, and it is decreasing in h. So there is only one root of the derivative, and the function h ↦ E[h; δH] attains a maximum at that root, i.e. the root is the best response.
A model of tree growth: the ESS I
Since the ESS must be a best response to itself, the ESS must solve the equation

0 = [−3H^2 (1 + exp(H − H)) + (1 − H^3) exp(H − H)] / (1 + exp(H − H))^2  (70)
  = (1/4)(−6H^2 + (1 − H^3)).  (71)

The above equation has only one root, and the crossing of the x axis happens with negative derivative, so that the root is the unique ESS, as we see in the following figure.
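The ESS height can also be found numerically; the sketch below bisects the equilibrium condition (71) and then confirms that the fitness (68) is stationary in the focal height at the root.

```python
import math

def fitness(h, H):
    """E[h; delta_H] from (68): leaf proportion times the height advantage."""
    return (1.0 - h**3) / (1.0 + math.exp(H - h))

# ESS condition (71): -6H^2 + (1 - H^3) = 0, solved by bisection on [0, 1].
f = lambda H: -6.0 * H**2 + 1.0 - H**3
lo, hi = 0.0, 1.0                    # f(0) = 1 > 0 and f(1) = -6 < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
H_star = 0.5 * (lo + hi)

dh = 1e-6                            # numerical derivative in the focal height
slope = (fitness(H_star + dh, H_star) - fitness(H_star - dh, H_star)) / (2 * dh)
print(f"ESS height H* = {H_star:.4f}, dE/dh at the ESS = {slope:+.1e}")
```

The root comes out at roughly H ≈ 0.395, consistent with the best response crossing the line h = H in the figure.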
A model of tree growth: the ESS II
[Figure: the best response h (vertical axis) plotted against the height of the population H (horizontal axis), both on [0, 1].]
The best response and ESS for the tree height competition game. The unique
ESS occurs at the point where the best response line crosses the dashed h = H
line.
Multi-player games
Outline
1
Introduction to evolutionary games
What is a game?
Two approaches to game analysis
Some classic games
2
Nonlinear games
Overview and general theory
Playing the field
Nonlinearity due to non-constant interaction rates
Nonlinearity in the strategy of the focal player
3
Multi-player games
Multi-player matrix games
The multi-player war of attrition
Introducing multi-player games
In the previous sections we have considered games with two individuals only,
or games played against “the population”.
We now consider situations where individual conflicts consist of a number of
individuals greater than two.
Such games have only rarely been considered with regard to biological
populations, although multi-player games are common in economics.
All the games that we consider are contests involving a randomly selected
group of (at least three) players from a large population.
Multi-player games
Multi-player matrix games
Introducing multi-player matrix games
We shall consider an infinite population, from which groups of m players are selected at random to play a game.
The expected payoff to an individual is obtained by averaging over the
rewards, weighted by their probabilities, as for matrix games.
In its most general form where the ordering of individuals matter, extending
the bimatrix game case to m players, the payoff to each individual in position
k is governed by an m-dimensional payoff matrix.
It is assumed that there is no significance to the ordering of the players, as a
natural extension of matrix games (in contrast to bimatrix games).
Thus the payoff to an individual depends only upon its strategy and the
combination of the strategies of its opponents and only one such
m-dimensional matrix is needed.
Symmetric multi-player matrix games
We will call such games symmetric and write the payoffs for a 3-player
n-strategy game as A = (A1 , A2 , . . . , An ) where Aj are the payoffs assuming
the focal player plays pure strategy Sj , and
Aj =
aj11 aj12 · · · aj1n
aj21 aj22 · · · aj2n
. . .
ajn1 ajn2 · · · ajnn  (72)
To add an extra player, the number of matrices required in this formulation is multiplied by the number of strategies n. In the fully general case there are n^m matrix entries.
Multi-player matrix payoffs I
However, we have some symmetry conditions.
For the three player case, these are
apqr = aprq , for all p, q, r = 1, 2, . . . , n.
(73)
In general these are
ai1 ...im = ai1 σ(i2 )...σ(im )
(74)
for any permutation σ of the indices i2 , . . . , im .
The payoff to an individual playing p in a contest with individuals playing
p1 , p2 , . . . , pm−1 respectively is written as E[p; p1 , p2 , . . . , pm−1 ].
As the ordering is irrelevant, when some strategies are identical a power notation is used, for example E[p; p1, p2, p3^(m−3)].
Multi-player matrix payoffs II
General payoffs are given as follows:

E[p; p1, p2, . . . , pm−1] = Σ_{i=1}^n pi Σ_{i1=1}^n · · · Σ_{im−1=1}^n a_{i i1 i2 ... im−1} Π_{j=1}^{m−1} p_{j,ij},  (75)

where pj = (pj,1, pj,2, . . . , pj,n).
As long as groups are selected from the population completely at random, as
is usually assumed, then there is no real difference between symmetric and
non-symmetric games.
For example in a population playing 3-player games every individual is
equally likely to occupy any of the ordered positions, and in particular the
term aijk will have identical weighting to aikj in the payoff to an i-player. Thus
the sum of these two can be replaced by twice their average.
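For a 3-player, 2-strategy case, formula (75) can be coded directly as a sum over ordered opponent profiles. The tensor entries below are invented for illustration, chosen symmetric in the two opponent slots as (74) requires, so that the order of the opponents does not matter.

```python
import itertools

# a[i][i1][i2]: payoff when the focal player uses S_{i+1} and the two
# opponents use S_{i1+1}, S_{i2+1}; values are illustrative assumptions.
a = [[[0.0, 0.6], [0.6, 1.0]],
     [[0.2, 0.5], [0.5, 1.5]]]

def E(p, p1, p2):
    """E[p; p1, p2] via (75); each strategy is (prob of S1, prob of S2)."""
    return sum(p[i] * a[i][i1][i2] * p1[i1] * p2[i2]
               for i, i1, i2 in itertools.product(range(2), repeat=3))

x, y, z = (0.7, 0.3), (0.2, 0.8), (0.9, 0.1)
# Since a[i][i1][i2] = a[i][i2][i1], the ordering of opponents is irrelevant:
assert abs(E(x, y, z) - E(x, z, y)) < 1e-12
print(E(x, y, z))
```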
Multi-player matrix payoffs III
In the context of matrix games, a 2-player game is called (super-)symmetric if aij = aji for all i, j, and a multi-player matrix is super-symmetric if

a_{i1 ... im} = a_{σ(i1) ... σ(im)}  (76)

for any permutation σ of the indices i1, . . . , im.
For example, for super-symmetric three-player three strategy games, there are
ten distinct payoffs.
Without loss of generality we can set the three payoffs a111 = a222 = a333 = 0, and this leaves seven distinct payoffs to consider: a112, a113, a221, a223, a331, a332 and a123.
Before we specify the payoff E[p; Π] to an individual playing p in the
population described by Π, let us consider the specific case of two strategy
games.
Payoffs for two strategies, three and four players I
The complete payoffs to the three-player two strategy game can be written as

a111 a112    a211 a212
a121 a122    a221 a222      (77)

where, as in (74), a112 = a121 and a212 = a221.
Similarly for the four player two strategy game we have

a1111 a1112    a1211 a1212    a2111 a2112    a2211 a2212
a1121 a1122    a1221 a1222    a2121 a2122    a2221 a2222      (78)

with symmetry conditions a1112 = a1121 = a1211 etc., see the following figure.
Payoffs for two strategies, three and four players II
Figure : Visualization of payoffs in 2-strategy m-player games.
Example payoffs
Thus, the payoffs of the game are completely determined by specifying the
payoffs to an individual playing pure strategy i = 1, 2 against m − 1 players, j
of which play strategy S1 (and the other m − 1 − j play strategy S2 ).
Let us denote these payoffs by αij .
Later we look at the game with payoffs

−3/32    0        0   −13/96        0   −13/96      −13/96    0
  0   −13/96   −13/96    0       −13/96    0           0   −3/32      (79)

in the layout of (78).
Polymorphic-monomorphic equivalence
Consider an individual playing strategy x in a population playing y.
A group of m − 1 opponents is chosen and each one of them chooses to play
strategy S1 with probability y1 and strategy S2 with probability y2 . Thus
E[x; δy] = Σ_{l=0}^{m−1} C(m−1, l) y1^l y2^{m−1−l} E[x; S1^l, S2^{m−1−l}],  (80)

where

E[x; S1^l, S2^{m−1−l}] = Σ_{i=1}^2 xi αil.  (81)
It does not matter whether the population is polymorphic or monomorphic and playing the mean strategy, and so multi-player matrix games have the polymorphic-monomorphic equivalence property.
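The expansion (80)-(81) is straightforward to code. The α values below are invented for illustration; the final assertion checks the linearity of E[x; δy] in the focal strategy x.

```python
from math import comb

m = 4                                 # group size (an assumed example value)
# alpha[i][l]: payoff (81) to pure strategy S_{i+1} when l of the m-1
# opponents play S1; values are illustrative assumptions.
alpha = [[0.0, 1.0, 2.0, 3.0],
         [0.5, 1.0, 1.5, 2.0]]

def E(x, y):
    """E[x; delta_y] from (80); x, y are the probabilities of playing S1."""
    return sum(comb(m - 1, l) * y**l * (1 - y)**(m - 1 - l)
               * (x * alpha[0][l] + (1 - x) * alpha[1][l])
               for l in range(m))

# Payoffs are linear in the focal player's strategy:
assert abs(E(0.3, 0.6) - (0.3 * E(1.0, 0.6) + 0.7 * E(0.0, 0.6))) < 1e-12
print(E(0.3, 0.6))
```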
ESSs for multi-player games I
The following definition of an ESS for an m-player game is a natural
extension of the definition for a two player game.
A strategy p in an m-player game is called evolutionarily stable against a
strategy q if there is an εq ∈ (0, 1] such that for all ε ∈ (0, εq ]
E[p; (1 − ε)δp + εδq ] > E[q; (1 − ε)δp + εδq ],
(82)
where

E[x; (1 − ε)δy + εδz] = Σ_{l=0}^{m−1} C(m−1, l) (1 − ε)^l ε^{m−1−l} E[x; y^l, z^{m−1−l}].  (83)
We say that p is an ESS for the game if for every q 6= p, there is εq > 0 such
that (82) is satisfied for all ε ∈ (0, εq ].
ESSs for multi-player games II
As with Theorem 1, we get the following theorem, whose proof we will not show.
Theorem 8
For an m-player matrix game, the mixed strategy p is evolutionarily stable
against q if and only if there is a j ∈ {0, 1, . . . , m − 1} such that
E[p; p^{m−1−j}, q^j] > E[q; p^{m−1−j}, q^j],  (84)
E[p; p^{m−1−i}, q^i] = E[q; p^{m−1−i}, q^i] for all i < j.  (85)
A strategy p is called an ESS at level J if, for every q 6= p, the conditions
(84)-(85) of Theorem 8 are satisfied for some j ≤ J and there is at least one
q 6= p for which the conditions are met for j = J precisely.
ESSs for multi-player games III
If p is an ESS, then by Theorem 8, for all q,
E[p; p^{m−1}] ≥ E[q; p^{m−1}].  (86)

Since the payoffs are linear in the strategy of the focal player, it follows that

E[p; p^{m−1}] = E[q; p^{m−1}], for all q with S(q) ⊆ S(p).  (87)

In the generic case, any pure ESS is of level 0. A mixed ESS cannot be of level 0, but in the generic case any mixed ESS must be of level 1.
ESS supports for multi-player games I
Analogues of the strong restrictions on possible combinations of ESSs for
matrix games do not hold for multi-player games.
The Bishop-Cannings Theorem (which implies that the support of one ESS
cannot be a subset of the support of another) fails already for m = 3.
For m > 3, there can be more than one ESS with the same support as we shall
see in a later example. On the other hand, we still have the following for
m = 3.
A pattern of ESSs is a collection of ESS supports. A pattern is attainable if a
matrix exists which has that pattern.
Theorem 10
It is not possible to have two ESSs with the same support in a three player
matrix game.
ESS supports for multi-player games II
Proof
Suppose that p is an ESS of a 3-player game. Then, by Theorem 8 exactly one
of the following three conditions holds for any q 6= p,
(i) E[p; p, p] > E[q; p, p],
(ii) E[p; p, p] = E[q; p, p] and E[p; q, p] > E[q; q, p],
(iii) E[p; p, p] = E[q; p, p], E[p; q, p] = E[q; q, p] and
E[p; q, q] > E[q; q, q].
Moreover, since q is also an ESS with S(p) = S(q), we have

E[p; p, p] = E[q; p, p],  (88)
E[q; q, q] = E[p; q, q].  (89)
ESS supports for multi-player games III
Hence p must satisfy condition (ii) and thus
E[p; q, p] > E[q; q, p] = E[q; p, q].
(90)
However, by repeating the same process yet starting with q as an ESS, we get
the reverse inequality which is a contradiction.
The incentive function I
Recall that the payoffs of the m-player two strategy matrix game are given by
αil for i = 1, 2 and l = 0, 1, . . . , m − 1.
Let us define βl = α1l − α2l and consider

h(p) = E[S1; δ_{(p,1−p)}] − E[S2; δ_{(p,1−p)}]  (91)
     = Σ_{l=0}^{m−1} C(m−1, l) βl p^l (1 − p)^{m−l−1}.  (92)

The incentive function h quantifies the benefits of using strategy S1 over strategy S2 in a population where everybody else uses strategy p = (p, 1 − p).
Note that h is differentiable, and that the replicator dynamics now becomes

dq/dt = q(1 − q) h(q).  (93)
The incentive function II
Theorem 11
In a generic two strategy m-player matrix game:
1. pure strategy S1 is an ESS (level 0) if and only if βm−1 > 0;
2. pure strategy S2 is an ESS (level 0) if and only if β0 < 0;
3. an internal strategy p = (p, 1 − p) is an ESS if and only if
   a) h(p) = 0, and
   b) h′(p) < 0.
We will not prove this result here.
The incentive function III
Figure : The incentive function and ESSs in multi-player games. The solid dots show
the equilibrium points and the arrows show the direction of evolution under the
replicator dynamics.
The number of ESSs for the two strategy case
Thus, the possible sets of ESSs are one of the following:
1. 0 pure ESSs, and l internal ESSs with l ≤ ⌊m/2⌋;
2. 1 pure ESS, and l internal ESSs with l ≤ ⌊m/2⌋ − 1;
3. 2 pure ESSs, and l internal ESSs with l ≤ ⌊m/2⌋ − 2.
There can be more than one ESS with the same support in a 4-player game as
shown in the next Example.
An example
Now consider an example with the following payoffs:

α13    0       0   α11        0   α22      α22    0
 0    α11     α11   0        α22   0        0    α20      (94)

in the layout of (78), with α11 = α22 = −13/96 and α13 = α20 = −3/32. Thus β0 = 3/32, β1 = −13/96, β2 = 13/96, β3 = −3/32, giving

h(p) = −(3/32) p^3 + (13/32) p^2 (1 − p) − (13/32) p (1 − p)^2 + (3/32) (1 − p)^3  (95)
     = −(p − 1/4)(p − 1/2)(p − 3/4),  (96)

and thus the game has two internal ESSs at p = (1/4, 3/4) and p = (3/4, 1/4) (and no pure ESSs).
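The example is easy to verify in code: build h from (92) with the stated β values, and check that h vanishes at 1/4, 1/2 and 3/4, with h′ < 0 only at the outer two roots.

```python
from math import comb

beta = [3 / 32, -13 / 96, 13 / 96, -3 / 32]   # beta_l for the example (94)
m = 4

def h(p):
    """Incentive function (92) for playing S1 over S2 at population mix p."""
    return sum(comb(m - 1, l) * beta[l] * p**l * (1 - p)**(m - 1 - l)
               for l in range(m))

def dh(p, dp=1e-7):
    """Numerical derivative of h."""
    return (h(p + dp) - h(p - dp)) / (2 * dp)

for p in (0.25, 0.5, 0.75):                   # the three roots of h
    print(f"p = {p}: h = {h(p):+.1e}, h' = {dh(p):+.4f}, "
          f"ESS = {abs(h(p)) < 1e-12 and dh(p) < 0}")
```

Only p = 1/4 and p = 3/4 pass condition 3 of Theorem 11; p = 1/2 is an equilibrium but not an ESS.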
Dynamics of multi-player matrix games I
Consider the replicator dynamics for super-symmetric 3-strategy, 3-player games. The mean payoff over the population is given by

W = Σ_{i=1}^3 Σ_{j=1}^3 Σ_{k=1}^3 aijk pi pj pk.  (97)

The payoff to an individual playing pure strategy Si in such a population is

Σ_{j=1}^3 Σ_{k=1}^3 aijk pj pk = (1/3) ∂W/∂pi,  1 ≤ i ≤ 3.  (98)

The continuous replicator equation is given by

dpi/dt = pi ((1/3) ∂W/∂pi − W).  (99)
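A minimal sketch of (97)-(99), using the ten distinct payoff entries from the example discussed below (extended to a full tensor by the super-symmetry (76)) and a simple forward-Euler step; the step size and horizon are our own choices.

```python
import itertools

# Distinct entries a111 = 0, a222 = 1.5, a333 = 0, a112 = 0.6, a113 = 0.05,
# a221 = 0, a223 = -1, a331 = 0, a332 = 1/2, a123 = 0.6, keyed by the
# sorted multiset of indices.
vals = {(1, 1, 1): 0.0, (2, 2, 2): 1.5, (3, 3, 3): 0.0,
        (1, 1, 2): 0.6, (1, 1, 3): 0.05, (1, 2, 2): 0.0,
        (2, 2, 3): -1.0, (1, 3, 3): 0.0, (2, 3, 3): 0.5, (1, 2, 3): 0.6}
a = {}
for idx, v in vals.items():
    for perm in itertools.permutations(idx):
        a[perm] = v                                # super-symmetry (76)

def W(p):
    """Mean population payoff (97)."""
    return sum(a[i, j, k] * p[i - 1] * p[j - 1] * p[k - 1]
               for i, j, k in itertools.product((1, 2, 3), repeat=3))

def f(i, p):
    """Payoff (98) to pure strategy S_i, equal to (1/3) dW/dp_i."""
    return sum(a[i, j, k] * p[j - 1] * p[k - 1]
               for j, k in itertools.product((1, 2, 3), repeat=2))

p, dt = [0.3, 0.3, 0.4], 0.01
for _ in range(2000):                              # forward-Euler step of (99)
    w = W(p)
    p = [pi + dt * pi * (f(i + 1, p) - w) for i, pi in enumerate(p)]
print([round(x, 3) for x in p], "sum =", round(sum(p), 9))
```

The trajectory stays on the simplex up to floating-point error, since Σ_i dpi/dt = 0 whenever Σ_i pi = 1.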
Dynamics of multi-player matrix games II
There are some results that occur for the dynamics on the three player
super-symmetric games that cannot occur for two players.
Two player super-symmetric games are important because they represent the
genetic case where strategies represent alleles and a game a mating, where it
can be reasonably assumed that the payoff to both players is the same.
If new strategies are allowed to enter a population sequentially and the
population is allowed to converge to a new ESS, any new strategy that can
invade the current ESS must subsequently feature in the support of the new
ESS.
This is not true in the multi-player case, e.g. with payoffs
a111 = 0, a222 = 1.5, a333 = 0, a112 = 0.6, a113 = 0.05, a221 = 0,
a223 = −1, a331 = 0, a332 = 1/2 and a123 = 0.6 which is shown in the
following figure.
Dynamics of multi-player matrix games III
Figure : Fitness contours, demonstrating that even though a new strategy may be able to invade what was an ESS, it need not be represented in the final ESS. Specifically, strategy 3 can invade (1, 2) but the outcome is (2).
Dynamics of multi-player matrix games IV
In the two player case any ESS can be reached by an appropriately ordered
sequential introduction of strategies.
This is again not true for the multi-player case: the game with payoffs
a112 = a113 = a221 = a223 = a331 = a332 = −1, a123 = 1
yields an unreachable ESS. This concept is not often discussed in static games
or in replicator dynamics, where no new strategies are introduced, but is of
particular interest in adaptive dynamics.
Dynamics of multi-player matrix games V
In this example we have ESSs with supports (1), (2), (3) and (1, 2, 3). With
sequential introduction of strategies, the ESS with support (1, 2, 3) can never
be reached.
Introduction to the multi-player war of attrition
We consider two out of a set of four related models of the multi-player war of
attrition.
There are m players that compete for a single reward of value V.
Individual i selects a random time Xi from some distribution (only) depending
upon m and V.
As in the standard war of attrition, a player receives the reward if the time that
they select is the largest, and pays a cost equivalent to the length of the time
that they spend in the contest whether they win or lose.
The multi-player war of attrition: payoffs
The payoff to player i is thus given by
E[Xi ; X1 , . . . , Xi−1 , Xi+1 , . . . , Xm ] = V − Wi if Xi > Wi , and −Xi if Xi < Wi ,   (100)

where

Wi = max(X1 , . . . , Xi−1 , Xi+1 , . . . , Xm ).   (101)
The ESS for the multi-player war of attrition I
In this game there is a unique ESS, with all players choosing a random time with distribution function

G(x) = (1 − exp(−x/V))^{1/(m−1)} ,   x ≥ 0,   (102)

illustrated in the figure below.

Figure : ESS distribution functions G(x) for V = 5 and m = 2, 4, 8, 16 (from bottom to top).
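The equalizing property behind (102) can be checked by simulation. A sketch with illustrative choices V = 5, m = 4, and test times t (none from the source): each of the m − 1 opponents samples a waiting time from G by inverse transform, and every pure waiting time t should then earn approximately the same expected payoff (here 0):

```python
# Monte Carlo sketch: against m-1 opponents who draw waiting times from
# G(x) = (1 - exp(-x/V))^{1/(m-1)}, every pure waiting time t should earn
# (approximately) the same expected payoff -- here 0.
import math
import random

def sample_G(V, m, rng):
    # Inverse-transform sampling: solve u = (1 - exp(-x/V))^{1/(m-1)} for x.
    u = rng.random()
    return -V * math.log(1.0 - u ** (m - 1))

def payoff_of_pure_t(t, V, m, n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w = max(sample_G(V, m, rng) for _ in range(m - 1))  # last opponent out
        total += (V - w) if t > w else -t                   # win vs. lose
    return total / n

V, m = 5.0, 4
for t in (0.5, 2.0, 8.0):
    print(t, round(payoff_of_pure_t(t, V, m, n=200_000), 3))
```

Each estimate is close to 0, consistent with (103): at the ESS every pure waiting time earns the ESS payoff.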
The ESS for the multi-player war of attrition II
How can we show this?
Recall that a strategy p is given by a probability distribution (p(t))t≥0 and that the support of p is S(p) = {t; p(t) > 0}.
By a generalization of Theorem 5 we must have that if p is an ESS, then for
any t ∈ S(p) and any pure strategy St (wait until time t and then give up if still
in the game) we have
E[St ; δp ] = E[p; δp ].   (103)
The ESS for the multi-player war of attrition III
Suppose that Xi : i = 2, . . . , m are random variables with distributions given
by p.
Since the Xi s are independent, we get that
P(W1 ≤ t) = (P(X1 ≤ t))^{m−1} .   (104)
Moreover, the first player is in fact effectively playing a two person war of attrition against an opponent using strategy W1 (i.e. to wait for a time W1 determined by (101) from the m − 1 randomly chosen times Xj , j = 2, . . . , m).
If E[x; y] is the payoff of the two player war of attrition, we need

E[St ; W1 ] = K1   (105)

for some constant K1 .
The ESS for the multi-player war of attrition IV
By closely following the analysis of a two player war of attrition, we obtain
that w1 , the probability density function of W1 , must be of the form

w1 (t) = −(K2 /V) exp(−t/V) for almost all t with w1 (t) ≠ 0,   (106)

where the constant K2 is chosen so that ∫_0^∞ w1 (x)dx = 1.
As for the two player case, one candidate for an ESS stands out, namely
w(t) = (1/V) exp(−t/V).   (107)
Let W be the strategy of a two player war of attrition with density function w,
and consider W1 ≠ W.
The ESS for the multi-player war of attrition V
By using arguments on the stability of ESS in the war of attrition directly
carried over from the two player case, it follows that if there is an ESS
strategy p, it must correspond to W.
From (107) it follows that P(W ≤ t) = 1 − exp(−t/V) and thus by (104), the distribution function of p is P(X1 ≤ t) = (1 − exp(−t/V))^{1/(m−1)} .
The proof that the ESS exists is more complicated and is omitted.
The multi-player WoA with strategy adjustment
The second model allows players to adjust their strategy as others drop out.
Thus with m players all individuals select a time they are prepared to wait before becoming the first to leave; we denote the time individual i selects as Xi^{(m)} . When the time

min{X1^{(m)} , X2^{(m)} , . . . , Xm^{(m)} }   (108)

is reached, one individual drops out and all remaining individuals select a new time Xi^{(m−1)} that they are prepared to wait above and beyond the current time.
We suppose that two or more individuals cannot leave simultaneously.
The strategy is an (m − 1)-tuple (pm−1 , pm−2 , . . . , p1 ), where pi specifies how long the individual will wait with i others.
The ESS candidate
An intuitive argument based on induction over m can establish the only
candidate ESS.
Suppose that there are two players remaining. We know that the optimal play
is to choose pESS , an exponential distribution with mean V.
However, we also know that the expected reward from being in this position is
zero. Thus for more than two players, the reward to get down to the final two
is zero.
Thus it is optimal to quit immediately. Of course if everyone quits
immediately then nobody wins the reward, but we do not allow simultaneous
departures.
Thus all choose 0; a randomly chosen individual leaves first and the rest update, repeating until only two players remain, when the usual exponential distribution is selected.
There is no ESS for the strategy adjustment game
Above, we have identified that P = (0, 0, . . . , 0, pESS ) is the only candidate for
an ESS. Now we will, however, illustrate that it is in fact not an ESS.
Consider a 3-player game. Then P = (0, pESS ). Let Q = (q, 0) for small
q > 0. We have
E[P; P^2 ] = 0 = E[Q; P^2 ],   (109)
E[P; PQ] = V/2 = E[Q; PQ],   (110)
E[P; Q^2 ] = 0 < V/3 − q = E[Q; Q^2 ].   (111)
Thus, P is not an ESS and consequently, there is no ESS of the game.
The game where all but one player receives the same reward (zero) can be
thought of as non-generic.
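The payoffs (109)–(111) can be reproduced by simulating the adjustment game directly. A sketch with illustrative values V = 3 and q = 0.1 (our choices), breaking ties uniformly at random as the model assumes; the function names are our own:

```python
# Monte Carlo sketch of the 3-player strategy-adjustment war of attrition.
# A strategy maps the number of remaining players to an extra waiting time;
# the focal player is player 0 and we record only its payoff.
import random

def simulate(strategies, V, rng):
    alive, now = [0, 1, 2], 0.0
    while len(alive) > 1:
        waits = {i: strategies[i](len(alive), rng) for i in alive}
        t = min(waits.values())
        out = rng.choice([i for i in alive if waits[i] == t])  # random tie-break
        now += t
        alive.remove(out)
        if out == 0:
            return 0.0 - now        # focal leaves, paying the elapsed time
    return V - now                  # focal is the last player standing

V, q = 3.0, 0.1
P = lambda k, rng: 0.0 if k == 3 else rng.expovariate(1.0 / V)   # P = (0, pESS)
Q = lambda k, rng: q if k == 3 else 0.0                          # Q = (q, 0)

rng = random.Random(1)
def estimate(focal, n=100_000):
    return sum(simulate([focal, Q, Q], V, rng) for _ in range(n)) / n

print(round(estimate(P), 3), round(estimate(Q), 3))
```

The estimates approach E[P; Q^2 ] = 0 and E[Q; Q^2 ] = V/3 − q = 0.9, so Q outperforms P in a Q-population, in line with (111).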
The multi-player war of attrition with multiple rewards
The two remaining models are equivalent to the above models, except in each
case there is more than one reward available, and the ith individual to leave
would collect reward Vm+1−i .
When individuals are allowed to update their strategy there is a unique ESS
for the case Vm < Vm−1 < . . . < V1 , where individuals play an exponential
time with mean (i − 1)(Vi−1 − Vi ) when the remaining number of individuals
is i.
With the same tie-breaking rule as above, we can deal with ties at any point,
and so the existence of a unique candidate for an ESS extends to
Vm ≤ Vm−1 ≤ . . . ≤ V1 . There may be no ESS as demonstrated above.
A more complex updating rule is required if the values of the Vi s are not
monotonic in this way.
Multiple non-monotonic rewards
Consider the final two players with remaining rewards V2 > V1 .
Clearly it is best to quit immediately, with expected reward (V1 + V2 )/2 (as
both players will do this).
Thus in the position when there are still three players, the game with rewards
V3 , V2 , V1 has the same optimal strategy as that with
V3 , (V1 + V2 )/2, (V1 + V2 )/2.
If V3 ≥ (V1 + V2 )/2 then quitting immediately is best; otherwise optimal play here is to use an exponential distribution with mean 2((V1 + V2 )/2 − V3 ) = V1 + V2 − 2V3 .
This method can be adapted in general, using backwards induction to give a
general solution.
An example
Find the candidate ESS for the multi-player war of attrition with rewards
V1 = 2, V2 = 10, V3 = 9, V4 = 0.
Here V1 < V2 , so the last two players quit immediately, receiving (V1 + V2 )/2 = 6 on average. Since V3 = 9 > 6, with three players the best strategy is still to quit immediately, with mean reward (V1 + V2 + V3 )/3 = 7. The expected reward for surviving the first contest is thus 7, so all individuals play an exponential time with mean 3 × 7 = 21, and then quit immediately after the first player leaves.
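The backwards induction of the previous slides can be codified. A sketch (function and variable names are our own) in which each stage either merges the quit-now reward into a running average or yields an exponential waiting stage:

```python
# Backwards-induction sketch for the multi-player WoA with multiple rewards.
# V[k-1] is the reward collected when leaving with k players remaining (so
# the first of m to leave gets V[m-1] and the winner gets V[0]).  Returns,
# for each k = 2..m, the mean of the exponential waiting time at that stage
# (0 means "quit immediately").
def woa_stage_means(V):
    m = len(V)
    e = V[0]                    # expected reward once one player remains
    means = {}
    for k in range(2, m + 1):
        quit_now = V[k - 1]     # reward for leaving now, with k players left
        if quit_now >= e:       # continuing not worth it: everyone quits at once
            means[k] = 0.0
            e = (quit_now + (k - 1) * e) / k   # average over random exit order
        else:                   # genuine contest: exponential waiting time
            means[k] = (k - 1) * (e - quit_now)
            e = quit_now        # contest costs dissipate the surplus
    return means

print(woa_stage_means([2, 10, 9, 0]))   # -> {2: 0.0, 3: 0.0, 4: 21.0}
```

With four players the only contested stage is the first, with mean waiting time 21, after which everyone quits as soon as possible, reproducing the example above.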