POLITECNICO DI MILANO
FACOLTÀ DI INGEGNERIA DELL’INFORMAZIONE
Corso di Laurea in Ingegneria Informatica
Researches on the calculation of strong Nash
equilibria
Relatore: Prof. Nicola Gatti
Correlatore: Dott. Marco Rocco
Tesi di laurea di:
Xu Zongque
Matr. 766475
Anno Accademico 2012 - 2013
Acknowledgements
There are a number of people without whom this thesis might not have been written, and to
whom I am greatly indebted.
My first acknowledgement is for my advisor, professor Nicola Gatti, who has supported me
during this year of thesis work without restraining me in any way, guiding me with valuable
advice at all times and letting me explore what I really wanted most. Without the meetings we
spent in the lab discussing game theory and algorithms, I might have lost sight of the objective
of the thesis.
Marco Rocco, as co-advisor of this work, not only kept close track of my progress, but also
helped me a lot with the practical part. I remember the day we sat together in the lab and you
helped me optimize the function I had just written.
I cannot forget to mention my family here. Although we were not together while I was
studying at Politecnico di Milano, you have always been my unshakable support.
Last but not least, I thank all the people who have helped me during my studies. This
work, and my life, would not be the same without you all. Thank you.
Abstract
Computing equilibria of games is an important task in computer science. A large number of
results are known for the Nash equilibrium. However, the Nash equilibrium can be adopted only
when coalitions are not an issue. When instead agents can form coalitions, the Nash equilibrium
is inadequate, and the appropriate solution concept is the strong Nash equilibrium.
This thesis is devoted to the problem of computing strong Nash equilibria in general strategic
games. We decompose the problem of finding a strong Nash equilibrium into the sub-problems of
finding a Nash equilibrium and verifying the Pareto efficiency of strategies, and we introduce and
develop several formulations for both kinds of sub-problem. For Nash equilibrium finding we adopt
the state-of-the-art NLCP and MILP formulations, while for Pareto efficiency we develop a necessary
formulation based on the Karush-Kuhn-Tucker conditions and a sufficient formulation considering
correlated strategies.
Since an exact formulation for Pareto optimality is difficult to obtain, one can first find a
Nash equilibrium and then verify its Pareto optimality. Based on this idea, we develop a Branch
and Bound algorithm and its MILP variant.
We also develop a new class of games, named MISSING, to test our algorithm, and we measure
its performance on both MISSING and GAMUT instances.
Keywords:
Strong Nash equilibrium, Game theory, Algorithm
Sommario
Searching for the equilibria of a game is a problem of great importance in artificial intelligence.
Many known results in the literature study the problem of finding a Nash equilibrium. However,
the Nash equilibrium can be adopted only when agents cannot form coalitions. When instead this
possibility exists, the Nash equilibrium concept is not appropriate. The solution concept that
captures these situations is the strong Nash equilibrium.
This thesis is devoted to the problem of finding a strong Nash equilibrium. We decompose
the problem of finding a strong Nash equilibrium into the sub-problems of finding a Nash
equilibrium and verifying the Pareto efficiency of strategies. Several formulations are introduced
and developed for both sub-problems. For Nash equilibrium finding, the techniques available in
the state of the art (NLCP and MILP) were used, while for Pareto efficiency verification we
developed a necessary formulation based on the Karush-Kuhn-Tucker conditions and a sufficient
formulation based on the analysis of correlated strategies.
Since the exact formulation for Pareto optimality is difficult to obtain, one can first find a
Nash equilibrium and then verify its Pareto optimality. Based on this idea, a Branch and Bound
algorithm and its MILP variant were developed.
A new game class, called MISSING, was also developed to evaluate our algorithm
experimentally, together with the GAMUT classes.
Keywords:
Strong Nash equilibrium, Game theory, Algorithms
Contents

1 Introduction  1
    1.1 Motivation  1
    1.2 Objectives and Contributions  1
    1.3 Structure of the Thesis  2

2 Game Theory: an Overview  3
    2.1 Normal form strategy games  3
        2.1.1 Self-interested agents  3
        2.1.2 Definitions and examples  3
        2.1.3 Game strategies  5
    2.2 Solution concepts  6
        2.2.1 Pareto optimality  6
        2.2.2 Nash equilibrium  6
        2.2.3 Strong Nash equilibrium  7
        2.2.4 Maxmin and minmax strategies  9
    2.3 Correlated Strategies  10
    2.4 State-of-art Algorithms for Nash finding in 2-player games  10
3 Approaches to Strong Nash Equilibrium  12
    3.1 Developing a formulation  12
        3.1.1 General idea  12
        3.1.2 Formulations of Nash Equilibrium  13
        3.1.3 Formulations of Pareto optimality  15
    3.2 Verification for Strong Nash equilibrium  18
        3.2.1 General idea  18
        3.2.2 Pareto verification algorithms  19

4 Iterative algorithms for strong Nash equilibrium  22
    4.1 Basic Branch and bound algorithm  22
    4.2 Mixed integer linear programming to strong Nash  25
    4.3 Ad-hoc implementation for Pareto verification in 2–player games  26
5 Tests and results  31
    5.1 Test Setting and objectives  31
    5.2 Game instances  32
        5.2.1 GAMUT instances  32
        5.2.2 MISSING instances  32
        5.2.3 Generation configurations  36
    5.3 Test results  36
    5.4 PNS adoption for branch and bound algorithm  38

6 Conclusions and future work  40
List of Figures

3.1 Example of two agent game which is not enough to consider pure multilateral deviations  13
3.2 The Pareto frontier of game in Figure 3.1  13
3.3 Example of strong Nash equilibrium on Pareto frontier  13
3.4 The Pareto frontier of game in Figure 3.3  14
4.1 Example of two–agent game  24
4.2 Pareto frontier of game in Fig. 4.1  24
A.1 Average compute time of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  50
A.2 Average number of AMPL calls per instance of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  51
A.3 Average number of verification calls per instance of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  52
A.4 Average percentage of time taken by SNE verification of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  53
A.5 Average compute time of MIP Nash per call of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  54
A.6 Average results obtained with Algorithm 3 with MISSING instances in which there is no SNE: black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  55
A.7 Average compute time of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  56
A.8 Average compute time of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  57
A.9 Average number of iterations of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  58
A.10 Average number of iterations of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  59
A.11 Average compute time of MIP Nash per iteration of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  60
A.12 Average compute time of MIP Nash per iteration of Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  61
A.13 Average percentage of time taken by SNE verification in Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  62
A.14 Average percentage of time taken by SNE verification in Iterated MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  63
A.15 Average results obtained with Iterated MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.  64
A.16 Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  65
A.17 Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  66
A.18 Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 3)  67
A.19 Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 4)  68
A.20 Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  69
A.21 Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  70
A.22 Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 3)  71
A.23 Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 4)  72
A.24 Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  73
A.25 Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  74
A.26 Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 3)  75
A.27 Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 4)  76
A.28 Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  77
A.29 Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  78
A.30 Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 3)  79
A.31 Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 4)  80
A.32 Average results obtained with Iterated Box MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 1)  81
A.33 Average results obtained with Iterated Box MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs (Part 2)  82
List of Tables

2.1 Utility matrix of Prisoner’s dilemma  4
2.2 Modified utility matrix of Prisoner’s dilemma  5
2.3 Example of strong Nash Equilibrium  7
2.4 Example of game that does not admit strong Nash Equilibrium  8
5.1 Example game admitting Nash equilibrium with support 2  33
5.2 Example game admitting Nash equilibrium with support 4  33
5.3 Example game admitting only one Nash equilibrium with support 4  33
5.4 Example game admitting only one Nash equilibrium with support 2  34
5.5 Example game admitting one Nash equilibrium and one strong Nash equilibrium with support 2  34
5.6 MISSING instance that admits mixed Nash equilibrium  35
A.1 Results of Algorithm 3 with Nash equilibrium finding oracle MIP Nash with 25x25 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).  44
A.2 Results of Algorithm 3 with Nash equilibrium finding oracle MIP Nash with 50x50 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).  45
A.3 Results of Iterated Box MIP 2StrongNash with 25x25 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).  46
A.4 Results of Iterated Box MIP 2StrongNash with 50x50 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).  47
A.5 Results of Iterated MIP 2StrongNash with 25x25 GAMUT instances: instances admitting SNEs (Y), instances admitting mixed strategy SNE (mY), instances admitting only mixed strategy SNE (omY), compute time (time), compute time when there is an SNE (time—Y), compute time when there is no SNE (time—N).  48
A.6 Results of Iterated MIP 2StrongNash with 50x50 GAMUT instances: instances admitting SNEs (Y), instances admitting mixed strategy SNE (mY), instances admitting only mixed strategy SNE (omY), compute time (time), compute time when there is an SNE (time—Y), compute time when there is no SNE (time—N).  49
1 Introduction
1.1 Motivation
As one of the most important solution concepts in game theory, the Nash equilibrium is an
interesting research topic in economics, social science, and computer science alike. Many
algorithms have been developed to compute a Nash equilibrium since this solution concept was
proposed in 1951 [20]. The Lemke-Howson algorithm [18] and the Lemke algorithm [17] are
state-of-the-art algorithms for finding a Nash equilibrium in two-player games.
Porter-Nudelman-Shoham (PNS) [22] and Sandholm-Gilpin-Conitzer (SGC, also known as MILP or
MIP Nash) [24] are two more recently developed algorithms. The former, based on enumerating
the supports of the strategies, can be applied to both 2-player and n-player games; the latter
provides different formulations for finding Nash equilibria in 2-player games. Other algorithms
for n-player games include simplicial subdivision [27], Govindan-Wilson [11], and
Elzen-Talman [5]. The work of Feige-Talman [7] is an interesting but different approach, which
reduces the problem of finding an ε-Nash equilibrium in an n-player game to the problem of
finding one in a 2-player game. A more recent improvement of Porter-Nudelman-Shoham
(LS-PNS) [4] combines PNS with local search methods, taking the support space as the search
space. In [9], another local search method was introduced, which searches over the vertices of
the best response polyhedron.
Apart from these algorithms developed in the computer science field, many refinements of the
Nash equilibrium have been proposed to better model real-world problems. The strong Nash
equilibrium is one of these refined solution concepts [1]. A strong Nash equilibrium is a Nash
equilibrium for which no coalition of players has a joint deviation that improves the payoff of
each member of the coalition [21]. In the literature, algorithms can be found that search for
pure-strategy strong Nash equilibria in specific classes of games, e.g., congestion
games [13, 14, 23], connection games [6], and maxcut games [10]; however, how to effectively
find them in generic games is still unknown. This makes it worthwhile to develop new
algorithms to compute strong Nash equilibria.
1.2 Objectives and Contributions
This thesis covers the development of a new algorithm to compute strong Nash equilibria in
generic games, as well as its experimental evaluation. Since no implementation of an algorithm
computing a strong Nash equilibrium in general games is available in the literature yet, it is
not clear which kinds of game instances should be used to test our algorithm. GAMUT [19] game
instances are a well-known testbed for algorithms computing Nash equilibria; however, it is
still necessary to verify whether GAMUT instances are also suitable for testing algorithms
computing strong Nash equilibria. If the answer is negative, a new type of game instance should
be developed to test our algorithm.
The main objectives of this thesis are:
• To review the concepts and properties of the strong Nash equilibrium.
• To develop formulations for solving the strong Nash equilibrium computation problem.
• To propose, implement, and test an algorithm for computing strong Nash equilibria in
generic strategy games.
• To develop new classes of games suitable for this testing, if the currently available ones
are not sufficient.
1.3 Structure of the Thesis
The remainder of this thesis is organized as follows. In Chapter 2 we briefly introduce the
concepts, definitions, and properties from game theory that are necessary to understand the
rest of the work. In Chapter 3 we present the formulations we considered for solving the strong
Nash equilibrium computation problem, together with the idea of strong Nash equilibrium
verification. In Chapter 4 we present the two main algorithms for solving the strong Nash
problem, which constitute the most important part of this thesis, while in Chapter 5 we present
the experimental results. Finally, in Chapter 6, we conclude our work and outline future
directions.
2 Game Theory: an Overview
In this chapter, we introduce the essential preliminaries for understanding the rest of the
thesis. A more detailed treatment can be found in [26].
2.1 Normal form strategy games

2.1.1 Self-interested agents
Game theory is the mathematical study of interaction among independent, self-interested agents.
It has been widely used in economics, political science, and decision making, as well as in
logic and biology. There are two major branches of game theory: cooperative game theory and
non-cooperative game theory. In cooperative game theory, agents align with each other to
achieve a certain goal, while non-cooperative game theory puts more emphasis on the individual
agents.
The agents studied in non-cooperative game theory are usually called self-interested agents;
this is usually the type of player considered in a normal form game. The name 'self-interested'
does not necessarily mean that each agent wants to cause harm to others or cares only about
itself. It means that each agent has its own description of the states of the world.
A general way to model an agent's interest is utility theory. This theoretical approach
allows us to quantify an agent's degree of preference across a set of available alternatives.
More specifically, we use the utility function, a mapping from states of the world to real
numbers, to model the agent's preferences. When the agent is uncertain about which state of the
world he faces, his utility is defined as the expected value of his utility function with
respect to the appropriate probability distribution over states.
2.1.2 Definitions and examples
The normal form is one of the most used representations in game theory. A game written in
normal form amounts to a representation of every player's utility for every state of the world,
in the case where the states of the world depend only on the players' combined actions.
Definition 2.1.1. A normal form strategy game of n players is a tuple (N, A, u), where:
• N = {1, 2, . . . , n} is the set of players.
• A = (A1 , A2 , . . . , An ), where Ai denotes the set of actions that can be played by player i.
• u = (u1 , u2 , . . . , un ), where ui ∶ A ↦ R denotes the utility function of player i.
A natural way to represent games is via an n-dimensional matrix, called the utility matrix.
Table 2.1 shows an example for a 2-player game. In this example, each row of the utility matrix
corresponds to a possible action of player 1, and each column corresponds to a possible action
of player 2. Each cell contains two numbers, representing the utilities of player 1 and
player 2 respectively.
The prisoner's dilemma is a classic example used to illustrate strategy games; the description
of the game is as follows:
Two men are arrested, but the police does not have enough information for a conviction. The police separates the two men, and offers both the same deal: if one
testifies against his partner (defects/betrays), and the other remains silent (cooperates with/assists his partner), the betrayer goes free and the one that remains silent
gets a one-year sentence. If both remain silent, both are sentenced to only one month
in jail on a minor charge. If each ‘rats out’ the other, each receives a three-month
sentence. Each prisoner must choose either to betray or remain silent; the decision
of each is kept secret from his partner. What should they do?
                                agent 2
                          defect      cooperate
agent 1   defect          -3,-3       0,-12
          cooperate       -12,0       -1,-1

Table 2.1: Utility matrix of Prisoner’s dilemma
It is not difficult to understand the game by looking at the utility matrix. In the example of
the prisoner's dilemma, the set of players is N = {1, 2}, which means there are 2 players in the
game. The set of action profiles is A = {defect, cooperate} × {defect, cooperate}: both
player 1 and player 2 have 2 available actions, defect and cooperate. The states of the game
can also be described by A; for example, (defect, defect) means that both player 1 and player 2
defect. The values of the utility functions can be read from the matrix: for example, if
player 1 defects and player 2 also defects, the utility of each player is −3. We call −3 the
expected utility of player 1 and player 2 when the strategy profile (defect, defect) is played,
and we usually call (−3, −3) the outcome of the strategy profile (defect, defect). The term
social welfare denotes the sum of the expected utilities of all the players; in this example it
is −6 for (defect, defect).
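As a small illustration (our own sketch, not part of the thesis implementation), the prisoner's dilemma can be stored as two utility matrices and the social welfare of a pure profile computed by summation; the 0/1 encoding of defect/cooperate is a convention we introduce here:

```python
# The Prisoner's dilemma of Table 2.1 as two utility matrices, indexed as
# U[row action][column action], with 0 = defect and 1 = cooperate.
U1 = [[-3, 0], [-12, -1]]   # utilities of agent 1 (row player)
U2 = [[-3, -12], [0, -1]]   # utilities of agent 2 (column player)

def social_welfare(a1, a2):
    """Sum of the players' utilities at the pure strategy profile (a1, a2)."""
    return U1[a1][a2] + U2[a1][a2]

print(social_welfare(0, 0))  # -6: both players defect
```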
Knowing the absolute value of the expected utility of a specific strategy profile is of little
use by itself, because expected utilities represent preferences between actions: what matters
are the differences. For example, for player 1 in the prisoner's dilemma, (defect, defect) is
better than (cooperate, defect) because in the former case his payoff is −3, which is better
than the −12 of the latter case. This observation is formalized by the following lemma:
Lemma 2.1.2. Two strategic games are equivalent if the utility matrix of one can be obtained
by applying the same affine transformation to every entry of the utility matrix of the other.
Applying Lemma 2.1.2 we can modify the utility matrix of the prisoner's dilemma into Table 2.2.
Some algorithms solving for Nash equilibria in normal form games require all input utilities to
be positive. Thanks to Lemma 2.1.2, this requirement involves no loss of generality.
                                agent 2
                          defect      cooperate
agent 1   defect          10,10       13,1
          cooperate       1,13        12,12

Table 2.2: Modified utility matrix of Prisoner’s dilemma
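The affine transformation of Lemma 2.1.2 is easy to check mechanically. The sketch below (our own illustration; the map u ↦ a·u + b with a = 1 and b = 13 is one choice) turns Table 2.1 into Table 2.2:

```python
# Apply the affine map u -> a*u + b (with a > 0) to every entry of a utility matrix.
pd = [[(-3, -3), (0, -12)], [(-12, 0), (-1, -1)]]  # Table 2.1

def affine(game, a, b):
    """Entry-wise affine transformation of a game's utility matrix."""
    return [[tuple(a * u + b for u in cell) for cell in row] for row in game]

shifted = affine(pd, 1, 13)
print(shifted)  # [[(10, 10), (13, 1)], [(1, 13), (12, 12)]], i.e. Table 2.2
```

By Lemma 2.1.2 the shifted game is equivalent to the original, and all its utilities are positive, as some Nash-finding algorithms require.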
2.1.3 Game strategies
Strategies correspond to the choices made by the players. One kind of strategy is to select a
single action and play it; such a strategy is usually called a pure strategy. We call a choice
of pure strategy for each agent a pure strategy profile.
Another kind of strategy randomizes over the set of available actions according to some
specific probability distribution; such a strategy is called a mixed strategy. We define the
mixed strategy profiles of a normal form strategy game as follows.
Definition 2.1.3. The set of mixed strategy profiles of a normal form game (N = {1, 2, . . . , n}, A =
(A1 , A2 , ..., An ), u) is the set S = S1 × S2 × ⋅ ⋅ ⋅ × Sn , where Si is the set of all probability
distributions over Ai . A mixed strategy profile is written as x = (x1 , x2 , . . . , xn ), where xi ∈ Si ,
a vector of ∣Ai ∣ elements, is called the mixed strategy of player i.
In this thesis, we use a = (a1 , . . . , an ) to represent a pure strategy profile, with ai the
action played by player i, x = (x1 , x2 , . . . , xn ) to represent a mixed strategy profile and s =
(s1 , s2 , . . . , sn ) to represent a general strategy profile.
Definition 2.1.4. The support of a mixed strategy xi for a player i is the set of pure strategies
{ai ∣xi (ai ) > 0}
We call a strategy fully mixed if it has full support (i.e., every action has a non-zero probability). A pure strategy is a special case of a mixed strategy whose support contains only one
action.
The expected utilities of pure strategy profiles are given directly by the utility matrix, while those of
mixed strategy profiles are not: they can be calculated by combining the probability distributions.
Definition 2.1.5. Given a normal form strategy game (N, A, u) and a mixed strategy profile x
of this game, the expected utility for player i ∈ N is given as
ui(x) = ∑_{a∈A} ui(a) ∏_{j=1}^{n} xj(aj)
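The formula of Definition 2.1.5 can be computed directly by enumerating the joint actions. The following is a minimal Python sketch; the utility table and the uniform mixed profile are illustrative data taken from the modified prisoner's dilemma of Table 2.2 (actions encoded as 0 = defect, 1 = cooperate).

```python
from itertools import product

def expected_utility(i, utils, x):
    """Expected utility of player i (Definition 2.1.5):
    u_i(x) = sum over joint actions a of u_i(a) * prod_j x_j(a_j).
    utils[a] is the utility tuple at joint action a; x[j][k] is the
    probability that player j plays his k-th action."""
    total = 0.0
    for a in product(*[range(len(xj)) for xj in x]):
        p = 1.0
        for j, aj in enumerate(a):
            p *= x[j][aj]          # probability of the joint action a
        total += utils[a][i] * p
    return total

# Table 2.2 (modified prisoner's dilemma): actions 0 = defect, 1 = cooperate
utils = {(0, 0): (10, 10), (0, 1): (13, 1), (1, 0): (1, 13), (1, 1): (12, 12)}
x = [[0.5, 0.5], [0.5, 0.5]]       # both agents randomize uniformly
print(expected_utility(0, utils, x))   # 9.0
```

With the fully mixed uniform profile, each player's expected utility is the average of his four payoffs, (10 + 13 + 1 + 12)/4 = 9.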
Another useful concept in game theory is the regret: it represents a player's loss from playing
an action ai rather than his best response to the other players' actions. More formally:
Definition 2.1.6. An agent i's regret for playing an action ai, if the other agents adopt the action
profile a−i, is defined as

[max_{a′i∈Ai} ui(a′i, a−i)] − ui(ai, a−i).
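For a 2-player game the regret of Definition 2.1.6 is a one-line maximization. Below is a hedged sketch on the modified prisoner's dilemma of Table 2.2; the dictionary encoding and the hard-coded two actions per player are assumptions for the example only.

```python
def regret(i, utils, a_i, a_minus_i):
    """Regret of agent i for playing a_i against a_minus_i (Definition 2.1.6):
    [max over a'_i of u_i(a'_i, a_-i)] - u_i(a_i, a_-i).
    utils[(a_1, a_2)] gives the utility pair of a 2-player game."""
    def u(ai):
        joint = (ai, a_minus_i) if i == 0 else (a_minus_i, ai)
        return utils[joint][i]
    n_actions = 2                       # both agents have two actions here
    best = max(u(ai) for ai in range(n_actions))
    return best - u(a_i)

# Table 2.2: actions 0 = defect, 1 = cooperate
utils = {(0, 0): (10, 10), (0, 1): (13, 1), (1, 0): (1, 13), (1, 1): (12, 12)}
print(regret(0, utils, 1, 0))   # cooperating against defect: 10 - 1 = 9
```

Playing the best response (defect against defect) gives regret 0.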
2.2 Solution concepts
Solution concepts are subsets of strategy profiles. In the decision problem of a single agent, one key
notion is the optimal strategy, that is, a strategy that maximizes the agent's expected utility
in the environment in which the agent operates. For each agent, the 'environment' includes not
only the world the agent lives in, which is stochastic and partially observable, but also the other
agents. Solution concepts are developed by game theorists to help us find meaningful strategy
profiles, which are 'optimal' in some sense.
2.2.1 Pareto optimality
In a Pareto efficient economic allocation, no one can be made better off without making at least
one individual worse off. Given an initial allocation of utilities among a set of individuals, a
change to a different allocation that makes at least one individual better off without making any
other individual worse off is called a Pareto improvement.
Definition 2.2.1. Strategy profile x Pareto dominates strategy profile x′ if ∀i ∈ N, ui(x) ≥ ui(x′),
and there exists some j ∈ N such that uj(x) > uj(x′).
Definition 2.2.2. Strategy profile x is Pareto optimal, or strictly Pareto efficient, if there does
not exist another strategy profile x′ ∈ S that Pareto dominates x.
Pareto domination is a way of saying that some strategy profiles are better than others. If
strategy profile s Pareto dominates strategy profile s′, we are sure by the definition that every agent
weakly prefers s to s′, and at least one of them strictly prefers it. We can consider this preference
a partial ordering over strategy profiles. Since the utilities are continuous over the compact set of
strategy profiles, every game has at least one Pareto optimal strategy profile.
Another concept related to Pareto optimality is weak Pareto efficiency. A strategy
profile is weakly Pareto efficient if there is no other strategy profile whose outcome would make every
individual strictly better off.
Definition 2.2.3. Strategy profile x is weakly Pareto efficient if there does not exist another
strategy profile x′ ∈ S such that ∀i ∈ N, ui(x′) > ui(x).
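Definitions 2.2.1 and 2.2.2 translate directly into comparisons of utility vectors. A minimal sketch, restricted to the finitely many pure outcomes of a game (the Pareto frontier over mixed strategies is a larger object, treated in Chapter 3); the outcome list is the modified prisoner's dilemma of Table 2.2.

```python
def pareto_dominates(u_x, u_y):
    """True if utility vector u_x Pareto dominates u_y (Definition 2.2.1):
    no player is worse off and at least one is strictly better off."""
    return (all(a >= b for a, b in zip(u_x, u_y))
            and any(a > b for a, b in zip(u_x, u_y)))

def pareto_optimal(u_x, outcomes):
    """True if no outcome in `outcomes` Pareto dominates u_x (Definition 2.2.2)."""
    return not any(pareto_dominates(u, u_x) for u in outcomes)

# Pure outcomes of the modified prisoner's dilemma (Table 2.2)
outcomes = [(10, 10), (13, 1), (1, 13), (12, 12)]
print(pareto_optimal((10, 10), outcomes))   # False: dominated by (12, 12)
print(pareto_optimal((12, 12), outcomes))   # True
```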
2.2.2 Nash equilibrium
The Nash equilibrium is the most influential solution concept in game theory. If an agent
knew how the other agents were going to play a strategy game, choosing his own 'optimal' strategy
would be simple: he should play a strategy that maximizes his own expected utility. This leads to
the definition of the best response:
Definition 2.2.4. Player i’s best response to the strategy profile s−i is a mixed strategy s∗i such
that ui (s∗i , s−i ) ≥ ui (si , s−i ) for all strategies si ∈ Si , where s−i = (s1 , . . . , si−1 , si+1 , . . . , sn ) is a
strategy profile without player i’s strategy.
Of course, in general an agent will not know what strategy other agents would play. Thus,
the best response is not a solution concept. But the idea is leveraged and leads to the definition
of Nash equilibrium:
Definition 2.2.5. A strategy profile s = (s1 , . . . , sn ) is a Nash equilibrium if, for all agents i, si
is a best response to s−i .
A Nash equilibrium is stable, that is, no agent would want to change his strategy if he knew
what strategies the other agents are following.
We can divide Nash equilibria into two categories, strict and weak, depending on whether or
not every agent’s strategy constitutes a unique best response to the other agents’ strategies.
Definition 2.2.6. A strategy profile s = (s1 , . . . , sn ) is a strict Nash equilibrium if, for all agents
i and for all strategies s′i ≠ si , ui (si , s−i ) > ui (s′i , s−i ).
Definition 2.2.7. A strategy profile s = (s1 , . . . , sn ) is a weak Nash equilibrium if, for all agents
i and for all strategies s′i ≠ si , ui (si , s−i ) ≥ ui (s′i , s−i ).
One important result about the Nash equilibrium is that every finite strategy game has at least
one Nash equilibrium; the proof can be found in [16].
An approximation of the Nash equilibrium is the concept of ε-Nash equilibrium. In this
concept, players accept any strategy whose expected utility is within ε of that of the best response.
More formally:
Definition 2.2.8. Fix ε > 0. A strategy profile s = (s1, . . . , sn) is an ε-Nash equilibrium if, for all
agents i and for all strategies s′i ≠ si, ui(si, s−i) ≥ ui(s′i, s−i) − ε.
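For pure strategy profiles of a 2-player game, the smallest ε for which a profile is an ε-Nash equilibrium is the largest gain any agent can obtain by a unilateral deviation. A hedged sketch on the modified prisoner's dilemma of Table 2.2; the two-action encoding is an assumption of the example.

```python
def max_gain(utils, profile):
    """Largest gain any single agent can obtain by a unilateral pure deviation
    from `profile` in a 2-player game; the profile is an eps-Nash equilibrium
    (Definition 2.2.8) iff this gain is at most eps."""
    a1, a2 = profile
    gain1 = max(utils[(b, a2)][0] for b in range(2)) - utils[profile][0]
    gain2 = max(utils[(a1, b)][1] for b in range(2)) - utils[profile][1]
    return max(gain1, gain2)

# Table 2.2: actions 0 = defect, 1 = cooperate
utils = {(0, 0): (10, 10), (0, 1): (13, 1), (1, 0): (1, 13), (1, 1): (12, 12)}
print(max_gain(utils, (0, 0)))   # 0: (defect, defect) is a Nash equilibrium
print(max_gain(utils, (1, 1)))   # 1: (cooperate, cooperate) is eps-Nash only for eps >= 1
```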
2.2.3 Strong Nash equilibrium
As mentioned in Section 2.2.2, Nash equilibria are stable. However, this is no longer true if agents
can communicate before playing a game. Consider the game in Table 2.3.

                     agent 1
                   a1      a2
agent 2   a3      4,4     0,0
          a4      0,0     2,2

Table 2.3: Example of strong Nash Equilibrium

Restricting attention to pure strategy profiles, there are two Nash equilibria, (a1, a3) and (a2, a4):
neither agent wants to deviate unilaterally from either of them, since his expected utility would
decrease. Now let the two agents communicate before they play. Since both agents get a higher
expected utility in (a1, a3) than in (a2, a4), they can agree to play the former. This consequence
is captured by the strong Nash equilibrium solution concept: in fact, (a1, a3) is a strong Nash
equilibrium of the example game, while (a2, a4) is not. A strong Nash equilibrium
is a Nash equilibrium in which no coalition, taking the actions of its complement as given, can
cooperatively deviate in a way that benefits all of its members. To simplify the definition of strong
Nash equilibrium, we first introduce some notation.
Notation 2.2.9. Let C be a coalition of game (N, A, u) and let x be a mixed strategy profile of
this game. We denote by xN∖C = (xj1, . . . , xjp), where N ∖ C = {j1, . . . , jp}, the mixed strategy
profile of the players in N ∖ C, and by xC = (xk1, . . . , xkq), where C = {k1, . . . , kq}, the mixed
strategy profile of the players in C.
Notation 2.2.10. Let C be a coalition of game (N, A, u), where N = {1, . . . , n}, A = (A1, . . . , An),
u = (u1, . . . , un). We denote by (C, AC, uC^{xN∖C}) the game with players j ∈ C ⊆ N, where
AC = ∏j∈C Aj and uC^{xN∖C} = (uj^{xN∖C})j∈C, with uj^{xN∖C}(xj, xC−j) = uj(xj, xC−j, xN∖C).
Definition 2.2.11. A mixed strategy profile x is a strong Nash equilibrium of a game (N, A, u)
if, for every possible coalition C ⊆ N, there is no other strategy in game (C, AC, uC^{xN∖C}) that
yields a better utility to all players in C; formally, ∀C ⊆ N, ∄x′C ≠ xC in game (C, AC, uC^{xN∖C})
such that ∀i ∈ C, ui(x′C) > ui(xC).
Here we present some properties of strong Nash equilibria as lemmas, since they are closely
related to the core idea of the algorithm we developed.
Lemma 2.2.12. For any strategy game (N, A, u), every strong Nash equilibrium x is also a
Nash equilibrium.
Proof. Consider a strong Nash equilibrium x = (x1, . . . , xn) of game (N, A, u) and a coalition
C ⊆ N with ∣C∣ = 1; without loss of generality, choose C = {1}. From the definition of
strong Nash equilibrium, we know that ∄x′1 ≠ x1 such that u1^{xN∖{1}}(x′1) > u1^{xN∖{1}}(x1),
which means that the strategy of player 1 is a best response to the strategy profile of the other
players. Repeating the reasoning for all coalitions of size 1, we obtain that the strategy of every
single player is a best response to the strategy profile of the other players. By the definition of
Nash equilibrium, x is also a Nash equilibrium of game (N, A, u).
Lemma 2.2.13. If x is a strong Nash equilibrium of game (N, A, u), then ∀C ⊆ N, xC is Pareto
optimal in game (C, AC, uC^{xN∖C}).
Proof. The proof is straightforward: applying the definition of strong Nash equilibrium to every
possible coalition directly yields the result.
Lemma 2.2.14. A strategy profile x of a game (N, A, u) is a strong Nash equilibrium if all the
following conditions hold:
(1) x is a Nash equilibrium of game (N, A, u);
(2) for every possible coalition C ⊆ N with ∣C∣ ≥ 2, xC is a Pareto optimal strategy profile of
game (C, AC, uC^{xN∖C}).
Proof. From point (1), we get that ∀C ⊆ N with ∣C∣ = 1, ∄x′C ≠ xC in game (C, AC, uC^{xN∖C})
such that ∀i ∈ C, ui(x′C) > ui(xC). From point (2), we get that ∀C ⊆ N with ∣C∣ ≥ 2, ∄x′C ≠ xC
in game (C, AC, uC^{xN∖C}) such that ∀i ∈ C, ui(x′C) > ui(xC). So x is a strong Nash equilibrium
of game (N, A, u) by definition.
Lemma 2.2.15. Strong Nash equilibria may not exist in a strategy game.
Proof. A counterexample is given by Table 2.4, where agent 1 chooses the column (a1 or a2),
agent 2 chooses the row (a3 or a4), and agent 3 chooses between the left and the right matrix.
In this game, all Nash equilibria have outcome (0, 0, 0), and none of them is robust to deviations
by coalitions of two players.

                   agent 1                               agent 1
                a1        a2                          a1        a2
agent 2  a3   0,0,0    −2,1,1        agent 2  a3   1,1,−2   −2,1,1
         a4   1,−2,1   1,1,−2                 a4   1,−2,1    0,0,0

Table 2.4: Example of a game that does not admit a strong Nash Equilibrium
2.2.4 Maxmin and minmax strategies
The maxmin strategy of a player i in an n-player strategy game is a strategy that maximizes
i's worst-case pay-off, i.e., his pay-off in the situation where all the other players happen to play
the strategies that cause the greatest harm to i. The maxmin value of the game for player i is
the minimum pay-off guaranteed by a maxmin strategy.
Definition 2.2.16. The maxmin strategy for player i is arg max_{si} min_{s−i} ui(si, s−i), and the
maxmin value for player i is max_{si} min_{s−i} ui(si, s−i).
The maxmin strategy can be understood through the following temporal intuition. The
maxmin strategy is i’s best choice when first i must commit to a strategy, and then the remaining
players −i observe this strategy and choose their own strategies to minimize i’s expected payoff.
While it may not seem reasonable to assume that the other players would be solely interested
in minimizing i’s utility, it is the case that if i plays a maxmin strategy and the other players
play arbitrarily, i will still receive an expected payoff of at least his maxmin value. This means
that the maxmin strategy is a sensible choice for a conservative player who wants to maximize
his expected utility without having to make any assumptions about the other agents, such as
that they will act rationally according to their own interests, or that they will draw their action
choices from known distributions.
The minmax strategy and minmax value play a dual role to their maxmin counterparts. In
two-player games the minmax strategy for player i against player −i is a strategy that keeps
the maximum payoff of −i at a minimum, and the minmax value of player −i is that minimum.
This is useful when we want to consider the amount that one player can punish another without
regard for his own payoff.
Definition 2.2.17. In a 2-player game, the minmax strategy for player i against player −i is
arg minsi maxs−i u−i (si , s−i ), and player −i’s minmax value is
minsi maxs−i u−i (si , s−i ).
In n-player games with n > 2, defining player i’s minmax strategy against player j is a bit
more complicated. This is because i will not usually be able to guarantee that j achieves minimal
payoff by acting unilaterally. However, if we assume that all the players other than j choose to
cooperate against j, we can define minmax strategies for the n-player case.
Definition 2.2.18. In an n-player game, the minmax strategy for player i against player j ≠ i is i's
component of the mixed-strategy profile s−j in the expression arg min_{s−j} max_{sj} uj(sj, s−j), where
−j denotes the set of players other than j. The minmax value for player j is min_{s−j} max_{sj} uj(sj, s−j).
As with the maxmin value, we can give a temporal intuition for the minmax value. Imagine that
the agents −i must commit to a strategy profile, to which i can then play a best response. Player
i receives his minmax value if the players −i choose their strategies in order to minimize i's expected
utility after he plays his best response.
Since neither an agent’s maxmin strategy nor his minmax strategy depend on the strategies
that the other agents actually choose, the minmax and maxmin strategies give rise to solution
concepts in a straightforward way. We call a mixed strategy profile s = (s1 , . . . , sn ) a maxmin
strategy profile of a given game if s1 is a maxmin strategy for player 1, s2 is a maxmin strategy
for player 2, and so on.
In 2-player zero-sum games, there is a very tight connection between minmax strategy,
maxmin strategy and Nash equilibrium.
Theorem 2.2.19. In any finite, two-player zero-sum game, in any Nash equilibrium each player
receives a payoff that is equal to both his maxmin value and minmax value.
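Theorem 2.2.19 can be checked numerically: the maxmin value of each player in a 2-player zero-sum game is computable by a linear program, max v subject to (x^T A)_j ≥ v for every column j and x in the simplex. A sketch assuming `scipy` is available; the 2×2 payoff matrix below is illustrative data.

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_value(A):
    """Maxmin value of the row player in a zero-sum game with payoff matrix A:
    solve max_x min_j (x^T A)_j as a linear program over (x, v)."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so use -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v <= (x^T A)_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), [[0.0]]])  # probabilities sum to 1
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[-1]

A = np.array([[3., -1.], [-2., 1.]])
v1 = maxmin_value(A)       # row player's maxmin value: 1/7
v2 = maxmin_value(-A.T)    # column player's maxmin value in his own matrix
# v1 + v2 is approximately 0, as Theorem 2.2.19 predicts for zero-sum games
```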
2.3 Correlated Strategies
The strategy profiles discussed so far are all based on the assumption that players can communicate
only before they play a game. If we relax this assumption and allow players to communicate both
before and during the game, we obtain many more strategy profiles, which cannot be described
by mixed strategy profiles.
Consider the prisoner's dilemma game described in Table 2.1, and suppose each prisoner can
observe the other's choice when they play the game. For example, the first prisoner can choose
defect or cooperate, always opposite to the choice of the second player, while the second player
chooses cooperate with probability 1/2 and defect with probability 1/2. The final outcome will be
(defect, cooperate) with probability 1/2 and (cooperate, defect) with probability 1/2.
The above example can also be considered in another way. Imagine that the two players
can observe the result of a fair coin flip, and they choose (defect, cooperate) if heads and
(cooperate, defect) if tails. This new kind of strategy profile is called a correlated strategy profile.
The expected pay-off of each player in the above example is 0.5 ⋅ (−12) + 0.5 ⋅ 0 = −6, which is
impossible to obtain with mixed strategies alone.
Generally speaking, we can represent a correlated strategy by a vector x whose length equals
the number of possible outcomes. The above example can be represented as (0, 0.5, 0, 0.5)T.
It is obvious that for each mixed strategy profile we can always find a corresponding correlated
strategy profile, by setting the probability of each outcome equal to the product of the probabilities
that the agents play the corresponding actions.
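The mixed-to-correlated conversion above is just a product of probabilities over outcomes, as in this small sketch (the outcome ordering, lexicographic over joint actions, is an assumption of the example):

```python
from itertools import product

def to_correlated(x):
    """Correlated strategy vector corresponding to a mixed strategy profile:
    the probability of each outcome is the product of the probabilities of
    the corresponding actions (outcomes in lexicographic order)."""
    vec = []
    for a in product(*[range(len(xi)) for xi in x]):
        p = 1.0
        for i, ai in enumerate(a):
            p *= x[i][ai]
        vec.append(p)
    return vec

x = [[0.5, 0.5], [0.5, 0.5]]       # independent uniform mixing
print(to_correlated(x))            # [0.25, 0.25, 0.25, 0.25]
```

The converse fails: a correlated strategy such as the one in the example, which puts probability 0 on (defect, defect) but positive probability on both (defect, cooperate) and (cooperate, defect), cannot be written as such a product of independent distributions.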
2.4 State-of-the-art Algorithms for Nash finding in 2-player games
In the literature, the most important algorithms are Lemke-Howson [18], Porter-Nudelman-Shoham
(PNS) [22], Sandholm-Gilpin-Conitzer (SGC, or MIP Nash, or MILP) [24] and local search
methods (LS-PNS) [4]. Apart from Lemke-Howson, the other three algorithms require the
resolution of mathematical programming problems. Here we introduce PNS, SGC and LS-PNS
because they will be used in the algorithm discussed in Section 4.1.
PNS is a Nash finding algorithm based on the enumeration of supports. The enumeration
starts from small, balanced supports; for each support S = (S1, . . . , Sn), the algorithm solves
the following feasibility problem:

∑_{s−i∈S−i} ps−i ui(aij, s−i) = vi      ∀i ∈ N, aij ∈ Si      (2.1)
∑_{s−i∈S−i} ps−i ui(aij, s−i) ≤ vi      ∀i ∈ N, aij ∉ Si      (2.2)
∑_{aij∈Si} paij = 1                     ∀i ∈ N                (2.3)
paij ≥ 0                                ∀i ∈ N, aij ∈ Si      (2.4)
paij = 0                                ∀i ∈ N, aij ∉ Si      (2.5)
The existence of a solution of the above problem implies the existence of a Nash equilibrium with
support S.
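The support enumeration idea can be sketched for 2-player games. This is a simplified illustration, not the full PNS algorithm: it enumerates only balanced supports, solves the indifference conditions (2.1), (2.3)–(2.5) directly as a linear system, and then checks the best-response condition (2.2). `numpy` is assumed available; the test game is matching pennies.

```python
import numpy as np
from itertools import combinations

def pns_2player(U1, U2, tol=1e-9):
    """Simplified PNS sketch for 2-player games (balanced supports only)."""
    m, n = U1.shape
    for k in range(1, min(m, n) + 1):                 # smallest supports first
        for S1 in combinations(range(m), k):
            for S2 in combinations(range(n), k):
                # player 1 indifferent over S1: U1[a, S2] . y = v1, sum(y) = 1
                A = np.zeros((k + 1, k + 1)); b = np.zeros(k + 1)
                for r, a in enumerate(S1):
                    A[r, :k], A[r, k] = U1[a, list(S2)], -1.0
                A[k, :k], b[k] = 1.0, 1.0
                # player 2 indifferent over S2: x . U2[S1, a] = v2, sum(x) = 1
                C = np.zeros((k + 1, k + 1)); d = np.zeros(k + 1)
                for r, a in enumerate(S2):
                    C[r, :k], C[r, k] = U2[list(S1), a], -1.0
                C[k, :k], d[k] = 1.0, 1.0
                try:
                    ys, xs = np.linalg.solve(A, b), np.linalg.solve(C, d)
                except np.linalg.LinAlgError:
                    continue
                y, v1 = ys[:k], ys[k]
                x, v2 = xs[:k], xs[k]
                if (x < -tol).any() or (y < -tol).any():
                    continue                          # probabilities must be >= 0
                xf = np.zeros(m); xf[list(S1)] = x
                yf = np.zeros(n); yf[list(S2)] = y
                # best-response condition (2.2) outside the supports
                if (U1 @ yf <= v1 + tol).all() and (xf @ U2 <= v2 + tol).all():
                    return xf, yf
    return None

U1 = np.array([[1., -1.], [-1., 1.]])                 # matching pennies
U2 = -U1
x, y = pns_2player(U1, U2)
print(x, y)   # [0.5 0.5] [0.5 0.5]
```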
SGC introduced four kinds of mixed integer linear programming formulations of the Nash
equilibrium. According to the tests of the original authors, the first formulation performs best on
the GAMUT classes. This formulation will be introduced in Section 3.1.2.
LS-PNS is an adaptation of local search methods to PNS. It takes the support space as the
search space and, for each support, instead of solving the feasibility problem (2.1)–(2.5), offers
two options: find the ε-Nash equilibrium within the given support that minimizes ε, or find
the strategies that minimize the regret. The value of ε or of the regret is used as the measure of
the current support. Representing supports as vectors of 0s and 1s, the distance between
supports is defined as the Hamming distance between the vectors.
3 Approaches to Strong Nash Equilibrium
3.1 Developing a formulation
3.1.1 General idea
Thanks to Lemma 2.2.14, we can divide the formulation of strong Nash equilibrium into the
following sub-problems:
(1) Formulate the problem that x is a Nash equilibrium of game (N, A, u).
(2) Formulate the problem that xC is a Pareto optimal strategy profile of game (C, AC, uC^{xN∖C})
for every possible C ⊆ N with ∣C∣ ≥ 2.
As a consequence, we can split the complete formulation of strong Nash equilibrium into the
formulation of the Nash equilibrium and the formulation of Pareto optimality. The strong Nash
equilibrium formulation is obtained by putting together the Nash formulation and the 2^n − n − 1
Pareto optimality formulations, one for each coalition C with ∣C∣ ≥ 2.
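The coalitions that point (2) ranges over can be generated with a few lines of Python, which also confirms their count:

```python
from itertools import combinations

def coalitions(players):
    """All coalitions C with |C| >= 2 that need a Pareto optimality
    formulation (point (2) of Lemma 2.2.14)."""
    players = list(players)
    return [c for k in range(2, len(players) + 1)
              for c in combinations(players, k)]

# For n = 4 players there are 2**4 - 4 - 1 = 11 such coalitions
print(len(coalitions(range(4))))   # 11
```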
One may argue that it might be enough to consider only multilateral deviations to pure
strategies in the strong Nash equilibrium, just as we consider only the regrets of pure strategies
in the mixed integer linear programming formulation of the Nash equilibrium in Section 3.1.2,
despite the fact that the solution concept requires a strategy to be a best response with respect to
all the mixed strategies. In fact, this condition is not enough for the strong Nash equilibrium;
Example 3.1.1 is a counter-example.
Example 3.1.1. Consider the game in Fig. 3.1. There are two Nash equilibria: one pure,
(a3, a6), and one mixed, (½a1 + ½a2, ½a4 + ½a5). Focus on (a3, a6) and consider the possible pure
strategy multilateral deviations: there is no outcome that provides both agents a utility strictly
greater than 1. For instance, (a1, a4) is better for agent 1 than (a3, a6), but it is not for agent 2.
With (a2, a4) we have the reverse. However, (a3, a6) is not Pareto efficient, as shown by the
Pareto frontier in the figure. Indeed, (½a1 + ½a2, ½a4 + ½a5) strictly Pareto dominates (a3, a6).
The mixed equilibrium, instead, being on the Pareto frontier, is a strong Nash equilibrium.
In fact, for the Pareto optimality, we need a formulation that considers the entire Pareto frontier.
Example 3.1.2 illustrates that in a two-agent game the strong Nash equilibrium must be on
the Pareto frontier.
                       agent 2
                a4      a5      a6
agent 1   a1    5,0     0,5     0,0
          a2    0,5     5,0     0,0
          a3    0,0     0,0     1,1

Figure 3.1: Example of a two agent game for which considering pure multilateral deviations is not enough
[Figure: plot of the expected utilities (E[U1], E[U2]), both axes from 0 to 5, showing the Pareto frontier with the SNE and the NE marked.]
Figure 3.2: The Pareto frontier of the game in Figure 3.1
Example 3.1.2. Consider the game in Fig. 3.3, where ρ is an arbitrarily small positive value.
There is a unique Nash equilibrium, (a3 , a6 ), and this Nash equilibrium is Pareto efficient being
on the Pareto frontier. Notice that the sum of the agents’ utilities at (a3 , a6 ) is strictly smaller
than the sum at (a1 , a4 ).
                       agent 2
                a4      a5      a6
agent 1   a1    5,0     0,0     0,ρ
          a2    0,0     0,5     0,ρ
          a3    ρ,0     ρ,0     2,2

Figure 3.3: Example of strong Nash equilibrium on the Pareto frontier
3.1.2 Formulations of Nash Equilibrium
In this section we introduce formulations for the Nash Equilibrium. We denote by Ui the utility
matrix of player i, by xi the strategy played by player i, by xij the probability of player i playing
action j, by 1 a vector whose elements are all 1, and by M1 a matrix whose elements are all 1.
[Figure: plot of the expected utilities (E[U1], E[U2]), both axes from 0 to 5, showing the Pareto frontier with the SNE marked.]
Figure 3.4: The Pareto frontier of the game in Figure 3.3
NLCP The problem of finding a Nash Equilibrium can be formulated as a non-linear
complementarity problem. The idea comes from the definition of Nash equilibrium: in a Nash
Equilibrium, the strategy played by every agent i must be a best response to the strategies played
by the other players. This statement can be formulated as:

max Ui ⋅ ∏j≠i xj      ∀i ∈ N      (3.1)
xi ≥ 0                ∀i ∈ N      (3.2)
1T ⋅ xi = 1           ∀i ∈ N      (3.3)

In equation (3.1) the term Ui ⋅ ∏j≠i xj represents the expected utility of player i; note that it is
a multi-dimensional product, different from the ordinary multiplication between vectors and
matrices.
If we consider the system (3.1)–(3.3) as the primal, we can write the dual problem:

min vi                          ∀i ∈ N      (3.4)
1vi − Ui ⋅ ∏j≠i xj ≥ 0          ∀i ∈ N      (3.5)
According to the complementary slackness theorem, solving system (3.1)–(3.3) is equivalent to
solving:

xi ≥ 0                              ∀i ∈ N      (3.6)
1vi − Ui ⋅ ∏j≠i xj ≥ 0              ∀i ∈ N      (3.7)
xiT ⋅ (1vi − Ui ⋅ ∏j≠i xj) = 0      ∀i ∈ N      (3.8)
1T ⋅ xi = 1                         ∀i ∈ N      (3.9)
Although there is no generic solver for non-linear complementarity problems, in the case of two
players the problem reduces to a linear complementarity problem, which allows us to solve it
using the algorithm mentioned in [8].
MILP The mixed integer linear programming formulation was introduced by Sandholm-Gilpin-Conitzer [24]. It is based on the following observation: in each Nash equilibrium, every pure
strategy is either played with probability 0 or has regret 0; conversely, any vector of mixed strategies
in which every pure strategy is either played with probability 0 or has regret 0 is a
Nash equilibrium.
This observation can be explained by the fact that, given the strategies of the other players, the
best response of the current player has regret 0, and only the pure strategies that have regret 0 can
be in the support of the strategy played in a Nash equilibrium.
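The observation behind the MILP formulation can be checked numerically: for a 2-player game and a candidate profile (x, y), every pure strategy must satisfy probability 0 or regret 0. A sketch assuming `numpy`, illustrated on matching pennies:

```python
import numpy as np

def supports_nash(x, y, U1, U2, tol=1e-9):
    """Check the observation behind the MILP formulation for a 2-player game:
    (x, y) is a Nash equilibrium iff every pure strategy is either played
    with probability 0 or has regret 0."""
    r1 = (U1 @ y).max() - U1 @ y      # regrets of player 1's pure strategies
    r2 = (x @ U2).max() - x @ U2      # regrets of player 2's pure strategies
    return ((np.minimum(x, r1) <= tol).all()
            and (np.minimum(y, r2) <= tol).all())

U1 = np.array([[1., -1.], [-1., 1.]])   # matching pennies
U2 = -U1
print(supports_nash(np.array([.5, .5]), np.array([.5, .5]), U1, U2))  # True
print(supports_nash(np.array([1., 0.]), np.array([1., 0.]), U1, U2))  # False
```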
The formulation is given by:
1T ⋅ xi = 1                               ∀i ∈ N                  (3.10)
uaij = ∑_{s−i∈S−i} ps−i ui(aij, s−i)      ∀i ∈ N, ∀aij ∈ Ai       (3.11)
ui ≥ uaij                                 ∀i ∈ N, ∀aij ∈ Ai       (3.12)
raij = ui − uaij                          ∀i ∈ N, ∀aij ∈ Ai       (3.13)
paij ≤ 1 − baij                           ∀i ∈ N, ∀aij ∈ Ai       (3.14)
raij ≤ M baij                             ∀i ∈ N, ∀aij ∈ Ai       (3.15)
uaij ≥ 0                                  ∀i ∈ N, ∀aij ∈ Ai       (3.16)
raij ≥ 0                                  ∀i ∈ N, ∀aij ∈ Ai       (3.17)
baij ∈ {0, 1}                             ∀i ∈ N, ∀aij ∈ Ai       (3.18)
paij ≥ 0                                  ∀i ∈ N, ∀aij ∈ Ai       (3.19)
ui ≥ 0                                    ∀i ∈ N                  (3.20)
In this formulation the variables are uaij, ui, raij, baij, paij. The strategy to be found is x = (x1, x2, . . . , xn),
with xi = (pai1, . . . , paim), where aij ∈ Ai. Constraints (3.10) and (3.16)–(3.20) define the domains
of all the variables. Constraint (3.11) calculates the expected utility of the pure strategy aij
given the strategy profile s−i of the other players; ui represents the maximum utility player i can
obtain given the strategy profile s−i; raij is the regret of each action, calculated by (3.13); baij is a
binary variable, which must be 1 if the corresponding action aij is not played in x, and 0 otherwise.
Finally, M is a big number; in practice we can set it to the biggest possible regret.
We are free to add a linear objective function to the formulation (3.10)–(3.20), since the original
version does not have one. This allows us to find a Nash equilibrium that satisfies a special
condition, such as the Nash equilibrium whose social welfare is maximum. As long as
the objective function is linear, the problem remains a mixed integer linear program, which can be
solved by a generic solver such as CPLEX [15].
3.1.3 Formulations of Pareto optimality
As mentioned in Section 2.2.1, Pareto optimality defines a partial ordering of strategy profiles.
This ordering can be captured by formulating a multi-objective problem, since it is natural
to connect the partial ordering of Pareto optimality to the concept of optimality in a multi-objective
scenario. We associate these two concepts by introducing the concept of optimality with respect
to a specific cone.
Definition 3.1.3. A vector y ∈ Rn is a convex combination of m vectors {x1, . . . , xm} of Rn
when it is possible to find m real numbers λ1, . . . , λm such that

∑_{i=1}^{m} λi xi = y,      λi ≥ 0    ∀i = 1, . . . , m
Definition 3.1.4. A set D ⊆ Rn is a cone if any convex combination of any subset of D
belongs to D.
We can use the concept of cone to define a partial ordering of vectors in Rn .
Notation 3.1.5. For any vectors z1 and z2 in Rn, we write z1 ≤D z2 if z2 − z1 ∈ (D ∖ {0}); in
this case we also say that z2 dominates z1 with respect to D.
According to the definitions and notations introduced, it is clear that z1 ≤D z2 if and only
if z2 ∈ z1 + (D ∖ {0}). This is a partial ordering over Rn, since there are also pairs of vectors such
that z2 − z1 ∉ (D ∖ {0}) and z1 − z2 ∉ (D ∖ {0}).
Definition 3.1.6. A vector x∗ ∈ Z of a solution space is efficient with respect to a cone D if
∄x ∈ Z such that x ≤D x∗.
If we take the cone D = Rn+, we get exactly the concept of Pareto optimality. More precisely,
it is exactly the opposite of the Pareto optimality introduced in Section 2.2.1, because here the
optimality is defined when a vector is on the left side of the relation ≤D. This causes no problem
in the formulation we will discuss, since we can switch the objective functions from maximization
to minimization. We express the constraints for the membership of a strategy in the Pareto
frontier as:

max Ui ⋅ ∏j∈N xj      ∀i ∈ N      (3.21)
xi ≥ 0                ∀i ∈ N      (3.22)
1T ⋅ xi = 1           ∀i ∈ N      (3.23)

The variables of this formulation are x = (x1, . . . , xn); the term Ui ⋅ ∏j∈N xj represents the utility
of player i when x is played. To match the formulation to the concept of optimality with respect
to a cone, we can change the objective function to:

min −Ui ⋅ ∏j∈N xj      ∀i ∈ N      (3.24)
Necessary formulation Multi-objective problems can be transformed into feasibility problems by
using the Karush-Kuhn-Tucker conditions. However, in the case of Pareto optimality, the problem
obtained after applying the Karush-Kuhn-Tucker conditions admits not only global Pareto optimal
points, but also local Pareto optimal points. Furthermore, the utility functions of strategy games are
in general non-linear and non-convex, which means that we can only apply the Karush-Kuhn-Tucker necessary conditions [3]. Therefore, the formulation obtained through the Karush-Kuhn-Tucker conditions is only necessary for our original problem. The Karush-Kuhn-Tucker necessary
conditions can be written as follows:
Theorem 3.1.7. Let X be a non-empty open set in Rn , and let fi ∶ Rn ↦ R for i = 1, . . . , n and
gj ∶ Rn ↦ R for j = 1, . . . , m. Consider the problem P to minimize [f1 (x), . . . , fn (x)] subject to
x ∈ X and gj (x) ≤ 0 for j = 1, . . . , m. Let x be a feasible solution, and denote J = {j∶ gj (x) = 0}.
Suppose that fi and gj for j ∈ J are differentiable at x and that gj for j ∉ J are continuous at
x. Furthermore, suppose that ∇gj (x) for j ∈ J are linearly independent. If x solves problem P
locally, there exist λ = (λ1, . . . , λn) and scalars uj for j ∈ J such that

∑_{i=1}^{n} λi ∇fi(x) + ∑_{j∈J} uj ∇gj(x) = 0      (3.25)
uj ≥ 0      ∀j ∈ J                                 (3.26)
λi ≥ 0      i = 1, . . . , n                       (3.27)
λ > 0                                              (3.28)
In addition to the above assumptions, if gj for each j ∉ J is also differentiable at x, the foregoing
conditions can be written in the following equivalent form:

∑_{i=1}^{n} λi ∇fi(x) + ∑_{j=1}^{m} uj ∇gj(x) = 0      (3.29)
uj gj(x) = 0      j = 1, . . . , m                      (3.30)
uj ≥ 0            j = 1, . . . , m                      (3.31)
λi ≥ 0            i = 1, . . . , n                      (3.32)
λ > 0                                                   (3.33)
If we take our formulation of Pareto optimality (3.21)–(3.23), considering a strategy game
(N, A, u) with n players and m actions per player, and apply Theorem 3.1.7, we have fi =
−Ui ∏j∈N xj for i = 1, . . . , n, together with n ⋅ m functions gj of the form −xik ≤ 0 (one for each
probability xik of player i playing action k), n functions of the form xi1 + ⋅⋅⋅ + xim − 1 ≤ 0 and
another n functions of the form −xi1 − ⋅⋅⋅ − xim + 1 ≤ 0 (which together encode the equality
constraints (3.23)).
There is another drawback of the Karush-Kuhn-Tucker conditions, besides the fact that the
result is only necessary: after applying the conditions, the formulation is in general non-linear in
the case of the Pareto optimality formulation. We therefore need other approaches that are more
applicable in practice.
Sufficient formulation The main reason for applying the Karush-Kuhn-Tucker conditions to the
Pareto optimality formulation is to eliminate the multi-objective function. Another way to do
this is to use the Lagrange function.
Definition 3.1.8. Given a multi-objective function [f1(x), . . . , fn(x)] with fi: Rn ↦ R, the
function

L(x, λ) = ∑_{i=1}^{n} λi fi(x)

is the Lagrange function of [f1(x), . . . , fn(x)], where λ = (λ1, . . . , λn).
In a multi-objective problem, instead of maximizing or minimizing the multi-objective function,
we can maximize or minimize the corresponding Lagrange function. We can update our
Pareto optimality formulation as:

max ∑_{i=1}^{n} (λi Ui ⋅ ∏j∈N xj)      (3.34)
xi ≥ 0         ∀i ∈ N                  (3.35)
1T ⋅ xi = 1    ∀i ∈ N                  (3.36)
λ > 0                                  (3.37)
λi ≥ 0         ∀i ∈ N                  (3.38)
Formulation (3.34)–(3.38) is non-linear in general, but we can transform it into a linear
complementarity problem if we consider correlated strategies, which allows us to give a sufficient
formulation for Pareto optimality. We define the following primal problem:

max ∑_{i=1}^{n} (λi Ui) x      (3.39)
x ≥ 0                          (3.40)
1T ⋅ x = 1                     (3.41)
λ > 0                          (3.42)
λi ≥ 0    ∀i ∈ N               (3.43)
Formulation (3.39)–(3.43) solves the Pareto optimality problem considering correlated strategies.
The corresponding dual problem is:

min v                              (3.44)
1v − ∑_{i=1}^{n} Ui λi ≥ 0         (3.45)
Applying the complementary slackness theorem, the primal problem (3.39)–(3.43) is equivalent
to:

x ≥ 0                                  (3.46)
1T ⋅ x = 1                             (3.47)
λ > 0                                  (3.48)
λi ≥ 0    ∀i ∈ N                       (3.49)
1v − ∑_{i=1}^{n} Ui λi ≥ 0             (3.50)
xT ⋅ (1v − ∑_{i=1}^{n} Ui λi) = 0      (3.51)
The problem (3.46)–(3.51) is a linear complementarity problem, which can be solved in practice
using the algorithm mentioned in [8].
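For fixed weights λ, the objective (3.39) is linear over the simplex of correlated strategies, so an optimum is attained at a vertex, i.e. a single outcome; with λ strictly positive that outcome is Pareto optimal. A small sketch assuming `numpy`, using the modified prisoner's dilemma of Table 2.2 as illustrative data:

```python
import numpy as np

def scalarized_optimum(utils, lam):
    """Maximize sum_i lam_i * U_i . x over correlated strategies (3.39)-(3.43)
    for fixed weights lam: the maximum of a linear function over the simplex
    is attained at a vertex, i.e. a single outcome."""
    W = sum(l * U for l, U in zip(lam, utils))   # weighted utility per outcome
    idx = np.unravel_index(np.argmax(W), W.shape)
    x = np.zeros(W.size)                          # correlated strategy vector
    x[np.ravel_multi_index(idx, W.shape)] = 1.0
    return idx, x

# Modified prisoner's dilemma (Table 2.2), lam = (1, 1)
U1 = np.array([[10., 13.], [1., 12.]])
U2 = np.array([[10., 1.], [13., 12.]])
idx, x = scalarized_optimum([U1, U2], (1.0, 1.0))
print(idx)   # (1, 1): the outcome (cooperate, cooperate)
```

Solving (3.46)–(3.51) additionally treats λ as a variable; this sketch only shows why, for any fixed positive λ, the optimum lies on the Pareto frontier.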
3.2 Verification for Strong Nash equilibrium
3.2.1 General idea
In this section, we focus on the problem of verifying whether a mixed strategy profile is a strong
Nash equilibrium. As in Section 3.1, we divide the problem into the verification of the Nash
equilibrium and the verification of Pareto optimality.
The verification of the Nash equilibrium can be done easily, since when the strategy profile x
is known all the formulations presented in Section 3.1.2 become linear. It remains to be discussed
how Pareto optimality can be verified, since we have not developed any exact formulation
for it.
Once the problem of verifying Pareto optimality is solved, we easily obtain an algorithm
to verify whether a Nash equilibrium is also a strong Nash equilibrium. We report this algorithm
briefly here as Algorithm 1. It enumerates all the possible coalitions of agents (except the empty
Algorithm 1 verifySNE(x)
1: E ← enumerate(N′ ⊆ N: N′ ≠ ∅)
2: for all elements of E do
3:    x′′ = (xi)i∈N′
4:    (ϑ, x′) ← verifyPareto(N′, {Ui ∏j∉N′ xj}i∈N′, x′′)
5:    if ¬ϑ then
6:       return (false, N′, x′)
7: return (true, ∅, ∅)
coalition), and for each coalition it verifies whether the coalition strategy is Pareto efficient. If it
is not, the algorithm returns the coalition N′ for which a multilateral deviation is possible and
a strongly Pareto dominating coalition strategy x′.
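The loop structure of Algorithm 1 can be sketched as follows. This is a hypothetical simplification restricted to pure strategy profiles and pure multilateral deviations; as Example 3.1.1 shows, pure deviations alone are not sufficient, so the full algorithm must run the Pareto verification of Section 3.2.2 in place of the inner check. `numpy` is assumed; the test data is the game of Table 2.3.

```python
import numpy as np
from itertools import combinations, product

def verify_sne_pure(profile, utils):
    """Sketch of Algorithm 1 for pure profiles and pure deviations only.
    utils[i] is player i's utility array indexed by the joint action."""
    n = len(profile)
    shape = utils[0].shape
    base_u = [U[tuple(profile)] for U in utils]
    for k in range(1, n + 1):
        for C in combinations(range(n), k):          # every coalition
            for dev in product(*[range(shape[i]) for i in C]):
                joint = list(profile)
                for i, a in zip(C, dev):
                    joint[i] = a
                joint = tuple(joint)
                if joint == tuple(profile):
                    continue
                if all(utils[i][joint] > base_u[i] for i in C):
                    return False, C, joint           # profitable joint deviation
    return True, None, None

# Table 2.3: agent 1 picks a1/a2, agent 2 picks a3/a4
U1 = np.array([[4, 0], [0, 2]])
U2 = np.array([[4, 0], [0, 2]])
print(verify_sne_pure((0, 0), [U1, U2])[0])   # True: (a1, a3) survives
print(verify_sne_pure((1, 1), [U1, U2])[0])   # False: both agents prefer (a1, a3)
```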
3.2.2 Pareto verification algorithms
One method to verify whether a strategy profile x̄ is Pareto optimal in a strategy game (N, A, u)
with n players is to use the following formulation:

max ∑_{i=1}^{n} εi                               (3.52)
Ui ⋅ ∏j∈N xj ≥ Ui ⋅ ∏j∈N x̄j + εi    ∀i ∈ N      (3.53)
xi ≥ 0                              ∀i ∈ N      (3.54)
1T ⋅ xi = 1                         ∀i ∈ N      (3.55)
εi ≥ 0                              ∀i ∈ N      (3.56)

The current strategy x̄ is not on the Pareto frontier if formulation (3.52)–(3.56) returns a solution
with positive value; the dominating profile is obtained through the variable x. One disadvantage
of this formulation is that it is non-linear in general, so it is necessary to find a way to solve it.
First we consider constraint (3.53): if we remove the slack variables ǫ and change the ≥ sign to
>, we should remove constraint (3.56) as well. At the same time, we should remove the objective
function, because there are no variables left in it. As a consequence, the problem becomes a
feasibility problem:
Ui ⋅ ∏_{j∈N} x′j > Ui ⋅ ∏_{j∈N} xj    ∀i ∈ N    (3.57)
x′i ≥ 0    ∀i ∈ N    (3.58)
1ᵀ ⋅ x′i = 1    ∀i ∈ N    (3.59)
Note that the term Ui ⋅ ∏_{j∈N} xj is a constant which depends only on the input. Provided that
the probabilities of a mixed strategy profile sum to 1, we can rewrite condition (3.57) as:

(Ui − M1 ⋅ (Ui ⋅ ∏_{j∈N} xj)) ⋅ ∏_{j∈N} x′j > 0    ∀i ∈ N    (3.60)

Here M1 denotes the matrix whose elements are all equal to 1.
If we define an auxiliary (n+1)-th player, whose utility matrix when he plays his i-th action is Un+1,i = Ui − M1 ⋅ (Ui ⋅ ∏_{j∈N} xj),
we can consider a minmax problem in the new (n+1)–player game:

min_{x1,...,xn} vn+1    (3.61)
vn+1 ≥ Un+1,i ⋅ ∏_{j∈N∖{n+1}} xj    ∀i ∈ N ∖ {n + 1}    (3.62)
1ᵀ ⋅ xi = 1    ∀i ∈ N    (3.63)
xi ≥ 0    ∀i ∈ N    (3.64)
The auxiliary player has n actions, each corresponding to one player in the original problem. The
matrix Un+1,i denotes the utility matrix of the auxiliary player when he plays action i. System
(3.58)–(3.60) has a solution if and only if a positive minmax value vn+1 for the auxiliary player is
found.
One way to decide whether system (3.61)–(3.64) admits a positive minmax value of the auxiliary
player is to enumerate the supports of the players up to size n, thanks to the following theorem,
which can be seen as a corollary of the result in [12]:
Theorem 3.2.1. Given an m × ⋯ × m × k (n+1)–player game with m > k, there must exist a
minmax strategy profile of players 1, 2, . . . , n, with support of size k for each player, which minimizes the
maximum utility of the (n + 1)-th player.
Proof. The proof of Theorem 3.2.1 is mainly based on the study in [25], which gives us an
important result:
Theorem 3.2.2. Given an m × k 2–player game with m > k, there exists a minmax strategy profile
of player 1 with support of size k.
Now consider an m × ⋯ × m × k (n+1)–player game. Adopting formulation (3.61)–(3.64), if
we have a solution x∗1, . . . , x∗n in which the support of player 1 has size greater than k, consider the system:

min_{x1} vn+1    (3.65)
vn+1 ≥ Un+1,i ⋅ x1 ⋅ ∏_{j∈N∖{1,n+1}} x∗j    ∀i ∈ N ∖ {n + 1}    (3.66)
1ᵀ ⋅ x1 = 1    (3.67)
x1 ≥ 0    (3.68)

This is equivalent to computing a minmax strategy of a 2–player game: by Theorem 3.2.2, we know
that there exists an x1 whose support has size k. Repeating the same reasoning on the other (n − 1)
players, we can find a minmax strategy profile in which each player's support has size k.
According to the above theorem, we can report Algorithm 2 as a verification algorithm for
Pareto optimality.

Algorithm 2 verifyPareto(N, {Ui}i∈N, x)
1: E ← enumerate(supp1, . . . , suppn : ∀i, ∣suppi∣ = n)
2: for all elements of E do
3:     if ∃(x′1, . . . , x′n) ∈ R^{n²} : [ψ(N, x, supp)] then
4:         return (false, x′)
5: return (true, ∅)
ψ(N, x, supp) =  ⋀_{i∈N} (∑_{a1∈supp1} ⋯ ∑_{an∈suppn} Un+1,i(a1, . . . , an) ⋅ ∏_{k∈N} x′k,ak > 0) ∧
                 ⋀_{i∈N} (∑_{ai∈suppi} x′i,ai = 1) ∧
                 ⋀_{i∈N, ai∈suppi} (x′i,ai ≥ 0)
The algorithm enumerates (by means of enumerate) all the possible joint supports of size n; to
be precise, we need to enumerate ∏_{i∈N} (mi choose n) different joint supports of size n. For each joint
support, we need to evaluate the formula ψ, and we can do that by using the quantifier elimination
algorithm presented in [2]. If ψ is true, then there is a strategy profile x′ that Pareto dominates
the input x.
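The number of joint supports to enumerate, ∏_{i∈N} (mi choose n), can be illustrated with a small sketch (the action counts used here are hypothetical):

```python
from itertools import combinations, product
from math import comb

def joint_supports(action_counts, support_size):
    # All joint supports (supp_1, ..., supp_n) with |supp_i| = support_size,
    # where player i has action_counts[i] actions.
    per_player = [list(combinations(range(m), support_size))
                  for m in action_counts]
    return product(*per_player)

# Two players with 4 actions each, supports of size 2:
supports = list(joint_supports([4, 4], 2))
print(len(supports))  # 36 = C(4,2) * C(4,2)
```

Each element of the product is one candidate joint support on which ψ is evaluated.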
4 Iterative algorithms for strong Nash equilibrium

4.1 Basic branch and bound algorithm
Given the difficulties in deriving a finite set of (necessary and sufficient) constraints for the membership of a strategy in the Pareto frontier, we propose an algorithm that iterates between
the computation of an NE and the verification of a strong Nash equilibrium. Basically, the algorithm computes an NE and verifies whether or not it is a strong Nash equilibrium. If not, it
computes a new NE, excluding the space of strategies that are dominated by the strategy profile
found during the strong Nash equilibrium verification. This process repeats until a strong Nash
equilibrium is found or it is proven that none exists. Our idea is supported by the experimental
evaluation presented in [24], where the authors show that the compute time to find an NE (even
an optimal one) is negligible with respect to the compute time to enumerate all the Nash equilibria,
so calling an NE–finding oracle a number of times can be faster than enumerating all the Nash
equilibria.
The algorithm we propose is essentially a spatial branch–and–bound algorithm. A state is
denoted by s and the set of states by S. Each state s is associated with a convex subspace Vs of
the solution space of the problem (3.6)–(3.9). Specifically, Vs is defined by box constraints over
v1 and v2: v̲1,s ≤ v1 ≤ v̄1,s and v̲2,s ≤ v2 ≤ v̄2,s (where v̲i and v̄i are the lower and upper bounds
of the box, respectively). We denote by s0 the state in which v̲i = U̲i and v̄i = Ūi for every i ∈ N,
where U̲i and Ūi are respectively the minimum and maximum entries of Ui. Any strong Nash
equilibrium, if one exists, is in Vs0.
The algorithm is reported in Algorithm 3 and works as follows. At Step 1, set S is populated
with the initial states. Then the algorithm repeats Steps 3–10 while S is not empty. At Step 3 a
state s is removed from S and at Step 4 an oracle is called to find an NE in subspace Vs . If there
is such an NE x, at Step 6 the algorithm verifies whether x is a strong Nash equilibrium and, in
the affirmative case, x is returned. In the negative case in which x is Pareto dominated by x′ ,
at Step 9 new states are generated from the current state s in which the subspace dominated by
x′ is excluded. At Step 10, redundant states in S are pruned. We now discuss the subroutines
that the algorithm calls in more detail.
S ← initialize: Generates the initial set S of states. We consider two possible initializations.
In the first (init1 ), we assign S = {s0 }. In the second (init2 ), we compute all the pure strategy
profiles that are resilient to pure strategy multilateral deviations. This can be done in polynomial
time by comparing agents’ utilities provided by each outcome w.r.t the agents’ utilities provided
Algorithm 3 findingSNE
1: S ← initialize
2: repeat
3:     s ← remove(S)
4:     (ϑ, x) ← findNE(s)
5:     if ϑ = true then
6:         (ϑ, ⋅, x′) ← verifySNE(x)
7:         if ϑ = true then
8:             return x
9:         S = S ∪ branch(s, x′)
10:    S ← filter(S)
11: until S is empty
12: return false
by all the other outcomes. Call X the set of pairs (v̂1,h, v̂2,h) of the agents' expected utilities given
by these strategy profiles. We generate a number of states to exclude the subspace dominated
by the elements in X. Order the elements of X in increasing order of v̂1,h and call h̄ the number
of elements in X. The following states s are generated: for every h ∈ {1, . . . , h̄ − 1} a state s is
generated with Vs = [v̂1,h, v̂1,h+1] × [v̂2,h+1, Ū2]; two additional states with zero–measure Vs are
generated, the first with Vs = [U̲1, v̂1,1] × [v̂2,1, v̂2,1], the second with Vs = [v̂1,h̄, v̂1,h̄] × [U̲2, v̂2,h̄].
With respect to init1, init2 excludes some dominated subspaces, preventing the algorithm from searching
for an NE in such subspaces; on the other hand, it can introduce a large number of states,
requiring the algorithm to call the NE–finding oracle many more times.
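The pure profiles computed by init2 can be sketched as follows, under one reading of resilience to pure strategy multilateral deviations in a 2-player game: an outcome is kept if no other outcome gives both agents a strictly higher utility. The example matrices are hypothetical:

```python
def resilient_pure_profiles(U1, U2):
    # Keep the outcomes (a1, a2) for which no other outcome gives BOTH
    # agents strictly higher utility; this is a polynomial-time scan
    # over all pairs of outcomes.
    outcomes = [(i, j) for i in range(len(U1)) for j in range(len(U1[0]))]
    kept = []
    for (i, j) in outcomes:
        dominated = any(U1[k][l] > U1[i][j] and U2[k][l] > U2[i][j]
                        for (k, l) in outcomes)
        if not dominated:
            kept.append((i, j))
    return kept

U1 = [[3, 0], [5, 1]]
U2 = [[3, 5], [0, 1]]
print(resilient_pure_profiles(U1, U2))  # [(0, 0), (0, 1), (1, 0)]
```

The utilities of the kept profiles then give the pairs (v̂1,h, v̂2,h) in X.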
Example 4.1.1. Consider the game reported in Fig. 4.1. We show the composition of X when
init2 is used. X is composed of four elements ω1, ω2, ω3, ω4. The generated states s1, s2, s3
are reported in the figure as boxes delimited by dashed lines: Vs1 = [3, 3.4] × [3.4, 5], Vs2 =
[3.4, 5] × [3, 5], Vs3 = [5, 10] × [0, 5] (for simplicity, we omit the states with zero–measure Vs in
what follows).
s ← remove(S): Receives the set of states S as input, removes a state s from S according
to some given strategy (e.g., depth first, breadth first, random), and returns s.
(ϑ, x) ← findNE(s): Receives a state s as input, searches for an NE in the subspace bounded
by Vs , and returns ϑ = true and an NE x, if there is an NE where agents’ utilities are in Vs , and
ϑ = false otherwise. The introduction of the constraints Vs makes some NE–finding algorithms
available in the literature inapplicable, e.g., Lemke–Howson. NE–finding oracles can
be: PNS [22], LS–PNS [4], and MIP Nash [24].
The adoption of MIP Nash as the NE–finding oracle can easily be done: recalling the MILP
formulation in Section 3.1.2, the variable vi represents exactly the expected utility of each player,
so we can impose the constraints Vs over the variables vi.
Example 4.1.2. In Fig. 4.2: findNE(s1 ) would return the NE with v1 = v2 = 3.4; findNE(s2 )
could return either the NE returned with s1 or the mixed NE with v1 = v2 = 4; findNE(s3 ) would
return the NE with v1 = 8.2 and v2 = 0; findNE(s) with s ∈ {s4 , s5 , s6 } would return no NE;
findNE(s7 ) would return the mixed NE with v1 = v2 = 4.
S′ ← branch(s, x): Receives a state s and a strategy profile x and returns a number of new
states in which the subspace dominated by x is excluded. The generation of the new states is
as follows. Call v̂1,s and v̂2,s the expected utilities of agents 1 and 2 respectively provided by x
                                       agent 2
              a8        a9       a10      a11       a12       a13       a14
agent 1  a1   5,0       5,3      3,5    −10,−10   −10,−10   −10,−10   −10,−10
         a2   0,0       3,5      5,3    −10,−10   −10,−10   −10,−10   −10,−10
         a3  10,0       5,0      0,1     −10,1    −10,−10   −10,−10   −10,−10
         a4  −10,−10  −10,−10  −10,−10   8.2,0    −10,−10   −10,−10   −10,−10
         a5  −10,−10  −10,−10  −10,−10  −10,−10     2,2     −10,−10   −10,−10
         a6  −10,−10  −10,−10  −10,−10  −10,−10   −10,−10     2,4     −10,−10
         a7  −10,−10  −10,−10  −10,−10  −10,−10   −10,−10   −10,−10   3.4,3.4

Figure 4.1: Example of two–agent game
[Figure 4.2 plots E[U2] against E[U1], showing the outcomes ω1–ω6, the Nash equilibria, the strong Nash equilibrium, and the states s1, s4, s5, s6 and s7 as boxes.]

Figure 4.2: Pareto frontier of game in Fig. 4.1
given as input. A new state s′ is generated with Vs′ = [v̲1,s, min{v̄1,s, v̂1,s}] × [v̂2,s, v̄2,s] and, if
min{v̄1,s, v̂1,s} ≠ v̄1,s, a new state s′′ is generated with Vs′′ = [min{v̄1,s, v̂1,s}, v̄1,s] × [v̲2,s, v̄2,s].
Example 4.1.3. In Fig. 4.2: branch(s3 , xω5 ) produces two states, s4 and s5 , where Vs4 = [5, 8.2]×
[0.4, 5] (dark grey) and Vs5 = [8.2, 10] × [0, 5] (light grey); branch(s2 , xω6 ) produces two states, s6
and s7 , where Vs6 = [3.4, 3.7] × [3.6, 5] (dark grey) and Vs7 = [3.7, 5] × [3, 5] (light grey). (xωj is
the strategy profile associated with ωj .)
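The box splitting performed by branch can be sketched as below; box = ((lo1, hi1), (lo2, hi2)) is an assumed encoding of Vs, and the example reproduces branch(s3, xω5) from Example 4.1.3:

```python
def branch(box, v_hat):
    # box encodes Vs as ((lo1, hi1), (lo2, hi2)); v_hat = (v1_hat, v2_hat)
    # are the agents' utilities under the dominating profile x'.
    (lo1, hi1), (lo2, hi2) = box
    cut = min(hi1, v_hat[0])
    # s': the part left of v1_hat, restricted above v2_hat.
    states = [((lo1, cut), (v_hat[1], hi2))]
    if cut != hi1:
        # s'': the part right of v1_hat, keeping the full v2 range.
        states.append(((cut, hi1), (lo2, hi2)))
    return states

# branch(s3, x_omega5): Vs3 = [5,10] x [0,5], dominator utilities (8.2, 0.4)
print(branch(((5, 10), (0, 5)), (8.2, 0.4)))
# [((5, 8.2), (0.4, 5)), ((8.2, 10), (0, 5))]
```

The output matches the states Vs4 and Vs5 of Example 4.1.3.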
S′ ← filter(S): Receives the set of states S as input and returns a set S′, subset of S, after
having pruned states. If there is a pair of states s, s′ with s′ ≠ s and such that v̄1,s ≤ v̄1,s′ and v̄2,s ≤
v̄2,s′, then we can remove s and s′ and add a new state s′′ with Vs′′ = [v̲1,s, v̄1,s′] × [v̲2,s′, v̄2,s′].
Example 4.1.4. In Fig. 4.2: suppose S = {s1 , s4 , s5 , s6 , s7 }. States s1 and s6 can be removed
and state s8 with Vs8 = [3, 3.7] × [3.6, 5] can be added.
Theorem 4.1.5. Algorithm 3 is sound and complete.
Soundness is by definition of findNE and of verifySNE. Completeness is due to filter and branch
removing only Pareto-dominated solution subspaces. When the Nash equilibria are finitely many, Algorithm 3
terminates in finite time. In the next section, we propose a variation able to deal also with
games admitting a continuum of Nash equilibria (however, while the extension of Algorithm 3
to more than two agents appears to be simple, it is not clear whether the algorithm described
in the next section can be extended to such a case).
4.2 Mixed integer linear programming for strong Nash
By exploiting mixed integer linear programming we can avoid having to spatially branch the
solution space into subspaces. We call this new algorithm Iterated MIP 2StrongNash, given that
it iteratively exploits a version of mixed integer programming Nash that is ad hoc for strong
Nash equilibrium. It is reported in Algorithm 4. With this algorithm, we have a unique state at
each iteration and, instead of generating additional states, we add non–convex constraints. The
branch and bound process per state is relegated to the mixed integer linear programming solver.
Algorithm 4 Iterated MIP 2StrongNash
1: initialize
2: while true do
3:     (ϑ, x) ← findNE
4:     if ϑ = false then
5:         return false
6:     (ϑ, ⋅, x′) ← verifySNE(x)
7:     if ϑ = true then
8:         return x
9:     branch
10:    filter
The core of this algorithm is the mixed integer linear programming Nash formulation for the
computation of an NE, reported in Section 3.1.2. Given that the program is a mixed integer
linear program, integer and/or linear constraints can easily be added. We now describe the
subroutines employed in the algorithm.
initialize. In the case of init1, no additional constraint is added. In the case of init2, after
having found X, we add three constraints for each element (v̂1,k, v̂2,k):

v1 ≥ v̂1,k + (U̲1 − Ū1) ⋅ zk    (4.1)
v2 ≥ v̂2,k + (U̲2 − Ū2) ⋅ (1 − zk)    (4.2)
zk ∈ {0, 1}    (4.3)

where constraints (4.1) and (4.2) exclude that v1 and v2 are simultaneously smaller than
v̂1,k and v̂2,k, respectively. A different binary variable zk is introduced for each k.
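The disjunction encoded by constraints (4.1)–(4.3) can be checked pointwise: a pair (v1, v2) is cut off exactly when no choice of zk satisfies both constraints. A minimal sketch, with hypothetical payoff bounds:

```python
def excluded(v, v_hat, u_min, u_max):
    # With z_k = 0 constraint (4.1) forces v1 >= v1_hat while (4.2) is
    # slack; with z_k = 1 the roles swap.  A point is excluded exactly
    # when neither choice of z_k makes both constraints hold, i.e. when
    # v1 < v1_hat and v2 < v2_hat simultaneously.
    def feasible(z):
        return (v[0] >= v_hat[0] + (u_min[0] - u_max[0]) * z and
                v[1] >= v_hat[1] + (u_min[1] - u_max[1]) * (1 - z))
    return not (feasible(0) or feasible(1))

u_min, u_max = (-10, -10), (10, 10)   # payoff bounds of the game
print(excluded((3, 3), (4, 4), u_min, u_max))   # True: dominated point
print(excluded((5, 3), (4, 4), u_min, u_max))   # False: v1 >= v1_hat
```

The term (U̲i − Ūi) plays the role of a big-M constant: it is large enough (in magnitude) to make the deactivated constraint vacuous.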
branch. After having found an NE x that is Pareto dominated by x′ with agents' utilities
(v̂1, v̂2), constraints (4.1)–(4.2) are added with a new binary variable zk and v̂1,k = v̂1 and
v̂2,k = v̂2.
filter. If there are two variables zk, zk′ with k ≠ k′ and such that v̂1,k ≤ v̂1,k′ and v̂2,k ≤ v̂2,k′,
then variable zk and the three constraints associated with zk can be removed.
It is easy to add features to the algorithm; here we discuss some relevant examples.
Social welfare maximization. A heuristic to find Pareto efficient solutions is to maximize the
cumulative utility of the agents. We can do it by adding an objective function to the MILP
formulation:

max v1 + v2    (4.4)
Notice that by employing this feature, Iterated MIP 2StrongNash returns an optimal strong
Nash equilibrium. In addition, this feature allows the algorithm to terminate within finite time
even when the game is degenerate and admits a continuum of Nash equilibria. This is essentially
because, although a continuum of Nash equilibria contains infinitely many Nash equilibria, the Nash
equilibrium of the continuum maximizing the social welfare is unique. Thus, either such a
Nash equilibrium is a strong Nash equilibrium and the algorithm terminates, or all the Nash
equilibria of the continuum are Pareto dominated and therefore they are all discarded in a
single iteration.
Upper bound over social welfare. We can exploit the social welfare maximization also to add
constraints over the solution space. Specifically, call v∗ the value of the objective function at
iteration k. We can add the constraint:

v1 + v2 ≤ sw̄    (4.5)

at iteration k + 1, where sw̄ = v∗. Therefore, sw̄ decreases monotonically at each iteration.
Lower bound over social welfare. We can find a linear lower bound over the social welfare
as follows. We order all the constraints (4.1)–(4.3) in increasing order of v̂1,k. Let k̄ be the
number of constraints (4.1)–(4.3). We define sw̲ = min_{k∈{1,...,k̄−1}} {v̂1,k, v̂2,k+1}; it is the lower
bound on the social welfare. If sw̲ > sw̄, then there is no strong Nash equilibrium. Otherwise, we
can call findNE with the additional constraint:

v1 + v2 ≥ sw̲    (4.6)

Constraints (4.5) and (4.6) are redundant; however, they can be used by the MILP solver to
speed up the compute time.
Box conditions over utilities of players. Considering the states presented in Section 4.1, any
strong Nash equilibrium, if one exists, is in Vs0. We can add the following constraints to help the
solver find a solution:

ui ≥ U̲i    ∀i ∈ N    (4.7)
ui ≤ Ūi    ∀i ∈ N    (4.8)

where U̲i and Ūi are the smallest and the biggest elements of the utility matrix of player i. We
denote Iterated MIP 2StrongNash with this feature enabled as Iterated Box MIP 2StrongNash.
4.3 Ad-hoc implementation for Pareto verification in 2–player games

Due to the fact that the general case of the quantifier elimination algorithm [2] is difficult to
implement, for 2–player games we can use an ad-hoc algorithm to solve:
ψ =  x1ᵀ ⋅ U3,1 ⋅ x2 > 0 ∧
     x1ᵀ ⋅ U3,2 ⋅ x2 > 0 ∧
     ∑i x1,i = 1 ∧ ∑i x2,i = 1 ∧
     x1 ≥ 0 ∧ x2 ≥ 0    (4.9)

where ψ is the formula to be solved by Algorithm 2 in the case of 2 players. Recalling Theorem 3.2.1,
we are sure that it is enough to enumerate supports of size 2; therefore, we can write:
U3,1 = ( U1(1,1)  U1(1,2)
         U1(2,1)  U1(2,2) )    (4.10)

U3,2 = ( U2(1,1)  U2(1,2)
         U2(2,1)  U2(2,2) )    (4.11)

x1 = (x11  1 − x11)ᵀ    (4.12)
x2 = (x21  1 − x21)ᵀ    (4.13)

ψ can then be reduced to the system:

(x11  1 − x11) ⋅ U3,1 ⋅ (x21  1 − x21)ᵀ > 0    (4.14)
(x11  1 − x11) ⋅ U3,2 ⋅ (x21  1 − x21)ᵀ > 0    (4.15)
x11, x21 ∈ [0, 1]    (4.16)
Resolving the matrix products, we get:

x11 x21 [U1(1,1) − U1(2,1) − U1(1,2) + U1(2,2)] + x11 [U1(1,2) − U1(2,2)] +
x21 [U1(2,1) − U1(2,2)] + U1(2,2) > 0    (4.17)

x11 x21 [U2(1,1) − U2(2,1) − U2(1,2) + U2(2,2)] + x11 [U2(1,2) − U2(2,2)] +
x21 [U2(2,1) − U2(2,2)] + U2(2,2) > 0    (4.18)
Let:

A1 = U1(1,1) − U1(2,1) − U1(1,2) + U1(2,2)    (4.19)
B1 = U1(1,2) − U1(2,2)    (4.20)
C1 = U1(2,1) − U1(2,2)    (4.21)
D1 = U1(2,2)    (4.22)
A2 = U2(1,1) − U2(2,1) − U2(1,2) + U2(2,2)    (4.23)
B2 = U2(1,2) − U2(2,2)    (4.24)
C2 = U2(2,1) − U2(2,2)    (4.25)
D2 = U2(2,2)    (4.26)
We can re-write Equation 4.17 and Equation 4.18 as:

x21 ⋅ (A1 x11 + C1) > −D1 − B1 x11    (4.27)
x21 ⋅ (A2 x11 + C2) > −D2 − B2 x11    (4.28)
Since A1, A2, B1, B2, C1, C2, D1 and D2 are known, our problem reduces to finding a pair of
values x11, x21 ∈ [0, 1] that satisfies both Equation 4.27 and Equation 4.28. If such a pair
x11, x21 exists, we have found the Pareto dominator.
We should discuss the following nine cases:
Case 1: A1 x11 + C1 > 0, A2 x11 + C2 > 0
Case 2: A1 x11 + C1 > 0, A2 x11 + C2 = 0
Case 3: A1 x11 + C1 > 0, A2 x11 + C2 < 0
Case 4: A1 x11 + C1 = 0, A2 x11 + C2 > 0
Case 5: A1 x11 + C1 = 0, A2 x11 + C2 = 0
Case 6: A1 x11 + C1 = 0, A2 x11 + C2 < 0
Case 7: A1 x11 + C1 < 0, A2 x11 + C2 > 0
Case 8: A1 x11 + C1 < 0, A2 x11 + C2 = 0
Case 9: A1 x11 + C1 < 0, A2 x11 + C2 < 0
In case 1, we can write the problem as:

A1 x11 + C1 > 0    (4.29)
x21 > (−D1 − B1 x11) / (A1 x11 + C1)    (4.30)
A2 x11 + C2 > 0    (4.31)
x21 > (−D2 − B2 x11) / (A2 x11 + C2)    (4.32)
x11, x21 ∈ [0, 1]    (4.33)

Considering Equation 4.30 and Equation 4.32, it is safe to fix x21 = 1, since this choice gives the
greatest possibility of finding a feasible x11. After that, Equations (4.29)–(4.32) are all first-order
inequalities in x11, and it can easily be verified whether a value in [0, 1] is admitted.
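The case-1 test can be sketched as below; a dense grid over x11 is a simple approximate stand-in for intersecting the first-order inequalities exactly, and the coefficients in the example are hypothetical:

```python
def case1_feasible(A1, B1, C1, D1, A2, B2, C2, D2, grid=1001):
    # Case 1 assumes A_i*x11 + C_i > 0 for both players.  Fixing
    # x21 = 1 (the most permissive choice), inequality (4.27)/(4.28)
    # becomes A_i*x11 + C_i > -D_i - B_i*x11, first order in x11.
    for k in range(grid):
        x11 = k / (grid - 1)
        if all(A * x11 + C > 0 and A * x11 + C > -D - B * x11
               for (A, B, C, D) in ((A1, B1, C1, D1), (A2, B2, C2, D2))):
            return True, x11
    return False, None

# Hypothetical coefficients: feasible for any x11 > 0.4.
print(case1_feasible(1, 0, 0.1, -0.5, 1, 0, 0.2, -0.5))
```

An exact implementation would instead intersect the solution intervals of the four linear inequalities with [0, 1].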
In case 2, the problem is:

A1 x11 + C1 > 0    (4.34)
x21 > (−D1 − B1 x11) / (A1 x11 + C1)    (4.35)
A2 x11 + C2 = 0    (4.36)
B2 x11 + D2 > 0    (4.37)
x11, x21 ∈ [0, 1]    (4.38)

As in case 1, we can safely fix x21 = 1; but here the problem is even simpler, because
we can fix x11 according to Equation 4.36. We should then verify the values of x11, x21 in
Equations 4.34, 4.35, 4.37 and 4.38. If all of them hold, we have found a Pareto dominator.
The problem we should consider in case 3 is:

A1 x11 + C1 > 0    (4.39)
x21 > (−D1 − B1 x11) / (A1 x11 + C1)    (4.40)
A2 x11 + C2 < 0    (4.41)
x21 < (−D2 − B2 x11) / (A2 x11 + C2)    (4.42)
x11, x21 ∈ [0, 1]    (4.43)
Equations 4.39, 4.41 and 4.43 are first-order inequalities in x11; considering only these equations,
we can obtain an interval (α, β) of x11 (if one exists). We can then restate the problem as: does
there exist a value of x11 ∈ (α, β) such that

(−D1 − B1 x11) / (A1 x11 + C1) < (−D2 − B2 x11) / (A2 x11 + C2)    (4.44)

If such a value of x11 can be found, we can take for x21 any value in the interval
((−D1 − B1 x11)/(A1 x11 + C1), (−D2 − B2 x11)/(A2 x11 + C2)), and a dominator is found.
Since the signs of the denominators of the left and right parts of Equation 4.44 are known, it can
be transformed into a standard second-order inequality in x11:

R x11² + S x11 + T > 0    (4.45)

We should further discuss the sign of R and the sign of δ = S² − 4RT. If R > 0 and δ ≤ 0,
any value of x11 ∈ (α, β) can be taken (except −S/(2R) in the δ = 0 case). If R > 0 and δ > 0, a feasible
value of x11 exists if α < (−S − √δ)/(2R) or β > (−S + √δ)/(2R). If R = 0, Equation 4.45 reduces to first order
and can be verified considering the interval (α, β). If R < 0 and δ ≤ 0, no dominator exists. If R < 0
and δ > 0, a feasible value of x11 exists if α < (−S − √δ)/(2R) and β > (−S + √δ)/(2R).
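The sign analysis of (4.45) over (α, β) can be cross-checked numerically; the sampling below is an approximate stand-in for the exact discriminant case analysis (the coefficients are hypothetical):

```python
def quadratic_positive_on(R, S, T, alpha, beta, samples=1000):
    # Does some x in the open interval (alpha, beta) satisfy
    # R*x^2 + S*x + T > 0, i.e. inequality (4.45)?
    for k in range(1, samples):
        x = alpha + (beta - alpha) * k / samples
        if R * x * x + S * x + T > 0:
            return True
    return False

print(quadratic_positive_on(1, 0, -0.25, 0, 1))   # True: x^2 > 0.25 for x > 0.5
print(quadratic_positive_on(-1, 0, -1, 0, 1))     # False: -x^2 - 1 is never positive
```

The first example corresponds to the R > 0, δ > 0 case (roots ±0.5, with (α, β) = (0, 1) extending past the larger root); the second to R < 0, δ < 0, where no dominator exists.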
Now we discuss the situation in case 5:

A1 x11 + C1 = 0    (4.46)
B1 x11 + D1 > 0    (4.47)
A2 x11 + C2 = 0    (4.48)
B2 x11 + D2 > 0    (4.49)
x11, x21 ∈ [0, 1]    (4.50)

It can easily be verified whether a feasible value of x11 exists; x21 can take any value in [0, 1].
In case 6:

A1 x11 + C1 = 0    (4.51)
B1 x11 + D1 > 0    (4.52)
A2 x11 + C2 < 0    (4.53)
x21 < (−D2 − B2 x11) / (A2 x11 + C2)    (4.54)
x11, x21 ∈ [0, 1]    (4.55)

we can fix x11 according to Equation 4.51 and fix x21 = 0. We should verify whether this pair of
values of x11, x21 satisfies the other equations. If yes, we have found a dominator.
Since cases 4, 7 and 8 are symmetric to cases 2, 3 and 6 respectively, they can be solved in the
same way and need no further explanation. Case 9 is the opposite of case 1: we can fix x21 = 0 in
case 9 and follow the same reasoning as in case 1.
5 Tests and results

In this section, we present the type of game instances we deploy to test our algorithms, the
results, and the comparison between the different algorithms.
5.1 Test setting and objectives
We experimentally evaluated the various configurations of Algorithm 3 and Algorithm 4 on 2–player
games. We considered four configurations of Algorithm 4 (both with and without box constraints):
mode 0, in which init1 is used and no additional features are active;
mode 1, in which init1 is used with the social welfare maximization and the upper bound over
the social welfare;
mode 2, which is init2 plus the upper bound over the social welfare;
mode 3, which is as mode 1 with the lower bound over the social welfare.
In Iterated Box MIP 2StrongNash, we evaluated 2 more configurations:
mode 4, in which init2 is used and no additional features are active (as mode 2 without the upper
bound over the social welfare);
mode 5, as mode 3 without the upper bound over the social welfare.
We considered the Nash equilibrium oracle MIP Nash. The branching criterion used for Algorithm 3
is depth-first.
We implemented the strong Nash equilibrium verification and the branch and bound in the
C programming language, and MIP Nash, as the Nash equilibrium finding oracle, in AMPL with
CPLEX. The Pareto verification part we implemented is the one presented in Section 4.3. We
used an Intel Core i7-2630QM processor and Linux kernel 2.6.32. Due to the fact that CPLEX
takes advantage of all the 8 logical cores provided by the Intel Core processor, while the
strong Nash equilibrium verification procedure is single-threaded, we measure the time spent by
the verification and by MIP Nash separately and accumulate the final time using the formula:

accumulated time = 7 ∗ Nash finding oracle time + overall time

where the overall time is the measured time for the whole algorithm to finish.
The game instances we deploy are GAMUT instances and MISSING instances, which will
be introduced in Section 5.2. The tests on GAMUT instances give us a general idea of the
performance of the algorithms and enable a comparison with other Nash equilibrium algorithms,
because GAMUT is the de-facto test-bed for Nash-equilibrium-finding algorithms.
The tests on MISSING instances allow us to analyse the performance under different circumstances:
different numbers of strong Nash equilibria and of Nash equilibria which are not strong Nash equilibria,
different sizes of the support of the strong Nash equilibrium, etc., especially in the case of
different modes applied. Moreover, since the MISSING instances are designed by us, whether an
instance admits a strong Nash equilibrium or not is known to us, so we can verify the correctness
of our algorithm.
To conclude, the objectives of the testing are:
• Measure the performance of our algorithms (i.e., how much time they spend until
termination) on GAMUT instances.
• Verify the correctness of the output (a strong Nash equilibrium found or not), using MISSING
instances.
• Analyse how the performance changes with different numbers of Nash equilibria, different
support sizes of the strong Nash equilibrium, etc., and compare the performance between different
initialization configurations and different features enabled.
5.2 Game instances

5.2.1 GAMUT instances

GAMUT is a suite of game generators designed for testing game-theoretic algorithms [19].
It is considered the ubiquitous benchmark testbed for algorithms calculating Nash equilibria.
In our tests, we selected a subset of GAMUT classes that are widely used in testing Nash
equilibrium algorithms: BertrandOligopoly, BidirectionalLEG, CovariantGame,
DispersionGame, GraphicalGame, LocationGame, MinimumEffortGame, PolymatrixGame, RandomGame,
TravelersDilemma, UniformLEG and WarOfAttrition. Some of the classes have different suffixes
following the name, denoting different initializations of the game generator.
5.2.2 MISSING instances

The name MISSING stands for MIxed Strategy SNE INstance Generator. In order to evaluate
the algorithm under different conditions regarding the size of the support of the strong Nash equilibrium,
the number of other Nash equilibria which are not strong Nash equilibria, etc., we need a class
of instances for which those properties of the game are known; we designed MISSING instances
over 2–player games to fulfil this requirement.
Each MISSING instance has either no strong Nash equilibrium, or exactly one strong Nash
equilibrium whose support per player is a multiple of 2, which is a parameter that can be tuned when
using the generator. Table 5.1 and Table 5.2 are game examples that admit a Nash equilibrium
with support 2 and support 4, respectively. The MISSING instances are based on instances whose
structure is shown in these tables; it is not difficult to generalize the structure to obtain instances
whose support is any multiple of 2.
The problem of the game in Table 5.2 is that it also admits Nash equilibria with support less
than 4. It is necessary to add other actions so that these are not Nash equilibria any more.
                  agent 2
                a3    a4
agent 1   a1   5,0   0,5
          a2   0,5   5,0

Table 5.1: Example game admitting Nash equilibrium with support 2

                     agent 2
                a5    a6    a7    a8
agent 1   a1   5,0   0,5   5,0   0,5
          a2   0,5   5,0   0,5   5,0
          a3   5,0   0,5   5,0   0,5
          a4   0,5   5,0   0,5   5,0

Table 5.2: Example game admitting Nash equilibrium with support 4
Table 5.3 shows a game that admits only one Nash equilibrium, with support 4. It can be obtained
from Table 5.2 by padding 4 actions for agent 1 and 4 actions for agent 2. Action a5 of agent 1
rules out all Nash equilibria in which agent 2 does not play action a9, because it is a dominant
strategy if a9 is not played; the same reasoning holds for the action pairs (a6, a10), (a7, a11) and (a8, a12). At
the same time, actions a13, a14, a15 and a16 of agent 2 rule out all Nash equilibria in which agent
1 does not play actions a1, a2, a3 and a4. The other utilities are set to a negative value to ensure
that the instance does not admit other Nash equilibria. The only Nash equilibrium admitted by
this game is the one in which agent 1 randomizes over a1, a2, a3 and a4 and agent 2 randomizes over a9, a10, a11 and
a12; it is also a strong Nash equilibrium, since no strategy profile Pareto dominates the
payoff (2.5, 2.5).
Table 5.3 is an example of a MISSING instance that does not admit Nash equilibria other
than the strong Nash equilibrium. To generate instances that also admit other Nash equilibria, additional
paddings are necessary.

                                              agent 2
                a9       a10      a11      a12      a13      a14      a15      a16
agent 1   a1   5,0      0,5      5,0      0,5     -5,-15   -5,5     -5,5     -5,5
          a2   0,5      5,0      0,5      5,0     -5,5    -5,-15    -5,5     -5,5
          a3   5,0      0,5      5,0      0,5     -5,5     -5,5    -5,-15    -5,5
          a4   0,5      5,0      0,5      5,0     -5,5     -5,5     -5,5    -5,-15
          a5  -15,-5    5,-5     5,-5     5,-5    -5,-5    -5,-5    -5,-5    -5,-5
          a6   5,-5   -15,-5     5,-5     5,-5    -5,-5    -5,-5    -5,-5    -5,-5
          a7   5,-5     5,-5   -15,-5     5,-5    -5,-5    -5,-5    -5,-5    -5,-5
          a8   5,-5     5,-5     5,-5   -15,-5    -5,-5    -5,-5    -5,-5    -5,-5

Table 5.3: Example game admitting only one Nash equilibrium with support 4

                          agent 2
                a5       a6       a7       a8
agent 1   a1   5,0      0,5     -5,-15   -5,-5
          a2   0,5      5,0     -5,-5   -5,-15
          a3  -15,-5   -5,-5    -5,-5    -5,-5
          a4  -5,-5   -5,-15    -5,-5    -5,-5

Table 5.4: Example game admitting only one Nash equilibrium with support 2

To add a Nash equilibrium which is not a strong Nash equilibrium to the game shown in Table 5.4,
we need to add the Nash equilibrium itself as well as a dominating strategy profile, which is not
a Nash equilibrium, so as to make sure the added Nash equilibrium is not a strong Nash equilibrium.
At the same time, to ensure that agents 1 and 2 randomizing over their respective first two actions
remains a strong Nash equilibrium, there should be no strategy profile in the newly padded area
that Pareto dominates the strong Nash equilibrium, whose payoff is (2.5, 2.5); this means that the
dominating strategy profile of the Nash equilibrium should not dominate the strong Nash
equilibrium. One example is Table 5.5.
                                   agent 2
                a7       a8       a9      a10      a11      a12
agent 1   a1   5,0      0,5     -5,-15   -5,-5    -5,-5    -5,-5
          a2   0,5      5,0     -5,-5   -5,-15    -5,-5    -5,-5
          a3  -15,-5   -5,-5    -5,-5    -5,-5    -5,-5    -5,-5
          a4  -5,-5   -5,-15    -5,-5    -5,-5    -5,-5     3,-5
          a5  -5,-5    -5,-5    -5,-5    -5,-5     1,10    -5,-5
          a6  -5,-5    -5,-5    -5,-5    -5,-5    -5,-5     2,11

Table 5.5: Example game admitting one Nash equilibrium and one strong Nash equilibrium with
support 2
It is also possible to add mixed Nash equilibria to the MISSING instances. One example is
Table 5.6: the game admits two Nash equilibria which are not strong Nash equilibria. To simplify
our implementation of the MISSING generator, we generate Nash equilibria with support
size 2 when a mixed Nash equilibrium is required. The social welfare of these Nash equilibria can
be greater than that of the strong Nash equilibrium (11 for the first Nash equilibrium) or smaller
(4.5 for the second Nash equilibrium).
According to the game structure discussed so far, the game size of each MISSING instance
would be

s = 2 ∗ ss + 2 ∗ p + 3 ∗ m    (5.1)

where:
ss is the size of the support of the strong Nash equilibrium;
p is the number of pure strategy Nash equilibria which are not strong Nash equilibria admitted
by the generated game;
                                                      agent 2
               a10      a11      a12      a13      a14      a15      a16      a17      a18
agent 1  a1   5,0      0,5     -5,-15   -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5
         a2   0,5      5,0     -5,-5   -5,-15    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5
         a3  -15,-5   -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5
         a4  -5,-5   -5,-15    -5,-5    -5,-5    -5,-5     3,-5    -5,-5    -5,-5    -5,-5
         a5  -5,-5    -5,-5    -5,-5    -5,-5     1,10    -5,-5    -5,-5    -5,-5    -5,-5
         a6  -5,-5    -5,-5    -5,-5    -5,-5    -5,-5     2,11    -5,-5    -5,-5     6,-5
         a7  -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5     8,0      0,1     -5,-5
         a8  -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5     0,1      8,0     -5,-5
         a9  -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5    -5,-5     5,1

Table 5.6: MISSING instance that admits mixed Nash equilibrium
m is the number of mixed strategy Nash equilibria with support 2 which are not strong Nash
equilibria admitted by the generated game.
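Equation (5.1) can be checked against the example tables; a trivial sketch:

```python
def missing_game_size(ss, p, m):
    # Equation (5.1): s = 2*ss + 2*p + 3*m actions per agent, where ss is
    # the support size of the strong NE, p the number of pure non-strong
    # NEs, and m the number of mixed (support-2) non-strong NEs.
    return 2 * ss + 2 * p + 3 * m

# Table 5.3: support-4 strong NE, no other equilibria -> 8 actions per agent.
print(missing_game_size(4, 0, 0))   # 8
# Table 5.6: support-2 strong NE, one pure and one mixed non-strong NE -> 9.
print(missing_game_size(2, 1, 1))   # 9
```

Both values match the number of actions per agent in the corresponding tables.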
This fact, however, does not prevent us to generate games whose size is larger, we can simply
add dummy actions with negative payoffs. In this way, we can require our generator to generate
games with a specific size. It is true that the padding area admits weakly Nash equilibria, which
means that the number of Nash equilibria of a MISSING instance with padding is greater than
p + m. However,those weakly Nash equilibria will never dominate the Nash equilibria we tend to
add in the game because of their low payoff.
Algorithm 5 MISSING
1: (s, W, U) ← initialize
2: U ← writeSNE(s)
3: U ← purifySNE(U)
4: repeat
5:     w ← remove(W)
6:     repeat
7:         (v1, v2, d1, d2, t, ε) ← generatePayoffs(w)
8:         ϑ ← verifyPayoffs(v1, v2, d1, d2, s, ε)
9:     until ϑ = false
10:    writePayoffs(v1, v2, d1, d2, t, ε)
11: until W is empty
12: U ← raiseToPositive(U)
13: return U
The MISSING instance generation algorithm is reported in Algorithm 5. The parameters of
the generation algorithm are:
• Size of the support of the strong Nash equilibrium per agent (zero indicates that there is
no strong Nash equilibrium in the generated game)
• Number of non–strong Nash equilibria (pure and mixed, generated randomly with a social
welfare that can be larger or smaller than the strong Nash equilibrium’s one)
• The output game size (which cannot be smaller than the value given by Equation 5.1)
At the beginning, a positive random number is generated as the payoff of each player when the strong Nash equilibrium is played, saved in s; a series of positive random numbers is generated as the social welfares of the Nash equilibria, saved in W; and the utility matrix U of the game is initialized with all elements equal to −s. This is done in the procedure initialize. Then, procedure writeSNE writes the part of the payoff matrix where the strong Nash equilibrium resides; in the example of Table 5.6, the corresponding entries are underlined. purifySNE updates the part of the utility matrix (marked in bold italic in the example of Table 5.6) to ensure that there is only one strong Nash equilibrium in the underlined region.
The payoffs of the players in the other Nash equilibria are generated according to the social welfares in W. generatePayoffs(w) generates the random payoffs of each player (v1, v2), the dominator (d1, d2), an additional number t to be put in the matrix of player 1 to ensure that the dominator is not a Nash equilibrium, and a small number ε used only for mixed Nash equilibria. verifyPayoffs(v1, v2, d1, d2, s, ε) returns true if (d1, d2) or (v1 + ε, v2 + ε) dominates (s, s); in this case generatePayoffs is called again, because the generated values cannot be added. After the generated payoffs pass the test in verifyPayoffs, writePayoffs(v1, v2, d1, d2, t, ε) writes the payoffs to the utility matrix (indicated with an overline in the example of Table 5.6); for mixed Nash equilibria, the written values are (v1 − ε, v2 + ε) and (v1 + ε, v2 − ε).
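As a concrete check of the support-2 construction, the 2x2 block of Table 5.6 on rows a16, a17 and columns a7, a8 yields the mixed Nash equilibrium with social welfare 4.5 mentioned above. A short sketch using the standard 2x2 indifference conditions (our own illustration, not code from the generator):

```python
import numpy as np

# 2x2 block of Table 5.6 (rows a16/a17, columns a7/a8)
A = np.array([[8.0, 0.0], [0.0, 8.0]])  # payoffs written for one player
B = np.array([[0.0, 1.0], [1.0, 0.0]])  # payoffs written for the other

# q = probability of the first column, chosen so the row player is
# indifferent between the two rows; p symmetrically for the columns.
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])

x = np.array([p, 1 - p])                # row player's mixed strategy
y = np.array([q, 1 - q])                # column player's mixed strategy
social_welfare = x @ A @ y + x @ B @ y  # 4.0 + 0.5 = 4.5
```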
Finally, the raiseToPositive(U ) procedure adds a number to all elements in the utility matrix
U to make them positive, before it is returned by the algorithm.
The positions of the entries to be updated by the procedure writePayoffs(v1, v2, d1, d2, t, ε) are controlled by some global variables not mentioned above. We place the entries for pure-strategy Nash equilibria first, then those for mixed strategies with support 2; the procedure knows how many pure and mixed-strategy Nash equilibria it has already written and calculates the position of the entry that should be updated.
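The rejection loop of lines 6–9 of Algorithm 5 can be sketched as follows (the way the welfare w is split between the players and the dominator offsets are hypothetical choices of ours; the thesis does not fix them):

```python
import random

def dominates(a, b):
    # Pareto domination between payoff pairs: at least as good in
    # both components and strictly better in at least one.
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def generate_payoffs(w):
    # Split the social welfare w randomly between the two players and
    # draw a candidate dominator for the new equilibrium entry.
    v1 = random.uniform(0.0, w)
    v2 = w - v1
    d1 = v1 + random.uniform(0.0, 1.0)
    d2 = v2 - random.uniform(0.0, 1.0)
    return v1, v2, d1, d2

def draw_acceptable_payoffs(w, s, eps=0.01):
    # Redraw until verifyPayoffs would return false, i.e. neither the
    # dominator nor the eps-perturbed payoffs dominate (s, s).
    while True:
        v1, v2, d1, d2 = generate_payoffs(w)
        if not (dominates((d1, d2), (s, s)) or
                dominates((v1 + eps, v2 + eps), (s, s))):
            return v1, v2, d1, d2
```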
5.2.3 Generation configurations
The tests on GAMUT instances are based on 20 instances of 2–player games per GAMUT game class, with a number of actions per agent in {25, 50}. The tests on MISSING instances are based on 20 instances per combination of the following parameters:
• The game size m1 = m2 = m from 10 to 100 with step 5 (when admitted by Equation 5.1)
• The support of the strong Nash equilibrium per player ∣supp∣ ∈ {0, 2, 4, 6, 8, 10}, where ∣supp∣ = 0 means that there is no strong Nash equilibrium
• The number of non–strong Nash equilibria ∈ {0, 3, 6, 9, 12}, half pure and half mixed (in the cases of 3 and 9, there is one more pure than mixed)
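The resulting test grid can be enumerated as below (a sketch; `missing_configurations` is our own name, and the pure/mixed split follows the rule stated above):

```python
from itertools import product

def missing_configurations():
    # Enumerate (size, support, #pure NE, #mixed NE) combinations,
    # skipping those ruled out by Equation 5.1.
    sizes = range(10, 101, 5)
    supports = (0, 2, 4, 6, 8, 10)
    non_strong = (0, 3, 6, 9, 12)      # half pure, half mixed
    for m, supp, k in product(sizes, supports, non_strong):
        pure = (k + 1) // 2            # one more pure when k is odd
        mixed = k // 2
        if 2 * supp + 2 * pure + 3 * mixed <= m:   # Equation 5.1
            yield m, supp, pure, mixed
```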
5.3 Test results
The test data we report for GAMUT classes are:
• The average aggregated time spent for the algorithm to terminate
• The average aggregated time spent for the algorithm to terminate when a strong Nash
equilibrium is found
• The average aggregated time spent for the algorithm to terminate when no strong Nash
equilibria are found
In the tests of Iterated MIP 2StrongNash, we made an additional check on whether there are some GAMUT classes that admit only mixed-strategy strong Nash equilibria.
The test data we report for MISSING instances are:
• The average aggregated time spent for the algorithm to terminate
• The number of iterations (for branch and bound, we report number of AMPL calls and
number of verification calls separately)
• The average computation time for each AMPL call
• The percentage of time spent for Pareto verification
The full data are too extensive to report here; we leave them to the appendix. Here we present some summaries, with respect to the objectives pointed out in Section 5.1.
• The most difficult GAMUT classes for Algorithm 3 and Iterated MIP 2StrongNash are CovariantGame, RandomGame, GraphicalGame and PolymatrixGame, while the four most difficult classes for Iterated Box MIP 2StrongNash are BertrandOlygopoly, CovariantGame, GraphicalGame and PolymatrixGame. The percentages of instances in which a strong Nash equilibrium is found are consistent across the three algorithms, which supports their correctness. Only TravelersDilemma and BertrandOlygopoly instances never admit strong Nash equilibria, whereas, for the other classes, the percentage of instances admitting a strong Nash equilibrium is high on average (for many classes, all the generated instances admit strong Nash equilibria). It is also clear that, for both algorithms, finding a strong Nash equilibrium when it exists is significantly faster than certifying that it does not exist. Iterated Box MIP 2StrongNash is faster than Algorithm 3 in almost all the classes, except BertrandOlygopoly and LocationGame. However, the original Iterated MIP 2StrongNash is slower than Algorithm 3 in most cases. This is because the solution space of the MILP in Iterated MIP 2StrongNash is larger when invoked: it searches over all strategies whose expected utilities for players 1 and 2 satisfy (EU1, EU2) ∈ R2, while the state condition of Algorithm 3 limits the search to a box.
• According to the mixed-strategy check on Iterated MIP 2StrongNash, only on WarOfAttrition does the algorithm return mixed strategies. It was also verified that the instances for which the algorithm found a mixed-strategy strong Nash equilibrium also admit a pure-strategy strong Nash equilibrium. This fact motivated us to develop MISSING instances that admit only mixed-strategy strong Nash equilibria.
• Some results obtained for MISSING instances are noisy, especially in the cases where the number of Nash equilibria that are not strong Nash equilibria is not zero. The reason is that we assign random payoffs to those Nash equilibria in the instances. Moreover, the weak Nash equilibria in the padding area may also affect the results. Nevertheless, it is verified that all our algorithms find the strong Nash equilibrium successfully, if one exists.
• Algorithm 3 behaves better with init2 than with init1 (a 1.5x–2x speedup when a strong Nash equilibrium exists, an 8x speedup when it does not). The decreased time spent is mainly due to the decreased number of states explored (number of AMPL calls) and the decreased number of verification calls. The time spent for each MILP AMPL call is around 0.4 s under almost all circumstances; the reason is that the box constraints of the states make the MILP problem easy to solve for CPLEX, so the time measured is largely due to the overhead of the system call. More states are explored as the number of Nash equilibria in the instance increases, which means the algorithm needs more time when the number of non–strong Nash equilibria increases; this is reasonable, since the strong Nash equilibrium is harder to find in the presence of more Nash equilibria.
• Comparing the different modes of Iterated MIP 2StrongNash and Iterated Box MIP 2StrongNash, the most significant configuration is social welfare maximization. We can see a decrease in the number of iterations from mode 0 to mode 1. However, social welfare maximization at the same time negatively affects the compute time per iteration spent by MIP Nash (especially in the case of a large support of the strong Nash equilibrium). It can be observed that the lower bound over the social welfare, when combined with social welfare maximization, does not provide significant improvements on the used instances; neither does the change between init1 and init2. The graph of the average computation time per AMPL call indicates that the largest increase in NE–finding compute time with respect to the strong Nash equilibrium support occurs in the iteration in which the strong Nash equilibrium is returned, while in the other iterations the increase is smaller. This is because the strong Nash equilibrium has a large support and is therefore harder to find, whereas the other, non–strong Nash equilibria have small supports. Surprisingly, among modes 0 to 3, mode 0 seems to be the best in the case of a large-support strong Nash equilibrium and a large number of non-strong Nash equilibria. The main reason is that in mode 0 almost all AMPL calls return in a relatively short time, while in the other modes the AMPL problem is more difficult for the solver, which outweighs the fact that in those modes the number of iterations is lower.
• We can see, for all the algorithms, that for large-size games most of the time is spent in the verification process. This shows the need for developing techniques to speed up the strong Nash equilibrium verification algorithm.
• It is reasonable that Iterated Box MIP 2StrongNash is never slower than Iterated MIP 2StrongNash. Most of the time saved by the former is in the AMPL calls. Combined with the results on the GAMUT classes, we can conclude that it is better to always enable the box conditions over the utilities of the players.
• Comparing the results of modes 0, 1, 4 and 5 for Iterated Box MIP 2StrongNash, mode 0 is best when the support size of the strong Nash equilibrium is small, while in the opposite case mode 5 outperforms the others. There is no evidence that the average time spent by each AMPL call in mode 5 changes with the support size of the strong Nash equilibrium. Unlike init2 in Algorithm 3, however, in mode 4 we do not observe a decrease in the number of iterations.
5.4 PNS adoption for the branch and bound algorithm
As mentioned in Section 4.1, we can adopt a different Nash-finding oracle in the branch and bound algorithm; an alternative to MILP is PNS.
To add the constraints Vs of the states to the feasibility problem (2.1)–(2.5) solved by PNS, we put constraints on vi, with lower bound li and upper bound ui:

vi ≥ li    ∀i ∈ N    (5.2)
vi ≤ ui    ∀i ∈ N    (5.3)
The reason we can do this is that the expected utilities of all actions played in a mixed-strategy Nash equilibrium must be equal: vi represents both the expected utility of the actions in the support of the Nash equilibrium and the expected utility of player i when the Nash equilibrium is played.
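Constraints (5.2)–(5.3) therefore reduce to a box on the players' expected utilities; a state can discard a candidate profile with a check like the following (a sketch with our own names, not the PNS implementation):

```python
import numpy as np

def satisfies_state_bounds(A, B, x, y, lo, hi, tol=1e-9):
    # Since all support actions of a mixed Nash equilibrium earn the
    # same expected utility v_i, bounding v_i by the state's box
    # [l_i, u_i] is equivalent to constraints (5.2)-(5.3).
    v1 = x @ A @ y      # expected utility of player 1
    v2 = x @ B @ y      # expected utility of player 2
    return (lo[0] - tol <= v1 <= hi[0] + tol and
            lo[1] - tol <= v2 <= hi[1] + tol)
```

For instance, the uniform support-2 profile of the 2x2 block of Table 5.6 has expected utilities (4, 0.5), so it survives a state box [0, 5] x [0, 1] but is pruned by a box [5, 9] x [1, 2].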
The adoption of PNS as the NE-finding oracle in Algorithm 3 is not satisfactory. In most cases, the algorithm does not finish within 1 hour on a single instance of size 25x25. The reason is that PNS scans the supports from smallest to largest, trying to find a Nash equilibrium within a specific support. This is good when a Nash equilibrium with small support exists (as for most GAMUT classes). However, in Algorithm 3 we have a sequence of states and, according to the tests we have already performed, about half of the states do not admit Nash equilibria (the number of verification calls is about half the number of AMPL calls). In these states, PNS is forced to enumerate all the supports. The adoption would pay off only if the algorithm found a strong Nash equilibrium through a sequence of Nash equilibrium finding sub-problems (or states), each admitting a Nash equilibrium with small support, which never occurs in practice.
6 Conclusions and future work
In this thesis we have addressed the problem of finding a strong Nash equilibrium in general strategic games. The first approach was to develop different formulations for strong Nash equilibrium, trying to solve the problem with standard algorithms or a solver. We have divided the problem of finding a strong Nash equilibrium into the problem of finding a Nash equilibrium and the problem of calculating the Pareto frontier. We have introduced NLCP and MILP formulations for Nash equilibrium, and a necessary and a sufficient formulation for the Pareto frontier.
As the exact formulation of the Pareto frontier is difficult to obtain, we turned to the verification of Pareto optimality. If a Nash equilibrium is Pareto efficient for all possible coalitions, it is also a strong Nash equilibrium. We have developed Algorithm 3 based on this idea, and then proposed the more compact Algorithm 4. We have implemented the algorithms in the 2–player setting and tested them in different situations.
Although the problem of finding a strong Nash equilibrium in games with 3 or more players still needs discussion, our algorithms work well and solve instances of 2-player games in acceptable time. The test results on MISSING instances showed us some properties of the algorithms, despite the noise introduced by the random payoffs.
Remaining open questions include the following:
• Given a game with an arbitrary number of agents, is it possible to find better formulations of the strong Nash equilibrium finding problem with a finite set of constraints? Can we find a formulation that is both sufficient and necessary?
• How can the general case of the quantifier elimination algorithm in [2] be implemented? What would be the performance of our algorithm applied to 3 or more players, if the algorithm in [2] were implemented and adapted to our algorithm?
• Can we improve the MISSING class in order to obtain more stable results? Can we combine GAMUT classes with MISSING instances in some way, to obtain a better evaluation of our algorithms under different conditions?
Bibliography
[1] R. J. Aumann. Acceptable points in general cooperative n-person games. In R. D. Luce
and A. W. Tucker, editors, Contribution to the theory of game IV, Annals of Mathematical
Study 40, pages 287–324. University Press, 1959.
[2] S. Basu, R. Pollack, and M.F. Coste-Roy. Algorithms in Real Algebraic Geometry. Algorithms and Computation in Mathematics. Springer, 2006.
[3] D.P. Bertsekas. Nonlinear programming. Athena scientific optimization and computation
series. Athena Scientific, 1999.
[4] Sofia Ceppi, Nicola Gatti, Giorgio Patrini, and Marco Rocco. Local search techniques for
computing equilibria in two-player general-sum strategic-form games. In Proceedings of the
9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, AAMAS '10, pages 1469–1470, Richland, SC, 2010. International Foundation for
Autonomous Agents and Multiagent Systems.
[5] A.H. van den Elzen and A.J.J. Talman. Finding a nash equilibrium in noncooperative
n-person games by solving a sequence of linear stationary point problems. Research Memorandum 570, Tilburg University, Faculty of Economics and Business Administration, 1992.
[6] A. Epstein, M. Feldman, and Y. Mansour. Strong equilibrium in cost sharing connection
games. In ACM EC, pages 84–92, 2007.
[7] Uriel Feige and Inbal Talgam-Cohen. A direct reduction from k-player to 2-player approximate nash equilibrium. In Proceedings of the Third international conference on Algorithmic
game theory, SAGT’10, pages 138–149, Berlin, Heidelberg, 2010. Springer-Verlag.
[8] R. Fletcher. Practical methods of optimization; (2nd ed.). Wiley-Interscience, New York,
NY, USA, 1987.
[9] Nicola Gatti, Giorgio Patrini, Marco Rocco, and Tuomas Sandholm. Combining local search
techniques and path following for bimatrix games. In Nando de Freitas and Kevin P. Murphy,
editors, UAI, pages 286–295. AUAI Press, 2012.
[10] L. Gourvès and J. Monnot. On strong equilibria in the max cut game. In WINE, pages
608–615, 2009.
[11] Srihari Govindan and Robert Wilson. A global newton method to compute nash equilibria.
J. Economic Theory, 110(1):65–86, 2003.
[12] Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, Peter Bro Miltersen, and
Troels Bjerre Sørensen. Approximability and parameterized complexity of minmax values. In Christos H. Papadimitriou and Shuzhong Zhang, editors, WINE, volume 5385 of
Lecture Notes in Computer Science, pages 684–695. Springer, 2008.
[13] A. Hayrapetyan, E. Tardos, and T. Wexler. The effect of collusion in congestion games. In
STOC, pages 89–98, 2006.
[14] R. Holzman and N. Law-Yone. Strong equilibrium in congestion games. GAME ECON
BEHAV, 21:85–101, 1997.
[15] ILOG, Inc. ILOG CPLEX 10.2 Reference Manual and Software. Sunnyvale, CA, 2007.
[16] A X Jiang and K Leyton-Brown. A tutorial on the proof of the existence of nash equilibria.
Technical report, Department of Computer Science, University of British Columbia, 2009.
[17] C. E. Lemke. Bimatrix equilibrium points and mathematical programming. Management Science, 11(7):681–689, 1965.
[18] C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12(2):413–423, 1964.
[19] K. Leyton-Brown, E. Nudelman, J. Wortman, and Y. Shoham. Run the gamut: A comprehensive approach to evaluating game-theoretic algorithms. In Proceedings of AAMAS-04, 2004.
[20] John Nash. Non-cooperative games. Annals of Mathematics, 54(2):286–295, 1951.
[21] Rabia Nessah and Guoqiang Tian. On the existence of strong nash equilibria. Working
Papers 2009-ECO-06, IESEG School of Management, April 2009.
[22] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple search methods for finding a
nash equilibrium. In Proceedings of the 19th national conference on Artifical intelligence,
AAAI’04, pages 664–669. AAAI Press, 2004.
[23] O. Rozenfeld and M. Tennenholtz. Strong and correlated strong equilibria in monotone
congestion games. In WINE, pages 74–86, 2006.
[24] Thomas Sandholm, Andrew Gilpin, and Vincent Conitzer. Mixed-integer programming
methods for finding nash equilibria. In Proceedings of the 20th national conference on
Artificial intelligence - Volume 2, AAAI’05, pages 495–501. AAAI Press, 2005.
[25] L S Shapley and R N Snow. Basic solutions of discrete games. In In Contributions to the
Theory of Games, number 24 in Annals of Mathematics Studies, pages 27–35. Princeton
University Press, 1950.
[26] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and
Logical Foundations. Cambridge University Press, 2008.
[27] G. Van Der Laan, A. J. J. Talman, and L. Van Der Heyden. Simplicial variable dimension
algorithms for solving the nonlinear complementarity problem on a product of unit simplices
using a general labelling. Mathematics of Operations Research, 12(3):377–397, 1987.
Appendix
Experimental test results
class                  Y      time        time—Y      time—N
BertrandOlygopoly      0%     17.81 s     –           17.81 s
BidirectionalLEG–CG    100%   6.09 s      6.09 s      –
BidirectionalLEG–RG    95%    5.29 s      5.14 s      8.09 s
BidirectionalLEG-SG    100%   5.80 s      5.80 s      –
CovariantGame–Pos      100%   11.90 s     11.90 s     –
CovariantGame–Rand     40%    194.01 s    25.79 s     306.16 s
CovariantGame–Zero     60%    30.30 s     18.42 s     48.12 s
DispersionGame         100%   3.12 s      3.12 s      –
GraphicalGame–RG       35%    44.09 s     27.87 s     52.83 s
GraphicalGame–Road     50%    58.55 s     35.47 s     81.63 s
GraphicalGame–SG       60%    30.48 s     13.66 s     50.70 s
GraphicalGame–SW       50%    46.99 s     45.90 s     48.09 s
LocationGame           100%   3.25 s      3.25 s      –
MinimumEffortGame      100%   53.28 s     53.28 s     –
PolymatrixGame–CG      55%    38.92 s     32.88 s     46.29 s
PolymatrixGame–RG      40%    57.37 s     39.49 s     69.28 s
PolymatrixGame–Road    80%    59.65 s     33.97 s     162.37 s
PolymatrixGame–SW      55%    43.81 s     54.18 s     31.15 s
RandomGame             65%    41.87 s     33.03 s     58.29 s
TravelersDilemma       0%     265.96 s    –           265.96 s
UniformLEG–CG          85%    9.21 s      8.01 s      16.00 s
UniformLEG–RG          100%   8.71 s      8.71 s      –
UniformLEG–SG          100%   8.00 s      8.00 s      –
WarOfAttrition         100%   4.89 s      4.89 s      –
Table A.1: Results of Algorithm 3 with Nash equilibrium finding oracle MIP Nash with 25x25 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).
class                  Y      time         time—Y       time—N
BertrandOlygopoly      0%     48.25 s      –            48.25 s
BidirectionalLEG–CG    100%   13.21 s      13.21 s      –
BidirectionalLEG–RG    95%    15.10 s      13.23 s      50.67 s
BidirectionalLEG-SG    100%   9.95 s       9.95 s       –
CovariantGame–Pos      100%   29.53 s      29.53 s      –
CovariantGame–Rand     40%    1286.48 s    94.94 s      2742.80 s
CovariantGame–Zero     60%    12254.19 s   25365.43 s   3513.36 s
DispersionGame         100%   6.87 s       6.87 s       –
GraphicalGame–RG       35%    3286.89 s    3198.38 s    3334.54 s
GraphicalGame–Road     50%    7654.42 s    15194.15 s   3594.56 s
GraphicalGame–SG       60%    1007.94 s    1164.12 s    773.67 s
GraphicalGame–SW       50%    279.35 s     259.24 s     299.45 s
LocationGame           100%   5.86 s       5.86 s       –
MinimumEffortGame      100%   84.85 s      84.85 s      –
PolymatrixGame–CG      55%    705.95 s     1016.71 s    395.20 s
PolymatrixGame–RG      40%    1485.11 s    2223.95 s    376.86 s
PolymatrixGame–Road    80%    3767.87 s    5294.01 s    1902.58 s
PolymatrixGame–SW      55%    6774.05 s    269.81 s     9561.58 s
RandomGame             65%    7702.85 s    1697.59 s    12616.25 s
TravelersDilemma       0%     580.06 s     –            580.06 s
UniformLEG–CG          85%    15.75 s      14.61 s      18.40 s
UniformLEG–RG          100%   11.81 s      11.81 s      –
UniformLEG–SG          100%   8.83 s       8.83 s       –
WarOfAttrition         100%   7.35 s       7.35 s       –
Table A.2: Results of Algorithm 3 with Nash equilibrium finding oracle MIP Nash with 50x50 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).
class                  Y      time        time—Y     time—N
BertrandOlygopoly      0%     12.07 s     –          12.07 s
BidirectionalLEG–CG    100%   4.60 s      4.60 s     –
BidirectionalLEG–RG    95%    5.12 s      4.99 s     7.65 s
BidirectionalLEG-SG    100%   4.72 s      4.72 s     –
CovariantGame–Pos      100%   7.07 s      7.07 s     –
CovariantGame–Rand     40%    204.54 s    13.10 s    332.18 s
CovariantGame–Zero     60%    22.78 s     13.45 s    36.76 s
DispersionGame         100%   4.56 s      4.56 s     –
GraphicalGame–RG       35%    36.27 s     12.70 s    48.97 s
GraphicalGame–Road     50%    41.89 s     12.77 s    71.00 s
GraphicalGame–SG       60%    20.43 s     10.15 s    35.85 s
GraphicalGame–SW       50%    33.30 s     17.71 s    48.90 s
LocationGame           100%   4.46 s      4.46 s     –
MinimumEffortGame      100%   12.31 s     12.31 s    –
PolymatrixGame–CG      55%    25.70 s     12.92 s    41.33 s
PolymatrixGame–RG      40%    30.92 s     12.49 s    43.20 s
PolymatrixGame–Road    80%    18.04 s     10.89 s    46.66 s
PolymatrixGame–SW      55%    23.14 s     14.77 s    33.37 s
RandomGame             65%    27.00 s     11.60 s    55.59 s
TravelersDilemma       0%     97.40 s     –          97.40 s
UniformLEG–CG          85%    7.64 s      7.04 s     11.05 s
UniformLEG–RG          100%   6.92 s      6.92 s     –
UniformLEG–SG          100%   5.00 s      5.00 s     –
WarOfAttrition         100%   4.93 s      4.93 s     –
Table A.3: Results of Iterated Box MIP 2StrongNash with 25x25 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).
class                  Y      time        time—Y     time—N
BertrandOlygopoly      0%     2802.40 s   –          2802.40 s
BidirectionalLEG–CG    100%   9.32 s      9.32 s     –
BidirectionalLEG–RG    95%    12.14 s     7.91 s     92.33 s
BidirectionalLEG-SG    100%   7.98 s      7.98 s     –
CovariantGame–Pos      100%   15.43 s     15.43 s    –
CovariantGame–Rand     40%    955.09 s    17.38 s    2101.17 s
CovariantGame–Zero     60%    415.75 s    58.61 s    653.84 s
DispersionGame         100%   6.38 s      6.38 s     –
GraphicalGame–RG       35%    151.79 s    38.67 s    212.70 s
GraphicalGame–Road     50%    215.71 s    51.07 s    304.36 s
GraphicalGame–SG       60%    185.49 s    76.47 s    349.03 s
GraphicalGame–SW       50%    224.10 s    58.22 s    389.98 s
LocationGame           100%   6.56 s      6.56 s     –
MinimumEffortGame      100%   19.50 s     19.50 s    –
PolymatrixGame–CG      55%    225.52 s    70.87 s    380.17 s
PolymatrixGame–RG      40%    103.48 s    59.72 s    169.12 s
PolymatrixGame–Road    80%    125.67 s    39.67 s    230.78 s
PolymatrixGame–SW      55%    185.56 s    19.29 s    256.82 s
RandomGame             65%    196.70 s    45.84 s    320.13 s
TravelersDilemma       0%     111.48 s    –          111.48 s
UniformLEG–CG          85%    10.56 s     9.89 s     12.12 s
UniformLEG–RG          100%   9.28 s      9.28 s     –
UniformLEG–SG          100%   6.97 s      6.97 s     –
WarOfAttrition         100%   6.75 s      6.75 s     –
Table A.4: Results of Iterated Box MIP 2StrongNash with 50x50 GAMUT instances: instances admitting SNEs (Y), average compute time (time), average compute time when there is a strong Nash equilibrium (time—Y), average compute time when there is no strong Nash equilibrium (time—N).
class                  Y      mY    omY   time       time—Y    time—N
BertrandOlygopoly      0%     0%    0%    10.8 s     –         10.8 s
BidirectionalLEG–CG    100%   0%    0%    5.1 s      5.1 s     –
BidirectionalLEG–RG    95%    0%    0%    5.4 s      5.3 s     8.6 s
BidirectionalLEG-SG    100%   0%    0%    4.5 s      4.5 s     –
CovariantGame–Pos      100%   0%    0%    5.7 s      5.7 s     –
CovariantGame–Rand     40%    0%    0%    254.9 s    10.2 s    418.0 s
CovariantGame–Zero     60%    0%    0%    38.7 s     13.8 s    75.9 s
DispersionGame         100%   0%    0%    6.2 s      6.2 s     –
GraphicalGame–RG       35%    0%    0%    53.6 s     15.7 s    74.1 s
GraphicalGame–Road     50%    0%    0%    43.8 s     13.5 s    74.1 s
GraphicalGame–SG       60%    0%    0%    28.3 s     13.7 s    50.2 s
GraphicalGame–SW       50%    0%    0%    43.0 s     14.1 s    71.8 s
LocationGame           100%   0%    0%    3.4 s      3.4 s     –
MinimumEffortGame      100%   0%    0%    5.1 s      5.1 s     –
PolymatrixGame–CG      55%    0%    0%    28.9 s     12.2 s    49.2 s
PolymatrixGame–RG      40%    0%    0%    46.9 s     12.7 s    69.7 s
PolymatrixGame–Road    80%    0%    0%    17.9 s     11.8 s    42.3 s
PolymatrixGame–SW      55%    0%    0%    23.3 s     10.7 s    38.8 s
RandomGame             65%    0%    0%    32.0 s     11.2 s    70.6 s
TravelersDilemma       0%     0%    0%    17.9 s     –         17.9 s
UniformLEG–CG          85%    0%    0%    6.5 s      5.4 s     12.4 s
UniformLEG–RG          100%   0%    0%    5.6 s      5.6 s     –
UniformLEG–SG          100%   0%    0%    3.8 s      3.8 s     –
WarOfAttrition         100%   45%   0%    6.6 s      6.6 s     –
Table A.5: Results of Iterated MIP 2StrongNash with 25x25 GAMUT instances: instances admitting SNEs (Y), instances admitting mixed strategy SNE (mY), instances admitting only mixed strategy SNE (omY), compute time (time), compute time when there is an SNE (time—Y), compute time when there is no SNE (time—N).
class                  Y      mY    omY   time         time—Y      time—N
BertrandOlygopoly      0%     0%    0%    1963.6 s     –           1963.6 s
BidirectionalLEG–CG    100%   0%    0%    12.2 s       12.2 s      –
BidirectionalLEG–RG    95%    0%    0%    29.6 s       29.6 s      8.6 s
BidirectionalLEG-SG    100%   0%    0%    9.7 s        9.7 s       –
CovariantGame–Pos      100%   0%    0%    61.5 s       61.5 s      –
CovariantGame–Rand     55%    0%    0%    17701.5 s    53.9 s      39270.9 s
CovariantGame–Zero     40%    0%    0%    3734.1 s     1184.2 s    5434.1 s
DispersionGame         100%   0%    0%    17.6 s       17.6 s      –
GraphicalGame–RG       35%    0%    0%    6446.4 s     3395.1 s    8089.4 s
GraphicalGame–Road     35%    0%    0%    15247.7 s    495.0 s     23191.4 s
GraphicalGame–SG       60%    0%    0%    10217.3 s    4029.5 s    19498.9 s
GraphicalGame–SW       50%    0%    0%    6529.3 s     1116.5 s    11942.1 s
LocationGame           100%   0%    0%    6.5 s        6.5 s       –
MinimumEffortGame      100%   0%    0%    22.3 s       22.3 s      –
PolymatrixGame–CG      45%    0%    0%    3860.0 s     892.9 s     6287.6 s
PolymatrixGame–RG      55%    0%    0%    5004.3 s     4668.0 s    5415.4 s
PolymatrixGame–Road    45%    0%    0%    8091.9 s     1752.5 s    13278.6 s
PolymatrixGame–SW      25%    0%    0%    6950.3 s     3478.7 s    8107.5 s
RandomGame             45%    0%    0%    6998.0 s     6229.9 s    7628.2 s
TravelersDilemma       0%     0%    0%    101.7 s      –           101.7 s
UniformLEG–CG          75%    0%    0%    33.2 s       35.9 s      25.2 s
UniformLEG–RG          100%   0%    0%    11.2 s       11.2 s      –
UniformLEG–SG          100%   0%    0%    8.5 s        8.5 s       –
WarOfAttrition         100%   35%   0%    13.8 s       13.8 s      –
Table A.6: Results of Iterated MIP 2StrongNash with 50x50 GAMUT instances: instances admitting SNEs (Y), instances admitting mixed strategy SNE (mY), instances admitting only mixed strategy SNE (omY), compute time (time), compute time when there is an SNE (time—Y), compute time when there is no SNE (time—N).
[Figure: panels init ∈ {1, 2} × supp ∈ {2, 4, 6, 8, 10}; x-axis m1 = m2 (10–100), y-axis average compute time (s) (0–240).]
Figure A.1: Average compute time of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.
[Figure: panels init ∈ {1, 2} × supp ∈ {2, 4, 6, 8, 10}; x-axis m1 = m2 (10–100), y-axis avg. AMPL calls (0–40).]
Figure A.2: Average number of AMPL calls per instance of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.
[Figure: panels init ∈ {1, 2} × supp ∈ {2, 4, 6, 8, 10}; x-axis m1 = m2 (10–100), y-axis avg. verification calls (0–40).]
Figure A.3: Average number of verification calls per instance of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.
[Figure: panels init ∈ {1, 2} × supp ∈ {2, 4, 6, 8, 10}; x-axis m1 = m2 (10–100), y-axis % time for verification (0–1.0).]
Figure A.4: Average percentage of time taken by SNE verification of Algorithm 3 with MISSING instances: black solid line as no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey dotted line as 12 other NEs.
compute time per call (s)
init = 1 / supp = 4
init = 1 / supp = 2
2.0
2.0
1.6
1.6
1.6
1.2
1.2
1.2
0.8
0.8
0.8
0.4
0.4
0.4
0
0
0
20
40
60
80
100
0
0
20
compute time per call (s)
m1 = m2
init = 1 / supp = 8
60
80
100
0
20
40
60
80
100
m1 = m 2
init = 1 / supp = 10
2.0
1.6
1.6
1.2
1.2
0.8
0.8
0.4
0.4
0
0
20
40
60
80
100
0
20
m1 = m2
compute time per call (s)
40
m 1 = m2
2.0
0
40
60
80
100
m 1 = m2
init = 2 / supp = 2
init = 2 / supp = 4
init = 2 / supp = 6
2.0
2.0
2.0
1.6
1.6
1.6
1.2
1.2
1.2
0.8
0.8
0.8
0.4
0.4
0.4
0
0
0
20
40
60
80
100
0
0
20
m1 = m2
compute time per call (s)
init = 1 / supp = 6
2.0
40
60
80
100
m 1 = m2
init = 2 / supp = 8
0
20
40
60
80
100
m1 = m 2
init = 2 / supp = 10
2.0
2.0
1.6
1.6
1.2
1.2
0.8
0.8
0.4
0.4
0
0
0
20
40
60
m1 = m2
80
100
0
20
40
60
80
100
m 1 = m2
Figure A.5: Average compute time of MIP Nash per call of Algorithm 3 with MISSING instances: black solid line as
no other NE, black dashed line as 3 other NEs, black dotted line as 6 other NEs, grey solid line as 9 other NEs, grey
dotted line as 12 other NEs.
[Figure A.6 panels: init = 1, 2 with supp = 0; y-axes: average compute time (s), avg. AMPL calls, avg. verification calls, % time for verification, compute time per call (s); x-axis: m1 = m2.]
Figure A.6: Average results obtained with Algorithm 3 with MISSING instances in which there is no SNE: black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs.
[Figure A.7 panels: mode = 0–3, supp = 2, 4, 6; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.7: Average compute time of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.8 panels: mode = 0–3, supp = 8, 10; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.8: Average compute time of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.9 panels: mode = 0–3, supp = 2, 4, 6; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.9: Average number of iterations of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.10 panels: mode = 0–3, supp = 8, 10; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.10: Average number of iterations of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.11 panels: mode = 0–3, supp = 2, 4, 6; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.11: Average compute time of MIP Nash per iteration of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.12 panels: mode = 0–3, supp = 8, 10; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.12: Average compute time of MIP Nash per iteration of Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.13 panels: mode = 0–3, supp = 2, 4, 6; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.13: Average percentage of time taken by SNE verification in Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.14 panels: mode = 0–3, supp = 8, 10; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.14: Average percentage of time taken by SNE verification in Iterated MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.15 panels: mode = 0–3; y-axes: average compute time (s), average iterations, average time per iteration (s), % time for verification; x-axis: m1 = m2.]
Figure A.15: Average results obtained with Iterated MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs.
[Figure A.16 panels: mode = 0–3, supp = 2, 4, 6; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.16: Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.17 panels: mode = 0–3, supp = 8, 10; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.17: Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.18 panels: mode = 0, 1, 4, 5, supp = 2, 4, 6; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.18: Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 3).
[Figure A.19 panels: mode = 0, 1, 4, 5, supp = 8, 10; y-axis: average compute time (s); x-axis: m1 = m2.]
Figure A.19: Average compute time of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 4).
[Figure A.20 panels: mode = 0–3, supp = 2, 4, 6; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.20: Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.21 panels: mode = 0–3, supp = 8, 10; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.21: Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.22 panels: mode = 0, 1, 4, 5, supp = 2, 4, 6; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.22: Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 3).
[Figure A.23 panels: mode = 0, 1, 4, 5, supp = 8, 10; y-axis: average iterations; x-axis: m1 = m2.]
Figure A.23: Average number of iterations of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 4).
[Figure A.24 panels: mode = 0–3, supp = 2, 4, 6; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.24: Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.25 panels: mode = 0–3, supp = 8, 10; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.25: Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.26 panels: mode = 0, 1, 4, 5, supp = 2, 4, 6; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.26: Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 3).
[Figure A.27 panels: mode = 0, 1, 4, 5, supp = 8, 10; y-axis: compute time per iteration (s); x-axis: m1 = m2.]
Figure A.27: Average compute time of MIP Nash per iteration of Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 4).
[Figure A.28 panels: mode = 0–3, supp = 2, 4, 6; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.28: Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.29 panels: mode = 0–3, supp = 8, 10; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.29: Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).
[Figure A.30 panels: mode = 0, 1, 4, 5, supp = 2, 4, 6; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.30: Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 3).
[Figure A.31 panels: mode = 0, 1, 4, 5, supp = 8, 10; y-axis: % time for verification; x-axis: m1 = m2.]
Figure A.31: Average percentage of time taken by SNE verification in Iterated Box MIP 2StrongNash with MISSING instances: black solid line: no other NE; black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 4).
[Figure A.32 panels: mode = 0–3; y-axes: average compute time (s), average iterations, average time per iteration (s), % time for verification; x-axis: m1 = m2.]
Figure A.32: Average results obtained with Iterated Box MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 1).
[Figure A.33 panels: mode = 0, 1, 4, 5; y-axes: average compute time (s), average iterations, average time per iteration (s), % time for verification; x-axis: m1 = m2.]
Figure A.33: Average results obtained with Iterated Box MIP 2StrongNash with MISSING instances in which there is no SNE: black dashed line: 3 other NEs; black dotted line: 6 other NEs; grey solid line: 9 other NEs; grey dotted line: 12 other NEs (Part 2).