Organizational Behavior and Human Decision Processes 120 (2013) 142–153
The price of anarchy in social dilemmas: Traditional research paradigms
and new network applications
Vincent Mak a, Amnon Rapoport b
a Cambridge Judge Business School, University of Cambridge, Trumpington Street, Cambridge CB2 1AG, United Kingdom
b Department of Management and Marketing, A. Gary Anderson Graduate School of Management, University of California at Riverside, Riverside, CA 92521, United States
Article history:
Received 7 November 2011
Accepted 16 June 2012
Keywords:
Social dilemmas
Price of anarchy
Transportation and communication networks
Abstract
Research on social dilemmas has largely been concerned with whether, and under what conditions, selfish
decisions by autonomous individuals jointly result in socially inefficient outcomes. By contrast, considerably less emphasis has been placed on the extent of the inefficiency in those outcomes relative to the
social optimum, and how the extent of inefficiency in theory compares with what is observed in experiments or practice. In this expository article, we introduce and subsequently extend the price of anarchy
(PoA), an index that originated in studies on communication in computer science, and illustrate how it
can be used to characterize the extent of inefficiency in social dilemmas. A second purpose of our article
is to introduce a class of social dilemmas that occur when individuals selfishly choose routes in networks,
and illustrate how the concept of PoA can be helpful in studying them.
© 2012 Elsevier Inc. All rights reserved.
Introduction
In this paper, we introduce an index of inefficiency in social
dilemmas called the price of anarchy (PoA). Our primary aim is
to place the extent of inefficiency on the agenda of experimental research on social dilemmas, as this area of research has largely been
preoccupied with whether socially inefficient outcomes occur and
what factors influence the probability of their occurrence. To
achieve our aim, we illustrate the application of PoA in a number
of examples; some of these (like the classic public goods game) will
be familiar to researchers on social dilemmas, while others involve
route choices in networks that might be less familiar. We hope that
our introductory exposition will engender new research on social
dilemmas in networks.
But what are social dilemmas? In his path-breaking review of
social dilemmas – a cornerstone in the development of research
in this field – Dawes (1980) defined a social dilemma as an interactive decision making situation that satisfies two properties. First,
the individual payoff for each agent who chooses to defect is
strictly higher than the payoff for choosing to cooperate, no matter
what choices are made by the other members of his group. The second property mandates that members of the population gain a
higher payoff if all cooperate than if all defect. In the language of
game theory, Dawes has opted to define social dilemmas as non-
cooperative one-shot n-person games in which (i) all the n players
have strictly dominant strategies that (ii) collectively result in a socially inefficient (Nash) equilibrium.2 Tailored to the paradigmatic
n-person Prisoner’s Dilemma (PD) game (e.g., Rapoport, 1970), the
definition has several drawbacks. First, it does not allow for games
with mixed-strategy equilibria, which are progressively more often
included in the ever expanding scope of social dilemma research.
Second, it unnecessarily restricts the scope of social dilemma research by excluding games with multiple equilibria.
Even more restrictive is the requirement that all the n players
have strictly dominant strategies. Historically, this restriction has
been relaxed in subsequent reviews of social dilemma research
by Messick and Brewer (1983), Kollock (1998), and several experimental studies. In particular, Kollock offers a more general definition of social dilemmas as (p. 183): ‘‘. . .situations in which
individual rationality leads to collective irrationality. That is, individually reasonable behavior leads to a situation in which everyone
is worse off than they might have been otherwise.’’ Kollock argues
that, for example, the 2 × 2 Assurance game,3 in which neither
player has a dominant strategy, ‘‘is a more accurate model than
the Prisoner’s Dilemmas Game of many social dilemmas situations’’
(p. 187). And when discussing social dilemmas that satisfy the two
properties listed by Dawes, he wrote (p. 185): ‘‘However, not all social dilemmas involve dominating strategies.’’
2 See the section ‘‘The price of anarchy’’ for a formal exposition of the idea of equilibrium.
3 The 2 × 2 Assurance game is a coordination game with two equilibria: mutual cooperation, which is socially optimal, and mutual defection, which is socially inefficient.
To add another important example, and illustrate how the field
of social dilemmas has been expanded and consequently enriched
in the last 30 years by relaxing the definition of Dawes, consider
an experiment that, ironically, was conducted by van de Kragt,
Orbell, and Dawes (1983) only 3 years after Dawes had published
his Annual Review of Psychology paper. The experiment concerns a
one-shot step-level public goods game. The game is played by a
group of n symmetric players with no pre-play communication.
Each player is endowed with e units (e > 0) and must decide independently and simultaneously either to contribute all of them to
the benefit of his group (i.e., the public good), or keep all of them
for himself. If m or more players (1 < m ≤ n) contribute their endowments,4 then each player i receives a reward of r units (e < r); if m − 1 or fewer players contribute, then the contributors lose their endowments, whereas the non-contributors keep theirs. It is easy to verify that if exactly m players contribute their endowments, so that everyone receives the reward, while the remaining n − m players (in cases when m < n) keep their endowments, the total group payoff is maximized. This outcome satisfies the second property listed by Dawes. However, in violation of the first property, universal contribution is no longer a dominant strategy. In fact, this ‘‘minimal contribution set’’ game has n!/[m!(n − m)!] asymmetric equilibria in pure
strategies in which exactly m players contribute. Additionally, the
game has another symmetric equilibrium in which no player contributes. Therefore, while the socially optimal outcome in a step-level
public goods game is an equilibrium, it is possible that a socially
inefficient outcome occurs, which is also an equilibrium. Indeed, this
has often been reported by experimental studies of this game or its
variants (see, e.g., Chen, Au, & Komorita, 1996; Mak & Zwick, 2010;
McCarter, Budescu, & Scheffran, 2011, among many others).
Our reading of the literature on social dilemmas in psychology
and economics suggests that cases like the step-level public goods
game are quite common. In fact, most research on social dilemmas
is motivated by the observation that self-interested behavior by
autonomous decision makers generally leads to inefficient
outcomes; the presence of dominant strategies is not specifically
required. Therefore, in the present paper, we take the broader view
of defining social dilemma situations as non-cooperative games
with socially inefficient outcomes. As such, social dilemmas
encompass vastly different ranges of situations, from the classic
context of exploitation of commons resource pools to overpopulation, deforestation, and congestion of traffic networks (as shall be
described later). Common to all these social dilemmas is one single
theme, namely, that there exists a ‘‘worst-case equilibrium’’ which
is socially inefficient.
Eliciting cooperation
If individual rationality is, in general, not a sufficient condition
for achieving collective rationality (e.g., Sandler, 1992), then what
proposals may be advanced for eliciting behavior that increases social welfare? This question is of immense importance because of the
critical role of social dilemmas in modern society. It has far reaching
policy and educational implications that have been studied in much
detail (see e.g., Komorita & Parks, 1994; Ostrom, 1990; Ostrom,
Gardner, & Walker, 1994). Alternative solutions to social dilemmas
have been reviewed and critically discussed by Dawes (1980), Messick and Brewer (1983), Van Lange, Liebrand, Messick, and Wilke
(1992), Kollock (1998), and others. They include motivational solutions in which some or all of the decision makers have other-regarding (e.g., altruistic) preferences, and strategic solutions that may or
may not assume changes in the fundamental structure of the game.
4 If m = 1, this game is known as the Volunteer’s Dilemma; it has different properties from what is discussed here.
For example, Dawes, McTavish, and Shaklee (1977) reported that
pre-play discussion of the dilemma significantly reduced the frequency of socially inefficient outcomes in n-person PD games. van
Dijk, de Kwaadsteniet, and De Cremer (2009) pointed out the need
for common understanding among players in facilitating coordination to achieve socially optimal outcomes. Brewer (1979) and Edney
(1980) suggested that cooperative solutions to social dilemmas may
be facilitated by exploiting social ties arising from social group
identity. Numerous studies (e.g., Isaac & Walker, 1988; Isaac,
Walker, & Thomas, 1984; Rapoport & Chammah, 1965) have demonstrated experimentally that changes in the payoff structure
may affect the frequency of socially efficient outcomes, such that
the greater the personal return from cooperation or the lower the
personal return from defection, the higher the level of cooperation.
Recent research highlights the impact of uncertainty about the payoff structure, rather than just its (expected) values, on cooperation
(e.g., McCarter, Rockmann, & Northcraft, 2010; van Dijk, Wit, Wilke,
& Budescu, 2004). In general, these solutions shy away from calling
for social designers to recommend which courses of action the
players should take or, in more controversial cases, for central
authorities to enforce such courses of actions. If norms of social
behavior are formed over time (e.g., in small communities), then
they are supposed to be established voluntarily.
Other solutions that originated in biology and computer science
have taken a distinctly more authoritarian perspective. In an influential article on the ‘‘tragedy of the commons’’, the biologist Hardin
(1968) concluded that ‘‘freedom in a commons brings ruin to all’’
and advocated ‘‘mutual coercion mutually agreed upon.’’ In computer science, where it is generally not the case that agents are
completely unrestricted, Roughgarden (2009) suggested that efficient joint outcomes ‘‘could be improved upon given dictatorial
control over everyone’s actions.’’ Others have been looking for a
middle ground between centrally enforced solutions and completely unregulated anarchy. For example, Anshelevich et al.
(2008) pointed out that agents using communication networks
interact with an underlying protocol that proposes a collective
solution to all the users who may individually either accept or reject it; as such, the protocol designers may at least seek to promote
the best possible equilibrium strategies in terms of total welfare
(see the Section ‘‘Extensions and generalizations’’).
The price of anarchy
Imposing changes in the payoff structure, conducting pre-play
communication, or establishing superordinate authority to control
everyone’s action, are almost always costly in terms of time and
money, often infeasible, and may frequently trigger socially undesirable reactions (e.g., a negative reaction among the group members to the infringement of their individual freedom). Therefore, a
key question is which proposal or combination of proposals to
implement (if any), and under what conditions in order to achieve
near-optimal outcomes. This question cannot be answered in practice without measuring the potential extent of inefficiency caused
by the behavior of independent, self-interested individuals. If the
extent of inefficiency, even in the worst scenario, is relatively
small, then the cost of implementing procedures to elicit cooperative behavior may exceed whatever gain in efficiency that could result. But if it is relatively large, then it might be worthwhile to
bring about conditions under which decentralized optimization
by selfish individuals is guaranteed to produce outcomes that are
near-optimal.
Three steps ought to be taken in order to answer this question.
The first is to choose a formal model that defines ‘‘the outcome of
selfish behavior.’’ The second is to define a measure of the efficiency of each outcome, often referred to as a welfare function.
The third is to define an index that quantifies the loss of efficiency
resulting from selfish and uncoordinated behavior.
Regarding the first step, the formal model most often chosen to
define ‘‘the outcome of selfish behavior’’ is the Nash equilibrium, of
which the Nash equilibrium in pure strategies is the simplest to
illustrate. A pure-strategy (Nash) equilibrium is defined as follows: each of n players in a non-cooperative game selects a (pure) strategy si from a set of strategies Si (i = 1, 2, . . ., n), with the resulting utility or payoff ui(s) of player i having as its argument the strategy profile s = {s1, s2, . . ., sn}. That is, the payoff of a player, in principle, depends on the strategies chosen by all the players. A strategy profile is called a pure-strategy equilibrium if no player can be better off by unilateral deviation, i.e., ui(s) ≥ ui(s′i, s−i) for every i and every s′i ∈ Si, where s−i denotes the strategies chosen by all the n players other than player i. It then follows that in equilibrium each strategy is a best response to the equilibrium strategies chosen by the other players. More generally, an equilibrium is defined over mixed
strategies, which is a broader notion than pure strategies. A mixed
strategy for player i is one in which i chooses over the strategies in
Si with a specific probability assigned to each strategy; if one of
these probabilities is 1, then the strategy becomes a pure strategy.
A mixed-strategy equilibrium is one in which a player cannot be
better off by deviating from his mixed strategy, given the mixed
strategies of all other players.
Regarding the second step, natural candidates for measuring
efficiency include the sum of the utilities of all the players (utilitarian function), the minimum utility across all the players (an egalitarian function), or any other function that is deemed meaningful
for the particular social dilemma situation under consideration.
The most commonly chosen function – the one that we use
throughout this paper – is the utilitarian function that is the sum
of all the players’ utilities.
The third step can be fulfilled by the notion of price of anarchy
(PoA), which is the focus of this paper. The notion was first developed by the computer scientists Koutsoupias and Papadimitriou
(1999), who proposed the quantification of the loss of efficiency
resulting from selfish behavior by the ratio between the social welfare of the worst-case equilibrium (i.e., the equilibrium with the
lowest social welfare among all equilibria) and that of the socially
optimal outcome (i.e., the joint outcome that maximizes social
welfare regardless of whether it is an equilibrium or not). Papadimitriou (2001) then named this index the price of anarchy. We
formally illustrate this concept as follows, but for simplicity limit
ourselves to pure-strategy outcomes; it is straightforward to generalize the formulation to include all mixed-strategy outcomes.
Consider a game played by a set of n players with strategy set Si
for each player i and well-defined utilities or payoffs ui(s)
(i = 1, 2, . . ., n) for each strategy profile s as discussed before; as
such, any joint outcome of the game can completely be defined
as a strategy profile. Denote by S the set of all possible strategy
profiles. One can then define, as suggested earlier in this section,
a measure of efficiency W(s) for each strategy profile (i.e., joint outcome) s, which is the total utility summed over the individual utilities of all the players in the outcome. We next identify a subset
E ⊆ S to be the set of strategy profiles that are pure-strategy equilibria. If the payoffs in all equilibria are positive, then the price of anarchy is defined by:

PoA = max_{s ∈ S} W(s) / min_{s ∈ E} W(s).
If the payoffs are framed as costs, then in measuring efficiency,
instead of social welfare to be maximized we should consider social cost (the total cost summed over the individual costs of all
the players) to be minimized. The price of anarchy is then defined
by:
PoA = max_{s ∈ E} C(s) / min_{s ∈ S} C(s),
where C(s) is the total cost summed over the costs of all players in
an outcome with strategy profile s. The PoA can similarly be defined
when mixed-strategy outcomes are also considered. Note that the
PoA is always defined so that it is larger than or equal to unity. As
can be intuited from the definitions, the higher the PoA, the more
‘‘severe’’ the social dilemma is.5
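To make the definition concrete, the following Python sketch (our own illustration, not part of the original article; all function and variable names are ours) enumerates the pure-strategy profiles of a small game, identifies the pure-strategy equilibria by checking every unilateral deviation, and returns the ratio of the best attainable welfare to the welfare of the worst equilibrium. The payoff numbers in the usage line anticipate the PD example discussed in the next section.

```python
from itertools import product

def pure_poa(payoffs, n_strategies):
    """Pure-strategy price of anarchy with payoffs framed as gains.

    payoffs: dict mapping a strategy profile (one entry per player) to the
             tuple of individual payoffs.
    n_strategies: number of strategies available to each player.
    """
    profiles = list(product(*[range(k) for k in n_strategies]))

    def is_equilibrium(s):
        # No player can strictly gain by a unilateral deviation.
        for i, k in enumerate(n_strategies):
            for dev in range(k):
                s_dev = s[:i] + (dev,) + s[i + 1:]
                if payoffs[s_dev][i] > payoffs[s][i]:
                    return False
        return True

    welfare = {s: sum(payoffs[s]) for s in profiles}          # utilitarian W(s)
    equilibria = [s for s in profiles if is_equilibrium(s)]   # the set E
    return max(welfare.values()) / min(welfare[s] for s in equilibria)

# Usage: the 2 x 2 PD with strategies 0 = cooperate, 1 = defect.
T, R, P, S = 5, 4, 2, 1
pd_game = {(0, 0): (R, R), (0, 1): (S, T), (1, 0): (T, S), (1, 1): (P, P)}
print(pure_poa(pd_game, [2, 2]))   # 2R / 2P = 2.0
```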
A social dilemma game has a well-defined PoA whenever players’ payoffs in all equilibria can be framed as all positive gains or all
costs relative to a natural reference point, so that the social welfare
measures in the worst-case equilibrium as well as the socially optimal outcome would have the same sign. Therefore, for example,
the application of the PoA is problematic in a 2 × 2 PD game in which mutual cooperation yields 15 units while mutual defection yields −15 units to each player. However, in many social dilemma
experiments, the condition of applicability is fulfilled as players
typically have positive payoffs even in the worst-case equilibrium
(such as keeping all their endowments in a public goods game).6 In
the social dilemmas in networks that are introduced in this article,
this condition is also fulfilled as payoffs can always be framed as
costs of travel.
The PoA has the desirable empirical property of being invariant
to re-scaling of the payoff units – measuring the payoffs in cents
rather than dollars would not change the PoA of a social dilemma
situation. Thus, while it is not necessarily the only possible index of
inefficiency in social dilemmas (for example, one might suggest
that the difference, rather than ratio, between welfare in the socially optimal outcome and the worse-case equilibrium could also
serve as a valid index), it is nevertheless a convenient and meaningful index; the PoA has, indeed, found many applications in algorithmic game theory (see e.g. Nisan, Roughgarden, Tardos, &
Vazirani, 2007) in problems as diverse as load balancing, traffic
routing, fair cost-sharing allocation, and network design.
To sum up, the PoA is an index that is applicable across many
social dilemma games and is useful in succinctly highlighting the
games’ welfare properties. As such, it is instructive to compare
the PoA with another example of constructing indices for social dilemma games, namely, the two indices of cooperation proposed by
Rapoport and Chammah (1965, chap. 1). Those indices are tailored
specifically to the 2 × 2 PD game and make use of all four payoffs in
the symmetric PD payoff matrix (see Fig. 1A). One of the indices is
arguably a satisfactory predictor of cooperation in Rapoport and
Chammah’s experimental data. However, both indices are restricted to the 2 2 PD game; they are not generalizable to other
n n PD games (n > 2) and certainly not to other social dilemmas.
On the other hand, the PoA allows comparison of different games
to one another, and not only variants of the same game that differ
from one another in their payoffs. Its generalizability is an advantage that Rapoport and Chammah’s indices do not have. Moreover,
Rapoport and Chammah’s indices were constructed purely as candidates for predicting experimental cooperation with no other substantive theoretical motivations behind them. By contrast, the PoA
has clear and general welfare motivations behind its construction,
so that the question of whether/how the PoA influences subjects’
perception of a social dilemma is intrinsically important, as we
shall state below.
5 Note that the extent of inefficiency is not the only dimension of the ‘‘severity’’ of a social dilemma. Kollock (1998, p. 185) mentions another dimension of severity, namely whether the strategies in the socially inefficient equilibrium are strictly dominant. The classic public goods game is among the most severe social dilemmas along this dimension.
6 On the other hand, in game-theoretic analysis final outcomes are stated as utilities (and not numerical values) that are unique up to an affine transformation. Therefore, the payoffs in a social dilemma may be transformed to be all positive in an experimental implementation with no substantive impact on the dilemma’s game-theoretic characteristics as derived from the expected utility approach.

Fig. 1. Payoff matrices for two 2 × 2 games exhibiting social dilemmas. In both cases, (C, C) is the socially optimal joint outcome.

A major purpose of the present paper is to extend the idea of
PoA to the field of experimental social dilemma research. We argue
that the idea, which captures the extent of inefficiency in social
dilemmas, has much potential in inspiring new experimental research, which has largely been preoccupied with whether socially
inefficient outcomes occur and what factors influence the probability of their occurrence. To begin with, the PoA can serve a descriptive purpose in objectively characterizing the ‘‘severity’’ of a social
dilemma in terms of the welfare change from the worst-case equilibrium to the socially optimal outcome. It may then inspire enquiries in experimental research such as:
(1) Is subjects’ psychological perception of a social dilemma
influenced by the dilemma’s PoA? Which aspects of that perception might be influenced? How might this relationship
depend on the specific payoff structure of the experimental
game, contextual factors in the experiment, or individual difference variables?
(2) If the PoA influences subjects’ perception of a social
dilemma, then how might that influence be translated to
subjects’ motivation to cooperate? Would an increase in
the PoA lead to a higher or lower motivation to cooperate
that is mediated by a change in the perception of the social
dilemma (see also the Discussion after Example 4 in the next
section)? Lastly, as with (1), how might this influence be
moderated by the payoff structure of the game, contextual
factors, or individual difference variables?
We do not venture to provide definite answers or formulate
specific hypotheses for these questions, but simply propose them
as major potential avenues for researchers to explore. We note that
research into these questions may have practical policy implications when deciding what steps to take (if at all) to decrease inefficiency in real-life social dilemmas.
Despite the apparent importance of these questions, experimental social dilemmas research in psychology and economics has paid
relatively little attention to the extent of inefficiency in social
dilemmas, be that the theoretical extent (as derived from game theory analysis) or the empirical extent (as observed in experiments).
The predominant dependent variable in experimental social dilemma research is the subject’s choice of strategy, such as the amount
of contribution in a public goods game; in step-level public goods
games, the frequency of the socially optimal outcome being attained (i.e., provision of the public good) is also a much-reported variable.
Quantification of welfare loss, meanwhile, has seldom been on the
agenda. While efficiency is usually mentioned and its measurement
occasionally reported (in the form of subject earnings), the quantified efficiency loss of theoretical or observed outcome relative to
the social optimum has rarely been discussed. Even more importantly, there is scarcely any attempt to investigate how the extent
of inefficiency influences subjects’ perception of the dilemma and
their psychological motivation in behaving cooperatively.7

7 The observations in this paragraph find support in the social psychology studies on social dilemmas cited in the previous section. Regarding experimental economics research on social dilemmas, similar support can be obtained from examining review texts such as Ledyard (1995) on public goods games, Croson and Marks (2000) on step-level public goods games, and relevant sections in Plott and Smith (2008), e.g., chaps. 51 and 52 and Part 6.1.
As the PoA was first discussed in the context of social dilemmas
in networks, a second objective of this article is to introduce this
class of social dilemmas to experimental researchers. In the rest
of this article, we first illustrate the PoA with a number of examples,
starting with the classical paradigms in social dilemma research,
and then proceeding to social dilemmas in networks. We next
introduce a number of extensions, such as the notion of the price
of stability, replacing equilibrium play with play intended to minimize regret, and the idea of price of empirical anarchy. We conclude the paper with a discussion on future research directions.
Examples from well-known social dilemmas
In this section, we illustrate the use of PoA with a number of
well-known social dilemmas, beginning with the PD game.
Example 1: The Prisoner’s Dilemma (PD) game
The simplest and most well-known example of social dilemma
is the 2 × 2 PD game. Fig. 1A depicts the payoff matrix of the
symmetric version of this game. In the figure, si = C denotes the
decision to cooperate, si = D the decision to defect, R is the reward
to each player for mutual cooperation, T the temptation to defect, S
the ‘‘sucker’s payoff’’, and P the punishment for mutual defection
(e.g., Axelrod, 1984; Rapoport & Chammah, 1965). Define the social
welfare function as the sum of the two players’ payoffs. The defining inequalities for this dilemma are T > R > P > S and R > (T + S)/2;
the unique equilibrium is then (D, D) (mutual defection) with total payoff 2P, while the socially optimal outcome is (C, C) (mutual cooperation) with total payoff 2R. Consider the case when P > 0,
so that the equilibrium payoffs are positive and the concept of
PoA is applicable. As such, PoA = R/P, a quantity that (as should
be expected) increases in R and decreases in P. For example, if
T = 5, R = 4, P = 2, S = 1, then PoA = 4/2 = 2.
Rapoport and Chammah (1965) found that the index (R − P)/(T − S) had a positive correlation with degree of cooperation in
their experimental data across various PD games. This suggests
that a higher PoA (i.e., higher R and lower P) empirically increases
cooperation in a PD game, all else being equal.
Example 2: The game of chicken
The Prisoner’s Dilemma game has a unique equilibrium that is
socially inefficient. In this sense, it is different from the game of
Chicken, which has played an important role in, e.g., the analysis
of problems of strategic interaction between nations (e.g., Schelling, 1963). This game is obtained from the PD game by swapping
the two least-preferred outcomes in the individual preference
ordering of the four outcomes. Fig. 1(B) exhibits the symmetric
form of the game. The inequalities that define the game are
T > R > P > S and R > (T + P)/2. The socially optimal outcome is
(C, C) with payoff (R, R), which is not an equilibrium. There are
two pure-strategy equilibria (C, D) and (D, C) with associated payoffs (P, T) and (T, P), respectively, and the total group payoff is
T + P in either equilibrium. The game also has a mixed-strategy
equilibrium where each player chooses strategy C (‘‘cooperate’’)
and D (‘‘defect’’) with respective probabilities q and 1 − q, where q is computed from R·q + P·(1 − q) = T·q + S·(1 − q), i.e., q = (P − S)/[(P − S) + (T − R)].
In the mixed-strategy equilibrium, each player earns the same expected payoff of R·q + P·(1 − q) = [R(P − S) + P(T − R)]/[(P − S) + (T − R)]. Consider the case where all payoffs are positive, so that the concept of PoA is applicable. The PoA is then the total payoff 2R in the socially optimal outcome divided by that of the worst-case equilibrium, which will be the lowest total payoff among all equilibria. The general formula for the PoA is, therefore,

PoA = max{2R/(P + T), R·[(P − S) + (T − R)]/[R(P − S) + P(T − R)]}.
For example, if T = 5, R = 4, P = 2, and S = 1 (the same set of numbers as used in our numerical example for the PD game), then, in
the mixed-strategy equilibrium, strategies C and D are played with
equal probabilities by each player. Denote the mixed strategy by
(1/2C, 1/2D). The total group payoffs associated with the three
pairs of equilibrium strategies (C, D), (D, C), and (1/2C, 1/2D) are
7, 7, and 6, respectively, so that the worst-case equilibrium is in
mixed strategies. The total group payoff in the socially optimal outcome is 2R = 8. Thus, PoA = 8/6 = 4/3.
Compared with the PD game example with the same set of numbers arranged differently in the payoff matrix (compare Fig. 1A and
B), the PoA is smaller. Intuitively, this is due to the fact that, in the
PD, the socially inefficient outcome involves mutual defection and
therefore mutual destruction of payoffs, so that both players receive the lowest possible payoffs in the game. However, in the
game of Chicken, all the three equilibria involve some probability
of occurrence of an outcome (C, D) or (D, C), in which one player obtains the highest possible payoff in the game while the other obtains a low payoff. This suggests that the social dilemma in the
game of Chicken is not as ‘‘severe’’ as that in the PD game in terms
of the welfare properties of the two dilemmas, and the difference in
PoA values in our numerical example captures this intuition. That
said, we do not infer that cooperation should then be more likely
with the game of Chicken than with the PD game upon a rearrangement of payoff parameters as described. Systematic experimental comparisons are needed to investigate such a possibility.
Example 3: The classic public goods game
Next, consider a very simple version of the classic public goods
game played by n players, where every player is endowed with one
unit and must decide independently and simultaneously whether
to keep it for himself or contribute it for the provision of the public
good. If the total contribution turns out to be k units, then every
player receives a reward of rk units, where (1/n) < r < 1. This game
is a social dilemma because a player always finds it a strictly dominant strategy to keep his one unit instead of contributing it, since he will be better off by 1 − r as a result. Nevertheless, the socially optimal outcome is for all players to contribute their endowments so that each ends up with a payoff of rn > 1. The socially optimal total group payoff is, therefore, rn², while the equilibrium payoff (when every player keeps his endowment) is n. The PoA is thus (rn²)/n = rn. Note that the PoA increases both with the marginal
benefit of the public good, measured by r, as well as the group size
n. This captures the intuition that the public goods dilemma is
effectively more severe when the population becomes larger and/
or when the public good can more highly improve welfare. There
is evidence that these two changes in parameter values could lead
to more cooperative behavior in public goods experiments (see,
e.g., Ledyard, 1995).8 Thus, it might be suggested that there is a positive correlation between PoA and cooperative behavior in public goods games.

8 Isaac et al. (1984) suggest that contribution is positively related to the marginal per capita return (MPCR), which would be r in the present example. Ledyard (1995) and Holt and Laury (2008) discuss theoretical and empirical evidence in support of the possibility that contribution increases with group size in a classic public goods game controlling for MPCR (at least when the values of both group size and MPCR are not too large). These are consistent with a positive relationship between contribution and the PoA.
Example 4: The step-level public goods game
Consider next the following ‘‘minimal contribution set’’ game
studied by van de Kragt et al. (1983). Recall that each of n players
is endowed with one unit and must decide independently and
simultaneously whether to keep it for himself or contribute it for
the public good. If m or more players (1 < m ≤ n) contribute their endowments, then every player receives a reward of r > 1 units; if m − 1 or fewer players contribute, then all contributors lose their endowments, whereas the non-contributors keep theirs. The socially optimal outcome, which is also an equilibrium outcome (see Introduction), has exactly m players contributing and n − m players keeping their endowment, so that the total group payoff is

m·r + (n − m)·(1 + r) = n(1 + r) − m.

A worst-case equilibrium occurs when no player contributes,9 so that the total group payoff is n. Therefore,

PoA = [n(1 + r) − m]/n = 1 + r − m/n.
Controlling for m, the PoA increases with the benefit of the
public good, i.e. r, and also increases with the group size n. This
observation can be interpreted with a similar intuition as proposed in our earlier discussion of a similar dependence in the
classic public goods game. However, an additional feature is that
the PoA decreases with the provision threshold m, controlling for
the other two parameters. This suggests that a higher provision
threshold leads to a ‘‘less severe’’ social dilemma. It is typically
observed in experiments on step-level public goods games that
a higher provision threshold renders cooperation more difficult
and successful provision of the public good less likely (see e.g.
Croson & Marks, 2000).10 The PoA perspective highlights the other side of the coin, namely, that a higher threshold also makes cooperation less ‘‘important’’, as the improvement in welfare from the worst-case equilibrium to the socially optimal outcome is less than when the threshold is lower. Overall, this points to a positive correlation between the PoA and cooperation in step-level public goods games.

9 Clearly, in any pure- or mixed-strategy equilibrium, a player’s expected utility cannot be less than that of definitely contributing nothing (otherwise the player could be better off by switching from his equilibrium strategy to definitely contributing nothing). This implies that a player’s expected utility in any equilibrium must be at least one. Thus the equilibrium in which no player contributes must be a worst-case equilibrium.
10 Croson and Marks (2000) suggest that contribution in a step-level public goods game is positively related to the step return (SR), which would be rn/m in our example. The dependence of SR on every game parameter is in the same direction as the dependence of the PoA on that parameter.
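As a numerical illustration of the multiple-equilibrium structure just described, the following brute-force sketch (ours, with illustrative parameter values) enumerates all pure-strategy profiles of a small step-level public goods game, finds its equilibria, and confirms the PoA formula derived above.

```python
from itertools import product

n, m, r = 5, 3, 1.5                                   # group size, provision threshold, reward (r > 1)

def payoffs(profile):                                  # profile[i] = 1 if player i contributes
    success = sum(profile) >= m
    reward = r if success else 0.0
    return [reward if c else 1.0 + reward for c in profile]   # contributors give up their endowment

def is_equilibrium(profile):
    base = payoffs(profile)
    for i in range(n):
        flipped = profile[:i] + (1 - profile[i],) + profile[i + 1:]
        if payoffs(flipped)[i] > base[i]:
            return False
    return True

profiles = list(product((0, 1), repeat=n))
welfare = {p: sum(payoffs(p)) for p in profiles}
equilibria = [p for p in profiles if is_equilibrium(p)]       # all-keep plus the exactly-m profiles

poa = max(welfare.values()) / min(welfare[p] for p in equilibria)
print(poa, 1 + r - m / n)                              # both equal 1 + r - m/n = 1.9
```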
Discussion
Across the four classic examples we have surveyed, there seems
to be much experimental evidence suggesting a positive correlation between the PoA and cooperation. It might be surmised from
these streams of research that the correlation is generalizable. In
other words, an increase (a reduction) in the potential welfare
improvement of cooperation might affect subjects’ perception of
the dilemma in a way that increases (decreases) subjects’ psychological motivation to cooperate. However, it may be premature to
suggest that the positive correlation between PoA and cooperation
is always true in any social dilemma. More psychological theorizing
needs to be carried out to explain why a social dilemma with more
dramatic welfare change from worst-case equilibrium to socially
optimal outcome necessarily increases the subject’s motivation
to cooperate, and more experimentation needs to be carried out
to demonstrate it. So far the evidence only shows that parameter
changes that lead to an increase (a decrease) in the PoA also lead
to a higher (lower) likelihood of cooperation, without showing that
the change in the extent of inefficiency is perceived by subjects and
influences their motivation and behavior. Even if the overall thesis
comes to be well supported, much remains unanswered, such as
how this influence might be moderated by the specific payoff
structure of the game as well as other contextual factors and individual difference variables.
Social dilemmas in directed networks
Transportation and communication networks provide the infrastructure to conduct much of our social activities. And yet social
dilemmas in networks, which have been studied by transportation
researchers interested in route choice in traffic networks and computer scientists interested in communication networks, have received relatively little attention among experimenters. In this
section, we introduce a few of these dilemmas that we hope would
trigger experimental research. The following examples are noncooperative games in which players need to choose over different
routes in a network, and the payoffs are always framed as costs.
Equilibrium outcomes are often socially inefficient, and the PoA
is a convenient index to characterize the extent of inefficiency in
each case.
Example 5: Fair cost-sharing games

In fair cost-sharing games (e.g., Anshelevich et al., 2008; Balcan,
Blum, & Mansour, 2010; Monderer & Shapley, 1996), n players
choose routes in a network and split the cost of the edges on their
route with other users choosing the same edges. A simple case is
where each player is a commuter who has to choose one of two
alternatives. The first is to drive his car to a common destination
at a cost of 1. The second is to share public transportation with others to the same destination, and split the exogenously determined
cost of travel k (1 < k < n) evenly among the users of public transportation. Fig. 2 depicts the game. Commuters arrive one at a time
in a given sequence and are not informed about the decisions of the
commuters who preceded them in the sequence. It is easy to see
that a worst-case equilibrium occurs when each commuter drives
his own car, so that the total group cost is n units.11 Social cost is
minimized if all the n users share public transportation and each
pays k/n, in which case the total group cost is k. The PoA is, therefore,
n/k, a quantity that increases with the group size n and decreases
with the overall cost of public transportation k.

Fig. 2. Graphical representation of the fair cost-sharing game. Each di (i = 1, 2, 3, . . ., n) represents a commuter who has to make a decision on whether to use public transportation to get to the destination and split the cost k evenly with other users (1 < k < n), or drive his own car with cost 1 to the same destination (denoted by the direct route from commuter to destination). The number beside each route (i.e., line with arrow) indicates the total cost of using that route. PoA = n/k.

11 In any pure- or mixed-strategy equilibrium, a player’s expected utility must be no less than that of definitely driving his own car; otherwise, he could be better off by switching from his equilibrium strategy to definitely driving his own car. Thus, the equilibrium in which everyone drives his own car must be a worst-case equilibrium.
In a variant of this game, k is an integer while the first k − 1 commuters make their decisions without being informed about the decisions of those who preceded them in the sequence. However, every commuter who makes a decision after those k − 1 is fully informed about the decisions of the m commuters who immediately preceded him in the sequence, where m < k. The socially
optimal outcome (i.e., all choose public transportation) as well as
the worst-case equilibrium (i.e., all drive their own cars) remain
the same, and, therefore, so is the PoA. Nevertheless, we may expect that, depending on the values of k and m, an experiment on
this variant may produce different empirical observations from
the standard game in which players make decisions without being
informed of any other player’s decisions.
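A minimal numerical check (our own sketch, with illustrative values of n and k) of the base game’s PoA of n/k follows; the equilibrium reasoning mirrors footnote 11.

```python
n, k = 10, 4.0                                    # number of commuters and total public-transport cost, 1 < k < n

def total_cost(num_public):
    """Total group cost when num_public commuters share public transportation."""
    return (n - num_public) * 1.0 + (k if num_public > 0 else 0.0)

# "Everyone drives" is an equilibrium: a lone switcher would bear the full cost k > 1 alone.
worst_equilibrium_cost = total_cost(0)            # = n
optimal_cost = min(total_cost(j) for j in range(n + 1))    # = k, attained when all share
print(worst_equilibrium_cost / optimal_cost)      # PoA = n / k = 2.5
```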
Example 6: Selfish routing in a network – the Pigou–Knight–Downs
Paradox
The next example describes a model that has been around for a
long time (Downs, 1962) and was first discussed qualitatively by
the economist Pigou more than 90 years ago. For a non-technical
exposition, see Arnott and Small (1994), and for experimental work
see Morgan, Orzen, and Sefton (2009). In its simplest form, the
model involves a network with a common origin and a common
destination (Fig. 3). The two nodes are connected by two parallel
edges. Assume that a finite number n of network users have to travel at the same time from the origin to the destination. The top
route in Fig. 3 is a wide highway which can accommodate all the
n users, and as such is not susceptible to congestion. Assume that,
regardless of the number of users of this road, the cost per user is
fixed at 1. The bottom route is a considerably shorter but narrower
road which is susceptible to congestion; the more users simultaneously drive on this road, the slower the going and, consequently,
the higher the cost of travel. Assume that the cost per user for this narrow road is C(m) = a + b(m/n)^c, where c > 0, 1 > a ≥ 0, b ≥ 1 − a, and m is the number of users choosing the narrow road. If c = 1, then the cost function is linear. If, in addition, a = 0, then the cost of traveling on the narrow road is simply proportional to the amount of traffic on it.

Fig. 3. An example of the Pigou–Knight–Downs Paradox with two routes, where m is the number of users of the bottom route, n is the total number of players, c > 0, 1 > a ≥ 0, and b ≥ 1 − a. PoA = {1 − (c/b^(1/c))·[(1 − a)/(1 + c)]^((1+c)/c)}^(−1) (see also Table 1).
Consider the case when each user selfishly chooses one of the
two routes in an attempt to minimize his cost of travel. Then, the
concept of Wardrop equilibrium, which is commonly employed
in these problems (see Beckmann, McGuire, & Winsten, 1956; Morgan et al., 2009; Wardrop, 1952), prescribes that all users incur the
same traffic cost in equilibrium.12 Because a + b ≥ 1, the equilibrium cost must be that of the wider road, i.e., 1.13 Hence, if there is an improvement in the condition of the narrow road that can be captured as a change in the parameters (e.g., a decrease in a or b or an increase in c), it will be counterbalanced by a corresponding endogenous adjustment of equilibrium demands along the two routes; as a result, the users will incur the same costs as before and will not benefit from the changes at all. This is the ‘‘paradox’’ that has been studied by Morgan et al. (2009).

12 A Wardrop equilibrium is a Nash equilibrium when its numbers of users are all integers. In general, the Nash equilibria converge to the Wardrop equilibria when the group size is large. See Morgan et al. (2009) for more discussion of this issue in an experimental context.
13 This must be true in an equilibrium in which some users choose the wide road. If the equilibrium is not as such, it must have all users choosing the narrow road and incurring a per-user cost a + b ≥ 1; if a + b = 1, then the same conclusion is obtained as in the text. If a + b > 1, then an individual user has an incentive to unilaterally deviate and choose the wide road, so that it cannot be an equilibrium that all users choose the narrow road.
Next, suppose that some central authority has been designated to assign the n users to specific routes in an attempt to minimize the social cost as measured by the total group cost of travel. Denote the number of users assigned to the narrow and wide roads by m and n − m, respectively. The total group cost summed across the n users is:

(n − m)·1 + m·[a + b(m/n)^c] = n − (1 − a)·m + b·m^(c+1)/n^c.
The solution of the first-order condition of this expression is m = n·{(1 − a)/[b(c + 1)]}^(1/c); that is, the total group cost is minimized at this value of m.14 The resulting socially optimal total group cost is n·{1 − (c/b^(1/c))·[(1 − a)/(1 + c)]^((1+c)/c)}. The price of anarchy is n·1 = n (the total group cost in equilibrium) divided by the socially optimal total group cost. For example, if c = 1, a = 0, and b = 1, so that C(m) = m/n on the narrow road, then the socially optimal choice of m is n/2, which results in a socially optimal total group cost of travel of (n/2)·1 + (n/2)·[(n/2)/n] = 3n/4. The price of anarchy is then PoA = 4/3.

14 Note that, since c > 0, the total group cost expression is convex in m for all positive values of m. Hence, if the first-order condition yields a unique positive solution (as is indeed the case here), that must correspond to a minimum over all positive values of m.
Table 1 presents the socially optimal values of n − m and m, the associated costs per user, and the resulting PoA, for selected values of c, while the other parameters are controlled at a = 0, b = 1 and n = 100. In this case, the worst equilibrium is attained when all users choose driving on the narrow road and incur a cost of 1 per user. It is easy to see that as c increases, m approaches n, and the PoA increases. However, note that an increase in c corresponds to an improvement in the condition of the narrow road, as the cost of traveling on that road decreases controlling for m. Nevertheless, the PoA increases at the same time. This is because the equilibrium total group cost remains unchanged (i.e., n) while the socially optimal total group cost decreases with an increase in the number of users assigned to the narrow road. That is, upon an increase in c, the socially optimal outcome is actually ‘‘closer’’ to the equilibrium state in which all choose driving on the narrow road, and yet the social dilemma is more severe in the sense of an increase in PoA.

Table 1
Price of anarchy (PoA) values for selected values of c in the Pigou–Knight–Downs Paradox in Fig. 3 with a = 0, b = 1, and n = 100. The ‘‘wide road’’ (‘‘narrow road’’) corresponds to the top (bottom) route in Fig. 3. The worst-case equilibrium in this case always has all users traversing the narrow road and thereby incurring a cost of 1 per user. Columns 2–5 describe the socially optimal outcome.

c | No. of users of wide road, n − m | No. of users of narrow road, m | Cost of using wide road per user | Cost per user in socially optimal outcome | PoA
1/5 | 60 | 40 | 1 | 0.933 | 1.072
1/2 | 56 | 44 | 1 | 0.851 | 1.174
1 | 50 | 50 | 1 | 0.750 | 1.333
2 | 42 | 58 | 1 | 0.615 | 1.626
3 | 37 | 63 | 1 | 0.528 | 1.896
4 | 33 | 67 | 1 | 0.465 | 2.151
5 | 30 | 70 | 1 | 0.418 | 2.394
6 | 28 | 72 | 1 | 0.380 | 2.630
7 | 26 | 74 | 1 | 0.350 | 2.858
8 | 24 | 76 | 1 | 0.325 | 3.081
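Table 1 can be reproduced with a few lines of Python. The sketch below (ours) searches, for given a, b, c and n, for the number of users on the narrow road that minimizes the total group cost and returns the PoA relative to the equilibrium cost of 1 per user; with a = 0, b = 1, and n = 100 it returns the values reported in the table.

```python
def pkd_poa(a, b, c, n):
    """Socially optimal narrow-road usage and PoA in the Pigou-Knight-Downs network."""
    def total_cost(m):                     # m users on the narrow road, n - m on the wide road
        return (n - m) * 1.0 + m * (a + b * (m / n) ** c)
    m_opt = min(range(n + 1), key=total_cost)
    equilibrium_total = n * 1.0            # each user incurs a cost of 1 in equilibrium since a + b >= 1
    return m_opt, equilibrium_total / total_cost(m_opt)

for c in (0.2, 0.5, 1, 2, 4, 8):
    print(c, pkd_poa(a=0.0, b=1.0, c=c, n=100))    # e.g. c = 1 gives m = 50 and PoA = 4/3
```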
Example 7: Choice of routes in a network – the Braess Paradox
It would seem natural to believe that increasing the capacity of
an existing transportation or communication network by widening
its links or adding one or more segments to the network would not
worsen and, most likely, would improve efficiency. Braess (1968),
an applied mathematician, shattered this belief by demonstrating
that, paradoxically, adding a link that connects two alternative
routes joining a common origin to a common destination may raise
the equilibrium travel cost of every network user. His work on
what is commonly known as the Braess Paradox (BP) has stimulated considerable theoretical research in transportation science
and computer science (e.g., Roughgarden, 2005). It also has given
rise to a few experimental studies in recent years (e.g., Gisches &
Rapoport, 2012; Morgan et al., 2009; Rapoport, Kugler, Dugar, &
Gisches, 2009; Rapoport, Mak, & Zwick, 2006). The purpose of this
example is to illustrate the games’ pure-strategy PoA in a very simple network exhibiting the BP, and then examine how it changes as
the group size increases.
Similarly to Example 6, we consider networks with a common
origin O and common destination D that are modeled as directed
graphs (cf. Fig. 4). Any edge of the graph linking node i to node j
is denoted by (i, j), and the traffic on the edge is denoted by fij,
which is interpreted as the number of users traversing the edge
from i to j. The cost of each user who traverses the edge (i, j) is denoted by cij(fij), which, in principle, is a function of fij. The travel
cost of a user is the sum of the edge costs across all the edges in
the path he chooses from O to D. For equilibrium consideration,
we assume that, as in previous examples, each of the n users independently seeks to choose a path that minimizes his cost of travel.
Consider now the simple example in Fig. 4(A), which we call the basic network. The network includes two parallel routes, namely, O → A → D and O → B → D. Each route has two segments. One segment (A → D or O → B) has a fixed cost (210) and as such is not subject to congestion. A second segment (O → A or B → D) has a variable cost of the form cij(fij) = 10fij, and is consequently subject to congestion. The socially optimal outcome, which is also an equilibrium outcome, is for half of the users to choose route O → A → D and the other half to choose route O → B → D. Next consider the three-route network in Fig. 4B, which we call the augmented network. This network is the same as the basic network with the only difference that a new link A → B has been added to connect nodes A and B. Travel on this new link (e.g., ‘‘bridge’’) is assumed to be costless: cAB(fAB) = 0 for all 0 ≤ fAB ≤ n. In the augmented network, each user has to choose independently one of three routes when he departs from O, namely, routes O → A → D, O → B → D, and O → A → B → D.

Fig. 4. The networks used in Example 7 to illustrate the Braess Paradox: (A) the basic network and (B) the augmented network with the added link A → B.
Assume that the group size (total number of players) is n = 18.
The augmented network (Fig. 4B) then has a unique equilibrium
in pure strategies where all users independently choose to travel
on route O → A → B → D. This can be verified by noticing that each unilateral deviation by a single player from route O → A → B → D to either route O → A → D or route O → B → D increases the individual cost of travel. The associated equilibrium cost of travel for each user is cOA(18) + cAB(18) + cBD(18) = 360. It is easy to verify that this equilibrium is socially inefficient; the total group cost of travel is minimized when half of the users (n/2 = 9) choose route O → A → D and half choose route O → B → D, essentially playing the basic game’s equilibrium in Fig. 4A, in which case the cost of travel of each user will be reduced from 360 to 300. This is an illustration of the BP showing that, paradoxically, degradation of the network in Fig. 4B by removing the link A → B improves performance. The PoA is 360/300 = 1.2.
In the more general case when cij(fij) = aij + bij·fij in any edge (i, j), where aij and bij are non-negative constants, the following
conditions are necessary and sufficient for the BP to appear across
the two networks in Fig. 4 (see Penchina, 1997):
1. The network game must have both fixed user costs and
variable user costs.
2. The two paths in the basic network must have an opposite
order of appearance of the edges associated with fixed vs.
variable costs.
3. The fixed cost of traversing the edge A → B in the augmented
network must be less than the difference in fixed costs
between the edges dominated by fixed costs and those
dominated by variable costs.
Other networks that exhibit the BP may be constructed with
more paths (see, e.g., Gisches & Rapoport, 2012; Rapoport et al.,
2009; Roughgarden, 2005).
The realization of the BP depends jointly on the cost structure
and the group size n. We next examine the augmented network
in Fig. 4B keeping the cost structure unchanged but letting n increase from 2 to 42; only even values of n are considered. Table 2,
which is based on the analysis conducted by Rapoport et al. (2006),
presents the essential results. For each value of n, the table shows
the worst-case pure-strategy equilibrium number of users choosing each of the three routes (columns 2–4), the associated equilibrium cost per user (column 5), the socially optimal cost per user (column 6), and the associated PoA values (right-most column). For example, if we set n = 18 as before, then under
equilibrium all the players choose route O → A → B → D at a cost of 360 per user. Meanwhile, the cost per user drops to 300 if routes O → A → D and O → B → D are each chosen by nine users. The PoA, as noted before and stated in Table 2, is 1.2. For another example, if we set n = 26, then under equilibrium 5, 5, and 16 users choose routes O → A → D, O → B → D, and O → A → B → D, respectively, for a travel cost of 420 per user. The cost per user falls to 340 if each of the routes O → A → D and O → B → D is chosen by n/2 = 13 users. The PoA is then 1.235, which is higher than that when n = 18. But when n is further increased to 34, the PoA drops again to 1.105.

Table 2
Pure-strategy price of anarchy (PoA) for selected values of n (all even numbers) in the Braess Paradox in Fig. 4. Columns 2–5 describe the worst-case equilibrium. When n ≤ 14, the socially optimal outcome has all users using route O → A → B → D; otherwise, the socially optimal outcome has half of the group using route O → A → D and half using route O → B → D.

n | No. of users of O → A → D | No. of users of O → B → D | No. of users of O → A → B → D | Cost per user | Cost per user in socially optimal outcome | PoA
2 | 0 | 0 | 2 | 40 | 40 | 1.000
4 | 0 | 0 | 4 | 80 | 80 | 1.000
6 | 0 | 0 | 6 | 120 | 120 | 1.000
8 | 0 | 0 | 8 | 160 | 160 | 1.000
10 | 0 | 0 | 10 | 200 | 200 | 1.000
12 | 0 | 0 | 12 | 240 | 240 | 1.000
14 | 0 | 0 | 14 | 280 | 280 | 1.000
16 | 0 | 0 | 16 | 320 | 290 | 1.103
18 | 0 | 0 | 18 | 360 | 300 | 1.200
20 | 0 | 0 | 20 | 400 | 310 | 1.290
22 | 1 | 1 | 20 | 420 | 320 | 1.313
24 | 3 | 3 | 18 | 420 | 330 | 1.273
26 | 5 | 5 | 16 | 420 | 340 | 1.235
28 | 7 | 7 | 14 | 420 | 350 | 1.200
30 | 9 | 9 | 12 | 420 | 360 | 1.167
32 | 11 | 11 | 10 | 420 | 370 | 1.135
34 | 13 | 13 | 8 | 420 | 380 | 1.105
36 | 15 | 15 | 6 | 420 | 390 | 1.077
38 | 17 | 17 | 4 | 420 | 400 | 1.050
40 | 19 | 19 | 2 | 420 | 410 | 1.024
42 | 21 | 21 | 0 | 420 | 420 | 1.000
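The entries of Table 2 can be reproduced by brute force. The sketch below (our own, not from the article) enumerates all splits of the n users over the three routes of the augmented network, keeps the splits that are pure-strategy equilibria, and computes the PoA against the benchmark assignments used in Table 2 (all users on O → A → B → D, or an even split over the two outer routes).

```python
def per_user_costs(x, y, z):
    """Costs on O->A->D, O->B->D and O->A->B->D when x, y and z users choose them."""
    f_oa, f_bd = x + z, y + z
    return (10 * f_oa + 210, 210 + 10 * f_bd, 10 * f_oa + 10 * f_bd)

def worst_equilibrium_cost_per_user(n):
    worst = 0
    for x in range(n + 1):
        for y in range(n + 1 - x):
            z = n - x - y
            costs = per_user_costs(x, y, z)
            stable = True
            for i, users in enumerate((x, y, z)):      # can a user on route i gain by switching to route j?
                for j in range(3):
                    if users == 0 or j == i:
                        continue
                    moved = [x, y, z]
                    moved[i] -= 1
                    moved[j] += 1
                    if per_user_costs(*moved)[j] < costs[i]:
                        stable = False
            if stable:
                total = x * costs[0] + y * costs[1] + z * costs[2]
                worst = max(worst, total)
    return worst / n

def braess_poa(n):                                     # n even
    all_cross = per_user_costs(0, 0, n)[2]             # everyone on O->A->B->D
    even_split = per_user_costs(n // 2, n // 2, 0)[0]  # half on each outer route
    return worst_equilibrium_cost_per_user(n) / min(all_cross, even_split)

for n in (10, 18, 26, 34, 42):
    print(n, braess_poa(n))                            # 1.0, 1.2, 1.235..., 1.105..., 1.0
```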
Rapoport et al. (2006) summarized the findings of their pure-strategy equilibrium analysis as follows:

– If n ≤ 42, the individual cost of travel in any pure-strategy equilibrium increases in n but never exceeds 420.
– If 2 ≤ n ≤ 14, then in equilibrium all the users choose route O → A → B → D and the equilibrium solution is efficient.
– If 16 ≤ n ≤ 20, then once again in equilibrium all the n players choose route O → A → B → D. However, the equilibrium solution is no longer efficient.15
– If 22 ≤ n ≤ 40, then in equilibrium only a fraction of the n users choose the route O → A → B → D. The size of this fraction decreases as n increases. The equilibrium solution is still inefficient.
– If 42 ≤ n, then the route O → A → B → D is never chosen. The equilibrium solution is efficient.16
15 Note that choosing the route O → A → B → D is a strictly dominant strategy for every player whenever n ≤ 20. This is because, for any player, the maximum expected number of other players traversing O → A is n − 1, so that the cost of traversing O → A is no more than 10n, which is less than 210 whenever n ≤ 20; and likewise with B → D. This means that the equilibrium in which all players choose O → A → B → D is unique whenever n ≤ 20.
16 When 42 < n there also exists a symmetric mixed-strategy equilibrium in which every player randomizes with 0.5 probability between choosing O → A → D and choosing O → B → D. In this equilibrium, for any player, the expected number of other players traversing O → A is (n − 1)/2, and similarly for the expected number of other players traversing B → D. The expected travel costs of O → A → D and O → B → D are the same and equal to 210 + 10·{1 + [(n − 1)/2]} = 215 + 5n, which is higher than the pure-strategy equilibrium travel cost of 210 + 5n and, as such, is inefficient. In this case, if we consider not only pure-strategy but also mixed-strategy equilibria, the PoA is larger than one.
Table 2 further shows that the change in the pure-strategy PoA value is non-monotonic: the PoA first increases in n, reaching a maximum at n = 22, and then decreases. The BP is manifested only for 16 ≤ n ≤ 40. In fact, Roughgarden and Tardos (2002, 2004) proved that if the cost functions are all linear, then for any generalized routing problem the pure-strategy PoA cannot exceed 4/3.17

17 The PoA value may be enhanced experimentally by subtracting the cost of travel from a fixed endowment, G, thereby focusing on gains rather than losses. This is a powerful manipulation to increase the effect of structural changes in the network on the payoffs associated with the choice of alternative routes. For example, consider the cost structures of the two games displayed in Fig. 4, and set the endowment G = 400. The equilibrium payoffs in the basic and augmented network games are then 400 − 300 = 100 and 400 − 360 = 40, respectively. The resulting PoA is equal to 2.5 (see, e.g., Rapoport et al., 2006, 2009).
Extensions and generalizations
The ideas and procedures presented above may be extended in several directions. First, we introduce and consider another index, called the price of stability, which focuses on the best – rather than the worst – equilibrium. Second, arguing that it is not always realistic or reasonable to assume that all the decision makers in a social dilemma situation necessarily play strategies that form a Nash equilibrium, we consider other developments that invoke weaker and more realistic assumptions about the behavior of selfish individuals. In particular, we discuss the case where individuals are allowed to play any sequence of actions in an attempt to minimize their regret with respect to the best outcome that they could possibly achieve. Finally, we further generalize the notion of PoA to empirical arenas by proposing a similar index, called the price of empirical anarchy, which is based on observed and replicable patterns of behavior rather than on theoretical concepts like the worst-case equilibrium.
The price of stability
Strategies that jointly result in optimal outcomes are socially desirable but often inherently unstable; when decision makers are autonomous, these strategies are unenforceable, as individuals may defect from them with impunity. On the other hand, equilibrium strategies are stable, as no decision maker may benefit
from unilateral deviation; however, when the equilibria are inefficient, they are socially undesirable. The PoA is proposed to characterize how much worse the solution quality of the Nash
equilibrium can be relative to the quality of the socially optimal
outcome. But why focus on the worst equilibrium? Anshelevich
et al. (2008) point out that in most network applications agents
are not completely unrestricted; rather, they interact with a protocol that proposes a collective solution that each agent may either
accept or reject. Consequently, protocol designers in communication networks may wish to pursue the best Nash equilibrium in
terms of social welfare. This argument extends to social science applications where, instead of a protocol of play determined by the network designer, unenforceable agreements are reached endogenously through some sort of pre-play communication or are suggested to the group of decision makers by a leader specifically
nominated for this purpose. The best Nash equilibrium is naturally
viewed as the optimal solution subject to the constraint that no
decision maker has an incentive to unilaterally deviate from it once
it is offered. The price of stability (PoS) is then defined as the ratio
between the quality of the optimal ‘‘centralized’’ solution and the
solution quality of the best Nash equilibrium. When considering
only pure-strategy outcomes and using the notations defined in
the section "The Price of Anarchy", the PoS can formally be expressed as:

PoS = max_{s∈S} W(s) / max_{s∈E} W(s).

If the payoffs are framed as costs, then the expression takes the form:

PoS = min_{s∈E} C(s) / min_{s∈S} C(s).
The PoS may similarly be defined when mixed-strategy outcomes are also considered. By definition, 1 ≤ PoS ≤ PoA when both indices are defined on the same set of outcomes (i.e., only pure-strategy outcomes, or all pure- and mixed-strategy outcomes), with the equality PoS = PoA holding if all equilibria are equally efficient or if there is a unique equilibrium.
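To make the definitions concrete, the following sketch (our own hypothetical example, not drawn from the article) enumerates the pure-strategy profiles of a small two-player cost game with two equilibria and computes both indices. In this game the cheap profile (A, A) and the expensive profile (B, B) are both equilibria, so the PoA and PoS differ.

```python
from itertools import product

# Hypothetical 2-player cost game: each player picks "A" or "B".
# Costs are chosen so that (A, A) and (B, B) are both pure-strategy
# equilibria, but (A, A) is cheaper.
COST = {
    ("A", "A"): (1, 1),
    ("B", "B"): (2, 2),
    ("A", "B"): (3, 3),
    ("B", "A"): (3, 3),
}
STRATEGIES = ("A", "B")

def total_cost(profile):
    return sum(COST[profile])

def is_pure_nash(profile):
    """No player can lower her own cost by a unilateral deviation."""
    for i in range(len(profile)):
        for s in STRATEGIES:
            deviated = list(profile)
            deviated[i] = s
            if COST[tuple(deviated)][i] < COST[profile][i]:
                return False
    return True

profiles = list(product(STRATEGIES, repeat=2))
equilibria = [p for p in profiles if is_pure_nash(p)]
opt = min(total_cost(p) for p in profiles)

# With costs, PoA = worst equilibrium cost / optimal cost, and
# PoS = best equilibrium cost / optimal cost, so 1 <= PoS <= PoA.
poa = max(total_cost(p) for p in equilibria) / opt
pos = min(total_cost(p) for p in equilibria) / opt
print(equilibria, poa, pos)   # equilibria (A,A) and (B,B); PoA = 2.0, PoS = 1.0
```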
Minimization of regret

Blum, Hajiaghayi, Ligett, and Roth (2008) weakened the notions of PoA and PoS by arguing that it does not seem realistic to assume that all decision makers will play strategies that form a Nash equilibrium. The argument has three parts. First, even under a centralized authority that coordinates all the individual strategies, Nash equilibria may be difficult for the decision makers to compute. Second, even if all the Nash equilibria are easy to compute, there is no particular reason to believe that independent self-interested decision makers will converge to them. Other studies (Goemans, Mirrokni, & Vetta, 2006; Mirrokni & Vetta, 2004) have also questioned the plausibility of selfish decision makers converging to Nash equilibria, and so have many experiments in behavioral economics (see, e.g., Camerer, 2003). Third, for games with only mixed-strategy equilibria, the assumption that decision makers not only play so as to maximize their individual utilities, but also so as to preserve the stability of the entire system, seems untenable. After all, there is no immediate incentive for decision makers to play specified mixed strategies as opposed to any of the pure strategies in the support of the mixed strategy, as they yield the same expected utility in equilibrium.

Instead, Blum et al. proposed the weaker assumption that every decision maker attempts to minimize the utility difference (i.e., "regret") between his temporal sequence of actions and what would have been his optimal actions in hindsight (see Parks, Sanna, & Posey, 2003, for an experimental study based on a similar idea). They then define the price of total anarchy, PoTA, in a way similar to the PoA and PoS. They consider four classes of games that include Hotelling games, market sharing games, traffic routing games, and multiple-item auctions, and prove that in the first three cases the PoA and PoTA are the same even when play does not converge to equilibrium.
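As a concrete illustration of the regret criterion (again our own sketch, with made-up cost realizations rather than anything from Blum et al.), the code below computes a player's average regret over a sequence of trials relative to the best single route she could have committed to in hindsight; a "no-regret" player is one whose average regret goes to zero as the number of trials grows.

```python
# Hypothetical per-trial costs of two routes faced by one player
# (e.g., realized travel times that depend on the other players' choices).
cost = {
    "OAD":  [300, 320, 310, 340, 300],
    "OABD": [360, 280, 290, 330, 350],
}
# The sequence of routes the player actually chose on each trial.
played = ["OABD", "OABD", "OAD", "OAD", "OABD"]

T = len(played)
incurred = sum(cost[r][t] for t, r in enumerate(played))

# Best single action in hindsight: the one route that would have
# minimized total cost had it been played on every trial.
best_fixed = min(cost, key=lambda r: sum(cost[r]))
hindsight = sum(cost[best_fixed])

average_regret = (incurred - hindsight) / T
print(best_fixed, average_regret)   # regret per trial relative to the best fixed route
```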
The price of empirical anarchy
The inefficiency indices introduced above are all based on theoretical notions such as equilibrium or regret minimization. However, it is also clearly important to quantify the extent of empirical inefficiency in any specific instance of a social dilemma. This is, in fact, a simple progression from the ideas presented above. We propose an index, called the price of empirical anarchy (PoEA), which can be applied to experimental research. Given a specific empirical social dilemma setting, the PoEA is defined as the total payoff in the socially optimal outcome divided by the observed average total payoff in cases where players have achieved
some form of stable convergence of behavior. For example, consider again the PD game shown in Fig. 1A with T = 5, R = 4, P = 2, S = 1. The socially optimal total payoff is 2R = 8, while the equilibrium total payoff is 2P = 4. Now suppose that, in an experiment on this game, participants play in fixed groups of two, and each group plays the same game repeatedly across many trials. Suppose that it turns out that 60% of the groups end up mutually cooperating in all later trials, while 40% end up mutually defecting in all later trials. Then, the average total payoff among all those groups is 2R × 60% + 2P × 40% = 6.4. The PoEA is then 8/6.4 = 1.25. This result may be compared with the theoretical PoA for this set of game parameters, which is R/P = 4/2 = 2 (see earlier discussion). That is, in this example of empirical play of the PD game, the social dilemma turns out to be less severe than theoretically predicted. Observations of this kind have been commonly reported in research such as Rapoport and Chammah (1965), but here, by comparing the PoA and the PoEA, we may quantify the difference in the degree of severity.
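The arithmetic in this example is easily mechanized. The sketch below (our own illustrative code, using the same hypothetical convergence proportions) reproduces the PoEA of 1.25 and contrasts it with the theoretical PoA of 2.

```python
# PD parameters from the example (Fig. 1A): T = 5, R = 4, P = 2, S = 1.
R, P = 4, 2

optimal_total = 2 * R        # both players cooperate: 8
equilibrium_total = 2 * P    # both players defect: 4

# Hypothetical experimental outcome: 60% of groups converge to mutual
# cooperation and 40% to mutual defection in the later trials.
share_cooperate, share_defect = 0.6, 0.4
observed_avg_total = share_cooperate * optimal_total + share_defect * equilibrium_total

poea = optimal_total / observed_avg_total   # 8 / 6.4 = 1.25
poa = R / P                                 # theoretical PoA = 2.0
print(poea, poa)
```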
Conclusions
In introducing the notion of the price of anarchy and presenting social dilemmas in networks, our work addresses the objective of the OBHDP Special Issue on Social Dilemmas regarding "what methodological improvements and innovations would enhance social dilemma research" (see the Call for Papers) and, we hope, may help advance the social dilemma literature. Indeed, a cursory look at the experimental literature in this field tells us that the extent of inefficiency has received relatively little attention. Rather, experimental social dilemma research is typically concerned with whether a socially inefficient outcome occurs in settings like the public goods game, and what factors influence the probability of its occurrence. We argue that if the potential loss of welfare is not large enough to justify extra measures, such as sanctions to elicit cooperation or the cost of establishing and maintaining a central authority, then those measures are not worth implementing.
We suggest that it is an important question how the potential extent of inefficiency in a social dilemma situation, as indexed by the PoA, is related to the empirical probability of occurrence of socially inefficient outcomes, as well as to the empirical efficiency that is observed when human players are engaged in the situation. It is no less important to investigate how any such relationship arises from the influence of the extent of inefficiency on subjects' psychological motivation to behave cooperatively. An example that we have used to drive home our point is Rapoport and Chammah (1965), who reported experimental results on a
series of PD games with varying parameters. While their objective
was to study how the percentage of mutual cooperation changed
with the parameters, their data could alternatively be examined
and analyzed from the point of view of how the theoretical and
empirical extents of social inefficiency, as indexed by the PoA
and the price of empirical anarchy, respectively, change with each
other and the other parameters. These relationships could be different from one setting to another, and moderating contextual
and individual difference factors may act on them through interesting psychological processes. Another example is the traffic network studied by Rapoport et al. (2006), the PoA of which changes
non-monotonically with group size (Table 2). Rapoport et al.
(2006) experimented on this network with group sizes of 10, 20, and 40, respectively, and found that subject behavior largely conformed to equilibrium, so that the price of empirical anarchy was
consistently close to the PoA. If their results can be interpolated
for other values of group size between 10 and 40, then the setting
is one in which both theoretical and empirical extents of inefficiency in a social dilemma change non-monotonically with the size
of the population. Such possibilities invite further experimental
study in social dilemmas in networks.
Experimental research on social dilemmas in networks has been
relatively rare, despite its obvious relevance to group behavior in
social situations like traffic congestion. While the Pigou–Knight–
Downs Paradox and Braess Paradox have begun to receive some,
but still scant, attention in recent years, the fair cost-sharing game
or its variants (see the earlier discussion of this game) await systematic experimentation. In particular, in all of these situations, it is
important to consider whether the motivational and strategic solutions that have been examined in other social dilemmas (see e.g.
Dawes, 1980; Kollock, 1998; Messick & Brewer, 1983) remain
applicable to elicit socially optimal cooperation, or whether they
should be modified significantly. Also, payoff uncertainty in these situations (e.g., choosing a route with uncertainty over weather conditions) is likely to have a significant impact on outcomes and on the extent of social inefficiency; future research in this direction is warranted.
Acknowledgement
We gratefully acknowledge financial support from the National
Science Foundation grant SES-0752662 awarded to the University
of Arizona.
References
Anshelevich, E., Dasgupta, A., Kleinberg, J., Tardos, É., Wexler, T., & Roughgarden, T.
(2008). The price of stability for network design with fair cost allocation. SIAM
Journal on Computing, 38, 1602–1623.
Arnott, R., & Small, K. A. (1994). The economics of traffic congestion. American
Scientist, 82, 446–455.
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Balcan, M.-F., Blum, A., & Mansour, Y. (2010). Circumventing the price of anarchy:
Leading dynamics to good behavior. Proceedings of Innovations in Computer
Science (ICS), 201–213.
Beckmann, M. J., McGuire, C. G., & Winsten, C. B. (1956). Studies in the economics of
transportation. New Haven, CT: Yale University Press.
Blum, A., Hajiaghayi, M. T., Ligett, K., & Roth, A. (2008). Regret minimization and
the price of total anarchy. In Proceedings of the 40th annual ACM symposium on
theory of computing (pp. 373–382).
Braess, D. (1968). Über ein paradoxon der verkehrsplanung. Unternehmensforschung,
12, 258–268.
Brewer, M. B. (1979). In-group bias in the minimal group situation: A cognitive–
motivational analysis. Psychological Bulletin, 86, 307–324.
Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction.
Princeton, NJ: Princeton University Press.
Chen, X., Au, W. T., & Komorita, S. S. (1996). Sequential choice in a step-level public
goods dilemma: The effects of criticality and uncertainty. Organizational
Behavior and Human Decision Processes, 65, 37–47.
Croson, R. T. A., & Marks, M. B. (2000). Step returns in threshold public goods: A
meta- and experimental analysis. Experimental Economics, 2, 239–259.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193.
Dawes, R. M., McTavish, J., & Shaklee, H. (1977). Behavior, communication, and
assumptions about other people’s behavior in a commons dilemma situation.
Journal of Personality and Social Psychology, 35, 1–11.
Downs, A. (1962). The law of peak-hour expressway congestion. Traffic Quarterly, 16,
393–409.
Edney, J. J. (1980). The commons problem: Alternative perspectives. American
Psychologist, 35, 131–150.
Gisches, E. J., & Rapoport, A. (2012). Degrading network capacity may improve
performance: Private versus public monitoring in the Braess Paradox. Theory
and Decision, 73, 267–293.
Goemans, M., Mirrokni, V. S., & Vetta, A. (2006). Sink equilibria and convergence. In
Proceedings of the 46th annual IEEE symposium on foundations of computer science
(pp. 142–154).
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
Holt, C. A., & Laury, S. K. (2008). Theoretical explanations of treatment effects in
voluntary contributions experiments. In C. R. Plott & V. L. Smith (Eds.),
Handbook of experimental economics results. Amsterdam: North-Holland.
Isaac, R. M., & Walker, J. (1988). Group size effects in public goods provision: The
voluntary contribution mechanism. Quarterly Journal of Economics, 103,
179–199.
Isaac, R. M., Walker, J., & Thomas, S. H. (1984). Divergent evidence on free riding: An
experimental examination of possible explanations. Public Choice, 43, 113–149.
Kollock, P. (1998). Social dilemmas: The anatomy of cooperation. Annual Review of
Sociology, 24, 183–214.
Komorita, S. S., & Parks, C. D. (1994). Social dilemmas. Madison, WI: Brown &
Benchmark.
Koutsoupias, E., & Papadimitriou, C. H. (1999). Worst-case equilibria. In 16th Annual
symposium on theoretical aspects of computer science (pp. 403–414). Berlin:
Springer.
Ledyard, J. O. (1995). Public goods: A survey of experimental research. In J. Kagel &
A. Roth (Eds.), Handbook of experimental economics. Princeton: Princeton
University Press.
Mak, V., & Zwick, R. (2010). Investment decisions and coordination problems in a
market with network externalities: An experimental study. Journal of Economic
Behavior and Organization, 76, 759–773.
McCarter, M. W., Budescu, D. V., & Scheffran, J. (2011). The give-or-take-some
dilemma: An empirical investigation of a hybrid social dilemma. Organizational
Behavior and Human Decision Processes, 116, 83–95.
McCarter, M. W., Rockmann, K. W., & Northcraft, G. B. (2010). Is it even worth it?
The effect of loss prospects in the outcome distribution of a public goods
dilemma. Organizational Behavior and Human Decision Processes, 111, 1–11.
Messick, D. M., & Brewer, M. B. (1983). Solving social dilemmas: A review. In L.
Wheeler & P. Shiver (Eds.). Review of personality and social psychology (Vol. 4,
pp. 11–44). Beverly Hills, CA: Sage.
Mirrokni, V. S., & Vetta, A. (2004). Convergence issues in competitive games.
APPROX-RANDOM, 3122, 183–194.
Monderer, D., & Shapley, L. S. (1996). Potential games. Games and Economic Behavior,
14, 124–143.
Morgan, J., Orzen, H., & Sefton, M. (2009). Network architecture and traffic flows:
Experiments on the Pigou–Knight–Downs and Braess paradoxes. Games and
Economic Behavior, 66, 348–372.
Nisan, N., Roughgarden, T., Tardos, É., & Vazirani, V. V. (2007). Algorithmic game
theory. Cambridge: Cambridge University Press.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective
action. Cambridge: Cambridge University Press.
Ostrom, E., Gardner, R., & Walker, J. (1994). Rules, games, and common-pool resources.
Ann Arbor: University of Michigan Press.
Papadimitriou, C. H. (2001). Algorithms, games, and the Internet. In Proceedings of
the 33rd annual ACM symposium on theory of computing (pp. 749–753).
Parks, C. D., Sanna, L. J., & Posey, D. C. (2003). Retrospection in social dilemmas: How
thinking about the past affects future cooperation. Journal of Personality and
Social Psychology, 84, 988–996.
Penchina, C. M. (1997). Braess Paradox: Maximum penalty in a minimal critical
network. Transportation Research A, 31, 379–388.
Plott, C. R., & Smith, V. L. (2008). Handbook of experimental economics results.
Amsterdam: North-Holland.
Rapoport, A. (1970). N-person game theory: Concepts and applications. Ann Arbor, MI:
University of Michigan Press.
Rapoport, A., & Chammah, A. M. (1965). Prisoner’s dilemma: A study in conflict and
cooperation. Ann Arbor: University of Michigan Press.
Rapoport, A., Kugler, T., Dugar, S., & Gisches, E. J. (2009). Choice of routes in
congested traffic networks: Experimental tests of the Braess Paradox. Games and
Economic Behavior, 65, 538–571.
Rapoport, A., Mak, V., & Zwick, R. (2006). Navigating congested networks with
variable demands: Experimental evidence. Journal of Economic Psychology, 27,
648–666.
Roughgarden, T. (2005). Selfish routing and the price of anarchy. Cambridge, MA: MIT Press.
Roughgarden, T. (2009). Intrinsic robustness of the price of anarchy. In Proceedings
of the 41st annual ACM symposium on theory of computing (pp. 513–522).
Roughgarden, T., & Tardos, É. (2002). How bad is selfish routing? Journal of the ACM,
49, 236–259.
Roughgarden, T., & Tardos, É. (2004). Bounding the inefficiency in nonatomic
congestion games. Games and Economic Behavior, 47, 369–403.
Sandler, T. (1992). Collective action: Theory and applications. Ann Arbor: University of
Michigan Press.
Schelling, T. C. (1963). The strategy of conflict. Oxford: Oxford University Press.
van de Kragt, A., Orbell, J., & Dawes, R. M. (1983). The minimal contribution set as a
solution to public goods problems. American Political Science Review, 77,
112–122.
van Dijk, E., de Kwaadsteniet, E., & De Cremer, D. (2009). Tacit coordination in social
dilemmas: The importance of having a common understanding. Journal of
Personality and Social Psychology, 96, 665–678.
van Dijk, E., Wit, A., Wilke, H., & Budescu, D. (2004). What we know (and do not
know) about the effects of uncertainty on behaviour in social dilemmas. In R.
Suleiman, D. Budescu, I. Fischer, & D. Messick (Eds.), Contemporary psychological
research on social dilemmas (pp. 315–331). Cambridge: Cambridge University
Press.
Van Lange, P., Liebrand, W., Messick, D., & Wilke, H. (1992). Social dilemmas:
An introduction. In W. Liebrand, D. Messick, & H. Wilke (Eds.), Social
dilemmas: Theoretical issues and research findings (pp. 1–40). London:
Pergamon Press.
Wardrop, J. G. (1952). Some theoretical aspects of road traffic research. Proceedings
of the Institution of Civil Engineers – Part II, 1, 325–378.