
Running Head:
NETWORKS AND THE GAME OF CHICKEN
Network Structure and Strategy Evolution in the Game of Chicken
Frank Tutzauer
Pauline W. Hoffmann
Margaret K. Chojnacki
State University of New York at Buffalo
International Sunbelt Social Network Conference XXII
International Network for Social Network Analysis
New Orleans, LA
February 15, 2002
Abstract
Cellular automata have long been used to study self-organization, system evolution, and
questions of dynamics and transformation. Beginning in the 1980s, researchers used cellular
automata to study cooperation in the iterated Prisoner’s Dilemma (PD). For these researchers, a
cellular automaton is a lattice-like spatial grid consisting of interlinked cells. Each cell is
occupied by an organism that adopts a certain PD strategy, and each organism plays the iterated
PD with organisms in neighboring cells. After play, each organism decides to either retain its
strategy or adopt that of one of its neighbors, depending on the relative success of the strategies.
In this way, researchers show how cooperative strategies evolve based on the initial distribution
of strategies and the makeup of the population.
In this paper, we use a network approach to answer similar questions about strategy
evolution in the game of Chicken, a matrix game that models phenomena as diverse as species
competition, the California energy crisis, the NATO/Milosevic conflict, and the current war on
terrorism. We make the natural identification between a network and a cellular automaton by
equating cells of the automaton to nodes of a network, and considering two nodes to be adjacent
if they are neighboring cells in the automaton. We label the nodes of the network with the
particular strategies played, have each strategy play those strategies to which it is adjacent (in the
network sense), and then change the labels on the basis of local strategy performance.
Once we think of the strategies as evolving in a network, instead of in the classic spatial
automaton, we can ask questions of structure. In this paper, we suggest various network
structures, and for each network structure we conduct computer simulations to determine the
evolution of strategies in the network.
Network Structure and Strategy Evolution in the Game of Chicken
Introduction
Game theory is used in mathematics, biology, economics, management, communication, and political science. Scientists and researchers in various social and natural science disciplines, from John von Neumann in mathematics, to Robert Axelrod in political science, to Frank Tutzauer in communication, have studied both Chicken and the Prisoner's Dilemma (PD), two distinct and commonly studied games. They have also examined strategies and their evolution in conflict situations (Axelrod, 1980a, 1980b, 1984, 1997; Feeley et al., 1997; von Neumann, 1966).
Modern game theory, commonly attributed to John von Neumann (1928, 1937), has the following features (Luce & Raiffa, 1957, pp. 2-3):
• The possible outcomes of a given situation are well specified, and each player has a consistent pattern of preferences among the outcomes.
• The variables controlling the outcomes are also well defined.
• Each individual player strives to maximize his or her utility.
• Each player knows the preferences of the other player(s).
Prisoner's Dilemma
The PD fits these assumptions with its own defined set of rules. The PD is a matrix game used to show how cooperative or competitive choices influence the result of a decision-making process (Axelrod, 1984). Most notably, it contains a dilemma.
The distinguishing feature of the Prisoner’s Dilemma is that in the short run, neither side
can benefit itself with a selfish choice enough to make up for the harm done to it from a
selfish choice by the other. Thus, if both cooperate, both do fairly well. But if one
defects while the other cooperates, the defecting side gets its highest payoff, and the
cooperating side is the sucker and gets its lowest payoff. This gives both sides an
incentive to defect. The catch is that if both do defect, both do poorly. (Axelrod, 1980a,
p.4)
By encompassing the motivation of both sides to be selfish, while maintaining the
circumstance of a higher payoff with mutual cooperation over mutual defection (see Table 1),
“the Prisoner’s Dilemma embodies the tension between individual rationality … and group
rationality…” (p.4).
Initial PD games, as in Axelrod's (1980a, 1980b) computer tournaments, began with 2x2 games. These allowed for only two players, each of whom can choose between two actions, typically
called cooperation and defection. While this may be helpful to understand the PD itself, “rare is
the conflict in which disputants have only two strategies available to them” (Tutzauer, 1989,
p. 1). From there, researchers progressed to the 5x5 matrix, which expands the choices from merely
cooperate, C, and defect, D, to levels of C and D on a scale of 1 to 5 with 1 being total
cooperation and 5 being total defection (To, 1988). Further expansion of the PD takes us to the
infinite-choice, continuous-time PD, in which players have an infinite number of choices
between 0 (total defection) and 1 (total cooperation) (Feeley, Tutzauer, Rosenfeld, & Young,
1997). This mimics everyday conflicts, which “evolve in continuous time” rather than taking
place at well-defined moves (Tutzauer, 1994, p. 6).
Chicken
The game of Chicken first gained recognition in the 1950s as a game in which teenagers would drive their cars directly at one another. The first to swerve was considered the chicken. The payoffs in
Table 1: Payoff Matrix of the Prisoner's Dilemma

              Cooperate   Defect
Cooperate     3, 3        0, 5
Defect        5, 0        1, 1

Note. In each cell, the first number is the payoff to the row player and the second number is the payoff to the column player.
this case can be considered severe. If neither party swerves, both parties could die (or be critically injured). If one party swerves, that party 'loses face' or faces ridicule from friends and onlookers. In this sense, it is a game of pride and stupidity (Luce & Raiffa, 1957). Chicken can also be examined from the point of view of two players who each have a certain amount of money or power to 'invest', with payoffs following the Chicken payoff matrix (Bornstein, Budescu, & Zamir, 1997; see Table 2).
The investment that each player can choose to make is 2 (points, dollars, or other units).
If neither player invests, both maintain what would have been their initial investment (2
units). If one player invests while the other does not, the investing player receives the
investment of 2 units back plus a bonus of 3 units, while the non-investing player keeps his/her
initial 2 units. If both invest, both lose their 2 units and gain nothing in return.
This payoff matrix differs markedly from that of the PD. Whereas in the PD mutual cooperation is the better choice for the collective (though not for the individual player), the best collective payoff in Chicken comes from an agreement with the other player to alternate, each investing every other turn, thus receiving seven units over two moves rather than four (by both never investing) or zero (by both always investing).
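To make this arithmetic concrete, the short sketch below (written in Python rather than the BASIC of the simulation program described later; the PAYOFF table and helper function are our own illustrative names) totals the Table 2 payoffs over two moves for the three joint policies just described.

# Illustrative sketch only: two-move Chicken totals using the Table 2 payoffs.
PAYOFF = {  # (row invests, column invests) -> (row payoff, column payoff)
    (False, False): (2, 2),
    (False, True): (2, 5),
    (True, False): (5, 2),
    (True, True): (0, 0),
}

def two_move_total(row_moves, col_moves):
    """Sum each player's payoff over a sequence of simultaneous moves."""
    row_total = col_total = 0
    for r, c in zip(row_moves, col_moves):
        pr, pc = PAYOFF[(r, c)]
        row_total += pr
        col_total += pc
    return row_total, col_total

print(two_move_total([True, False], [False, True]))    # alternate investing -> (7, 7)
print(two_move_total([False, False], [False, False]))  # never invest        -> (4, 4)
print(two_move_total([True, True], [True, True]))      # always invest       -> (0, 0)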
Table 2: Payoff Matrix in the Chicken Game

                          Do Not Invest (Swerve)    Invest (Don't Swerve)
Do Not Invest (Swerve)    2, 2                      2, 5
Invest (Don't Swerve)     5, 2                      0, 0

Note. In each cell, the first number is the payoff to the row player and the second number is the payoff to the column player.

Game Theory Usage
According to Fink (1998), game theory can be used to:
• explore theoretical problems that arise from the development of game theory itself;
• analyze actual strategic interactions in order to predict or explain the actions of the actors involved (for example, how can a leader get a follower to do what he wants him to do, or how effective are international sanctions?);
• examine specific cases (such as the Cuban Missile Crisis or other military conflicts); and
• analyze the logical consistency of certain arguments.
Recent articles on current events have increasingly invoked game theory to help predict and explain the circumstances.
The 2001 California energy crisis involved a game of chicken between the California
government and the energy producers (“California’s Giant,” 2001). The government demanded
cooperation from the energy producers in terms of a windfall-profits tax, price caps, price cuts
and payment on the $5.5 billion tab the government claimed the energy producers owed. The
energy producers wanted to avoid price caps and reductions and threatened to stop building
power plants as a result. Each “player” has a certain “unit” to invest and a certain “unit” at stake.
Each is driving head-on at the other. If neither swerves, constituents may face blackouts and higher prices, making the government look bad, while the energy producers will be blamed, look bad to customers, lose money for employees and boards of directors, and see no further investment in additional power plants (a payoff of 0 units each). If the California government swerves while the energy producers do not, the government will look weak (it will not save face; a payoff of 2 units) in the eyes of its constituents, while the energy producers will look good in the eyes of their employees and boards of directors (a payoff of 5 units). If the energy producers swerve, they will have saved face with their customers by lowering prices and preventing blackouts while showing cowardice to their employees and boards of directors (a payoff of 2 units), and the government will be the big winner, having refused to give in (a payoff of 5 units). If both swerve (each with a payoff of 2 units), both will look good to customers and constituents, but neither will gain an advantage over the other, and neither will look as good as it would have by standing firm while the other swerved.
Appendix 1 highlights additional examples of chicken played in various arenas from
economics to politics to communication.
Based on the assumptions of game theory mentioned earlier, why is it difficult to predict behavior? According to Luce and Raiffa (1957), in bargaining, negotiating, mediating, and arbitrating, there is a conflict or struggle between at least two interdependent parties who perceive:
• incompatible goals,
• scarce rewards, and
• interference from the other party in achieving those goals.
Bargaining, negotiating, mediating, and arbitrating involve a series of decisions based on strategies. A 'player' must choose a strategy that covers all possible special circumstances that may arise. It is difficult to specify the strategy sets available because strategies may be modified during play. There must also be a specification of time, as well as the possibility of coalition. Neither side has complete control over the bargaining situation, and the outcomes are determined by a series of battles whose timing is important (Luce & Raiffa, 1957).
Conflict can be resolved by conciliation and/or collusion (Luce & Raiffa, 1957). To date, computer simulations have not been able to take these considerations into account. Most deal with one programmed strategy that does not change; that strategy then plays another strategy that is programmed to be just as rigid. To be more representative of real-life negotiation circumstances, a program must allow its strategies to change, adapt, or evolve, depending on the situation. Examining game theory in the context of a cellular automata matrix can do this.
Cellular Automata
The cellular automaton was first proposed by von Neumann (1966), who is considered the father of modern game theory, in an effort to examine "the important similarities between computers and natural organisms" (p. 18). His "theory of automata," developed from his knowledge of artificial and natural systems, was intended to examine (von Neumann, 1966, p. 21):
• a coherent body of concepts and principles concerning the structure and organization of both natural and artificial systems;
• the role of language and information in such systems; and
• the programming and control of such systems.
By seeing economic systems as natural systems and games as artificial systems, von Neumann also drew a connection between his theory of automata and game theory: "The theory of games contains the mathematics common to both economic systems and games just as automata theory contains mathematics common both to natural and artificial automata" (1966, p. 19 or 92?). He posed the following questions about automata:
• Logical universality – when is a class of automata able to perform all logical operations that are performable with finite means?
• Constructability – can an automaton be constructed by another automaton?
• Construction-universality – can one automaton construct every other automaton?
• Self-reproduction – can one automaton construct an identical automaton?
• Evolution – can the construction of automata by automata progress from simpler types to increasingly complicated types (or from less efficient to more efficient)?
In keeping with game theory, he noted that conflicts between organisms lead to consequences (natural selection), a mechanism of evolution; thus, conflict brings about evolution (von Neumann, 1966).
Ian Stewart offers a simpler framing of automata theory, applying it to ecology and population dynamics: "Cellular automata are composed of a large grid of squares, called cells, and each square is governed by its own 'laws of nature'" (Stewart, 1998, p. 36).
Axelrod developed a quite similar theory, the Landscape Theory of Aggregation (Axelrod, 1997). This theory organizes elements by putting highly compatible elements together and less compatible elements elsewhere. He has used it to predict social circumstances such as military and corporate alliances, social networks, and organizational structures (Axelrod, 1997).
Additionally, Axelrod developed a model to discover complex and simple strategies adapted to a complex environment. The genetic algorithm (Axelrod, 1997, p. 15) can be used in the PD environment of his computer tournaments or in the Chicken games of the present study's cellular automata network structures. The genetic algorithm involves the following steps (Axelrod, 1997, p. 15), illustrated in the sketch after this list:
• the specification of an environment in which the evolutionary process can operate;
• the specification of the genetics, including the way information is translated into a strategy;
• the design of the experiment to study the effects of alternative realities; and
• the running of the experiment for a specified number of generations on a computer.
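As a rough illustration of these four steps (not Axelrod's actual implementation; the genome encoding, the fixed alternating opponent, and all parameter values below are assumptions made for demonstration), a minimal genetic algorithm in Python might look like this:

import random

PAYOFF = {(False, False): (2, 2), (False, True): (2, 5),
          (True, False): (5, 2), (True, True): (0, 0)}  # Chicken payoffs from Table 2

GENOME_LEN = 12       # genetics: bit t == 1 means "invest on turn t"
POP_SIZE = 20
GENERATIONS = 50      # experiment: number of generations to run
MUTATION_RATE = 0.02

def opponent_invests(turn):
    # Environment: a fixed opponent that invests every other turn.
    return turn % 2 == 0

def fitness(genome):
    # Total Chicken payoff earned by the genome against the fixed opponent.
    return sum(PAYOFF[(bit == 1, opponent_invests(t))][0]
               for t, bit in enumerate(genome))

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]     # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best))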
Cellular automata can also “tell us a lot about self-organization. …Starting from a
primordial soup of initial conditions, these rules show extremely simple ways to generate
complex structures. From a complete mess, one can watch organization emerge” (Lipkin, 1994,
p.109).
Current Study
In this paper, we use a network approach to answer similar questions about strategy
evolution in the game of Chicken, a matrix game that models phenomena as diverse as species
competition, the California energy crisis, the NATO/Milosevic conflict, and the current war on
terrorism. We make the natural identification between a network and a cellular automaton by
equating cells of the automaton to nodes of a network, and considering two nodes to be adjacent
if they are neighboring cells in the automaton. We label the nodes of the network with the
particular strategies played, have each strategy play those strategies to which it is adjacent (in the
network sense), and then change the labels on the basis of local strategy performance.
Method
Seven general strategies were studied in a cellular automata-like network in order to model strategy evolution in varying network structures. Computer simulations were then performed over time, with competing strategies 'evolving' toward the most successful strategies in the 'cells' around them.
The cellular automata program written by Tutzauer sets up these cells on the computer. Each cell contains one strategy, and each cell 'plays' Chicken against all cells adjacent to it, based on the payoff matrix in Table 2.
Figure 1: Example of a Cellular Automaton

1 2 3 4 5
2 3 4 5 1
3 4 5 1 2
4 5 1 2 3
5 1 2 3 4

Note. Strategy 5 is playing against all strategies surrounding it (its local neighborhood). After one generation of play (a generation is the time it takes for each cell in the matrix to play each surrounding cell throughout the matrix), the cell containing strategy 5 will either remain with strategy 5 or, depending on the payoff results of its neighboring cells, adopt another strategy. This will depend on which of its neighbors, including itself, had the highest payoff.
After each cell plays Chicken with each of its neighbors (signifying one generation), each strategy then 'evolves' to the strategy that earned the most points in its Chicken games. A miniature version of the cellular automaton, with strategies numbered 1 through 5, is represented in Figure 1. This program closely resembles the cellular automata discussed by von Neumann (1966) and the genetic algorithm outlined by Axelrod (1997).
Tutzauer's program is able to determine how strategies will evolve and whether a pattern or periodicity emerges. As well, the program may act as a precursor for further research predicting outcomes involving military and organizational conflict and other areas of communication conflict. The cellular automata program developed by Tutzauer (19?) is written in the BASIC computer language. The following parameters can be set and adjusted (an illustrative configuration sketch follows the list):

• Any number of strategies can be run in the program, and any combination of strategies can be run. For example, if 25 strategies are programmed, all 25 can be run at once or selected strategies can be run against each other. (In the current study, seven strategies were run.)
• Any combination of starting strategies can be selected. (In the current study, a random setup was used.)
• Any percentage of starting strategies can be set. (In the current study, all strategies being played are present initially in equal proportions.)
• The nodes/cells in the network can be predetermined, as can the structure of the automaton. (In the current study, five different structures were used.)
• Any number of generations can be run.
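Although the program itself is written in BASIC and its exact options are not reproduced here, a hypothetical configuration along the lines of the parameters above might look as follows (the key names and values are illustrative only, not the program's actual syntax):

# Hypothetical configuration mirroring the adjustable parameters listed above.
config = {
    "strategies": [1, 2, 3, 4, 5, 6, 7],   # which strategies participate
    "initial_layout": "random",            # random starting placement (as in the current study)
    "initial_proportions": "equal",        # every strategy equally represented at the start
    "structure": "spatial_grid",           # one of the five network structures studied
    "grid_size": (200, 200),               # nodes/cells in the network
    "max_generations": 5000,               # upper bound on generations per run (illustrative)
}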
For this experiment, the simulations were run using seven strategies (a sketch combining these strategies with the evolution rule is given below):
• Strategy 1 – Invest every turn
• Strategy 2 – Invest every other turn
• Strategy 3 – Invest every third turn
• Strategy 4 – Invest every fourth turn
• Strategy 5 – Invest every fifth turn
• Strategy 6 – Invest every sixth turn
• Strategy 7 – Never invest

Figure 2: Representation of a Spatial Grid
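The sketch below combines the seven strategies just listed with the evolution rule described above: each generation, every node plays one round of Chicken with each neighbor, and each node then adopts the strategy of whichever member of its local neighborhood (itself included) earned the most points. It is written in Python rather than the program's BASIC, the function names are ours, and details such as turn numbering, one round per neighbor per generation, and arbitrary tie-breaking are assumptions.

PAYOFF = {(False, False): (2, 2), (False, True): (2, 5),
          (True, False): (5, 2), (True, True): (0, 0)}  # Table 2 payoffs

def invests(strategy, turn):
    # Strategies 1-6 invest every k-th turn; Strategy 7 never invests.
    return strategy != 7 and turn % strategy == 0

def one_generation(strategy_of, neighbors_of, turn):
    # Play a round of Chicken along every edge and total each node's points.
    score = {node: 0 for node in strategy_of}
    for node, neighbors in neighbors_of.items():
        for other in neighbors:
            mine, _ = PAYOFF[(invests(strategy_of[node], turn),
                              invests(strategy_of[other], turn))]
            score[node] += mine
    # Each node then copies the best-scoring strategy in its neighborhood,
    # including its own (ties broken arbitrarily by max()).
    return {node: strategy_of[max([node] + list(neighbors), key=score.get)]
            for node, neighbors in neighbors_of.items()}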
The cellular automata matrix structure (shown in Figure 1) can easily be translated into
various network structures. The simplest, the Spatial Grid (see Figure 2), shows the
interconnectedness of the neighbors in the matrix. Note that the inner nodes of the grid play
eight neighbors while the edges play five and the four corners play only three.
The second structure examined is a Regularized Spatial Grid (see Figure 3). This structure closely resembles the Spatial Grid; however, the edges and corners have been "regularized" to
play eight other neighbors. In this structure, all nodes are connected to eight other nodes. In both
the Spatial and Regularized Grids, the network consisted of a 200 x 200 matrix.
The Bipartite Network structure represented in Figure 4 corresponds to a “segregated”
network pattern of 100 nodes for each “side.” A side can represent gender (male/female chicken
encounters), religion (Protestant/Catholic chicken interactions), or two-party politics (Democrats/Republicans), to name a few. Each player in this network plays Chicken against three
members of the opposite partite, and none in its own partite.
Figure 5 represents the Star Network. It is characterized by a radial or wheel-like network in
which the central figure in the star plays Chicken against eight neighbors while the outside
nodes play only one and the inner nodes, two. The Star is composed of # of nodes and
branches.
Figure 3: Representation of the Regularized Spatial Grid Structure

Figure 4: Representation of the Bipartite Network Structure

Figure 5: Representation of the Star Network Structure
The 400-node Chain Network is the final structure and is represented in Figure 6. The ends
of the chain play only one other node, while the inner nodes play two others. This structure is
similar to the Star Network, but without the central node figure.
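As a sketch of how these structures can be encoded for such a simulation (again illustrative Python with function names of our own; in particular, wrapping the grid into a torus is only one way to "regularize" the edges and may not match the authors' construction), neighbor lists for the grid and chain structures can be built as follows; the bipartite and star structures can be built analogously.

def grid_neighbors(rows, cols, wrap=False):
    # Moore (eight-cell) neighborhoods. wrap=False gives the Spatial Grid
    # (corners have 3 neighbors, edges 5); wrap=True gives a regularized grid
    # in which every node has 8 neighbors.
    neighbors = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if wrap:
                        rr, cc = rr % rows, cc % cols
                    elif not (0 <= rr < rows and 0 <= cc < cols):
                        continue
                    nbrs.append((rr, cc))
            neighbors[(r, c)] = nbrs
    return neighbors

def chain_neighbors(n):
    # A chain of n nodes: the end nodes play one neighbor, inner nodes play two.
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}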
Each simulation was run in five episodes (with differing starting values/initial conditions) for each of the five network structures. Each simulation ran until the program detected a periodic pattern. After each generation, the number and types of strategies still existing were recorded for later examination. Upon completion of a run, the number of steps required to find a period, the period itself (how long the repeating pattern/string is), and the strategies left after periodicity was reached were recorded.
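One simple way to detect such periodicity (an illustrative sketch, not necessarily the program's actual method; the phase bookkeeping for the cyclic strategies is our assumption) is to record each generation's full strategy configuration and stop when one repeats; the gap between the two occurrences is the period.

from math import lcm  # Python 3.9+

def run_until_periodic(initial_state, step, max_generations=100000):
    # `step` is a function mapping (state, turn) to the next state, e.g. the
    # one_generation helper sketched earlier with a fixed neighbor structure.
    phase_length = lcm(*range(1, 7))   # strategies 1-6 all cycle within 60 turns
    seen = {}                          # (configuration, phase) -> generation first seen
    state = initial_state
    for turn in range(max_generations):
        key = (tuple(sorted(state.items())), turn % phase_length)
        if key in seen:
            steps_to_period = turn
            period = turn - seen[key]
            survivors = sorted(set(state.values()))
            return steps_to_period, period, survivors
        seen[key] = turn
        state = step(state, turn)
    return None   # no period detected within the generation budget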
Results
Table 3 summarizes the results. The Spatial Grid reached a period after 117,
84, 180, 115 and 142 steps, with periods of 12, 24, 66, 2 and 4. Strategy 1 (Always Invest)
survived in all cases with strategies 3, 4, 5, 6 and 7 surviving in different runs. A graphical
example of this periodicity for run number 1, which achieved a 12-cycle period, can be seen in
Figure 7. All strategies were replaced fairly quickly by strategies 1 and 7, which didn’t reach a
period until generation 117. At this point, it can be seen that the population size fluctuates from
larger to smaller depending on the generation and population of the other strategy.
The Regularized Spatial Grid reached periods of 2 and 12 after 97, 130, 169, 180 and 107
generations. The strategies remaining always included Strategy 1 (always invest), with Strategy
7 in the first run and Strategy 5 in all subsequent runs.
Figure 6: Representation of the Chain Network
The Bipartite Network structure took a large number of generations (3790, 1467, 2204, 3062, and 1435) to reach a periodicity of 6 or 2. Again, Strategy 1 dominated, with Strategy 7 also present in run 2, and Strategy 5 present in all other runs. The 6-cycle period of run 5 can be seen in Figure 8. Only Strategies 1 and 5 are present; all other strategies died shortly after the start of the simulation.
The Star Network reached a 2-cycle each time in a relatively few generations (64, 34, 58,
85, and 56). Most strategies survived in some number. Strategies 1 through 6 survived in most
cases, with Strategies 1 through 5 surviving in run 2. Figure 9 shows the 2-cycle (with all strategies, or just the two bottom dwellers?).
The Chain Network reached a 2-cycle each time in a varied number of generations (130,
53, 132, 173, and 70). Again, most strategies were present, from Strategies 1 through 6 in runs
1, 4 and 5, and Strategies 1 through 5 in runs 2 and 3.
Conclusions
We found several patterns across all the network structures studied, as well as patterns
that were unique to each structure.
Across All Network Structures
Across all the network structures, all systems seemed to converge in “short” times with
“small” periods. Strategy 1 (Always Invest) does extremely well and is represented in all runs
for all network structures. In each case, the network system evolves to an even periodicity. No
fixed points were present in this simulation.
Table 3: Results

Structure                   Run   Number of Steps   Period   Strategies Left
Spatial Grid                1     117               12       1, 7
                            2     84                24       1, 3, 5
                            3     180               66       1, 5
                            4     115               2        1, 5, 6
                            5     142               4        1, 5, 4
Regularized Spatial Grid    1     97                2        1, 7
                            2     130               12       1, 5
                            3     169               2        1, 5
                            4     180               2        1, 5
                            5     107               12       1, 5
Bipartite Network           1     3790              6        1, 5
                            2     1467              6        1, 7
                            3     2204              2        1, 5
                            4     3062              6        1, 5
                            5     1435              6        1, 5
Star Network                1     64                2        1-6
                            2     34                2        1-5
                            3     58                2        1-6
                            4     85                2        1-6
                            5     56                2        1-6
Chain Network               1     130               2        1-6
                            2     53                2        1-5
                            3     132               2        1-5
                            4     173               2        1-6
                            5     70                2        1-6

Note. Number of Steps = how many steps were needed to find a period; Period = how long the repeating string is; Strategies Left = which strategies survived after the simulation was complete.
Figure 7: Graph of 12-Cycle in Spatial Grid

Figure 8: Graph of 6-Cycle in Bipartite Network

Figure 9: Graph of 2-Cycle in Star
Between Networks
The Star and Chain Network structures evolved small, even periods of a two-cycle. The
Bipartite and Regularized Spatial Grids evolved short to medium even periods (between 2 and 12
cycles). The Spatial Grids evolved even periods ranging from short to very long (between 2 and
66 cycles).
The Star Network structures evolved to periodicity in a short number of generations
(between 34 and 85 generations). Chains, Regularized and Spatial Grids reached a periodicity in
a medium number of generations (from 53 to 180). It took Bipartite Networks a large number of
generations (from 1435 to 3790) to evolve.
Star and Chain Networks managed to keep all strategies alive except Strategy 7 (never
invest). Strategy 1 (always invest) dominates while the others barely hang on. The Spatial Grid
allowed two to three strategies to challenge Strategy 1. With Regularized Spatial Grids and
Bipartite Networks, a strong competitor (usually Strategy 5) emerges to challenge Strategy 1.
Discussion
This study was a preliminary foray into simulations of the Chicken game on network structures. Further research will include examining different strategies, modeled after the games of Chicken represented in Appendix 1, to determine whether, in different network structures, certain strategies fare well in bargaining/negotiating situations.
Limitations
This was a preliminary study. The network structures and strategies that were used were
not theory driven. Hence, they were primitive in their scope. However, this study was used
primarily to understand the basic operations of the cellular automaton as a network structure, and
it served that purpose. Thus, the results present many avenues for future study.
Future Research
A future simulation could involve non-random starting patterns with the basic strategies
used in this paper and further strategies developed. These simulations will be run using
structures as in this paper, except the strategy pattern at the onset will vary. The configurations
of strategies will be determined in a couple of ways. First, a simulation run will begin with
clumped strategies evenly distributed and represented. Patterns can also be based on ending
patterns of this simulation in terms of strategy configurations present. For example, if Strategy 1
were dominant, then it would be a dominant cluster in this simulation with a proportional
representation as its ending pattern. With this in mind, some strategies may not be represented at
all since, presumably, strategies will have become extinct from the original simulation. Also,
any combination of ‘random’ cluster configurations can be run.
The second avenue to pursue in simulations is that of adding a threat potential. This process would be run again, but this time a weight would be added to the strategies that are successful. Bargaining and negotiating rarely take place on an even playing field with a clean slate. The slate could be kept clean (that is, all strategies begin with zero points), or different configurations of threat potential could be used. Once one strategy reaches a certain point value or obtains a certain percentage of a lead (say, 10%), the payoff matrix changes. A potential change could be as follows (assuming the row player is the player in the lead, giving him or her the greater threat potential):
                          Column player
                          Don't Invest    Invest
Row player
Don't Invest              2.5 / 1.5       2.5 / 4.5
Invest                    5.5 / 1.5       0.5 / -0.5

(First number in each cell: payoff to the row player, who holds the lead; second number: payoff to the column player.)
This is just an example. The actual payoff matrix change could be based on percentage
of the lead, or population size of the strategies after each generation. The simulation will be run
as above with random placement and then non-random clusters placed.
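A sketch of how such a threat-potential rule could be attached to the payoffs (illustrative only; the 0.5-unit shift mirrors the example matrix above and the 10% lead criterion follows the text, but any weighting could be substituted):

def row_leads(row_score, col_score, threshold=0.10):
    # The row player "leads" once its cumulative score is at least 10% higher.
    return col_score > 0 and (row_score - col_score) / col_score >= threshold

def adjusted_payoff(base_payoff, leader_is_row, bonus=0.5):
    # Once one side leads, shift every cell of the Table 2 payoffs: the leader
    # gains `bonus` and the trailing player loses it, as in the example above.
    row, col = base_payoff
    if leader_is_row:
        return row + bonus, col - bonus
    return row, col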
Borrowing from the MacArthur-Wilson theory of island biogeography, "barriers" could be inserted into the network structures to determine whether a different evolution occurs when an obstacle is present. It would also be valuable to examine whether or not the size of the network structure makes a difference in strategy evolution. Varying the size of the network matrix, for example from 200 x 200 to 100 x 100 or 400 x 400, would likely change the results. Ideally, the matrix would be made as large as possible so that the cellular automaton environment, which by definition is finite, is as illustrative of reality as possible.
Finally, examining the possibility of an evolving structure within the automaton itself would be fruitful. By expanding the interpretations of the traditional cellular automaton and manipulating it in as many ways as theoretically possible, future work only stands to expand the knowledge provided through simulation.
Appendix 1
Examples of Chicken Game:
California Energy Crisis (“California’s Giant,” 2001)
The 2001 California energy crisis involved a game of chicken between the California
government and the energy producers. The government demanded cooperation from the energy
producers in terms of a windfall-profits tax, price caps, price cuts and payment on the $5.5
billion tab the government claimed the energy producers owed. The energy producers wanted to
avoid price caps and reductions and threatened to stop building power plants as a result. Each
"player" has a certain "unit" to invest and a certain "unit" at stake. Each is driving head-on at the other. If neither swerves, constituents may face blackouts and higher prices, making the government look bad, while the energy producers will be blamed, look bad to customers, lose money for employees and boards of directors, and see no further investment in additional power plants (a payoff of 0 units each). If the California government swerves while the energy producers do not, the government will look weak (it will not save face; a payoff of 2 units) in the eyes of its constituents, while the energy producers will look good in the eyes of their employees and boards of directors (a payoff of 5 units). If the energy producers swerve, they will have saved face with their customers by lowering prices and preventing blackouts while showing cowardice to their employees and boards of directors (a payoff of 2 units), and the government will be the big winner, having refused to give in (a payoff of 5 units). If both swerve (each with a payoff of 2 units), both will look good to customers and constituents, but neither will gain an advantage over the other, and neither will look as good as it would have by standing firm while the other swerved.
NATO/Milosevic (Hirsh, 1999)
In the conflict between Milosevic and NATO, neither side wanted to concede/swerve (both sides
were driving very quickly and dangerously toward one another). This may be the example closest to the original intent of the game of Chicken. In addition to the obvious loss of life by both
sides and other atrocities, to concede defeat would effectively require that one side lose face in
world politics. If neither side swerved and continued toward one another, devastation of life and
land would result and neither player would fare well in the eyes of the world (0 units). If
Milosevic surrenders, he will certainly lose face with his followers and perhaps look the coward
in the world arena (2 units) while the NATO forces would emerge victorious having defeated
such an enemy (5 units). If NATO swerves, they will look the coward and will lose much power
in the world arena (2 units) while Milosevic will have emerged victorious and look glorious to
his followers (5 units).
Game of Chicken in Detroit (Zuckerman, 1988)
In Detroit, Knight-Ridder and Gannett wanted to join the forces of their two papers into one to
save money for both corporations. This situation would have led to a monopoly and the risk of
losing an additional editorial voice (leaving Detroit a one paper city). The Attorney General
stepped in to preserve the editorial voice of the two papers by proposing a joint operating
agreement, preventing a monopoly situation. If the Attorney General and the media giants did not reach an agreement, both would lose by remaining at a standstill: the papers would not consolidate, costing the companies money, and the Attorney General would not have reached any agreement (0 units). If Knight-Ridder and Gannett concede, they lose the opportunity to consolidate fully but may operate under a joint operating agreement allowing for some profit (2 units), while the Attorney General would have prevented a monopoly and preserved the editorial voices of the city of Detroit (5 units). If the Attorney General swerves while the media does not, he loses an editorial voice (2 units) while Knight-Ridder and Gannett consolidate and save money (5 units).
References
Axelrod, R. (1980a). Effective choice in the Prisoner’s Dilemma. Journal of Conflict
Resolution, 24, 3-25.
Axelrod, R. (1980b). More effective choice in the Prisoner's Dilemma. Journal of Conflict
Resolution, 24, 379-403.
Axelrod, R. (1984). The Evolution of Cooperation. New York, NY: Basic Books, Inc.,
Publishers.
Axelrod, R. (1997). The Complexity of Cooperation. Princeton, NJ: Princeton University Press.
Bornstein, G., Budescu, D., & Zamir, S. (1997). Cooperation in intergroup, N-person, and two-person games of Chicken. Journal of Conflict Resolution, 41, 384-406.
California's Giant Game of Chicken. (2001, June 18). Business Week, 3737, p. 40.
Feeley, T., Tutzauer, F., Rosenfeld, H. L., & Young, M. J. (1997). Cooperation in an infinite-choice, continuous-time Prisoner's Dilemma. Simulation & Gaming, 28, 442-459.
Hirsh, M. (1999, July 26). NATO’s Game of Chicken: Victory over Milosevic – never really in
doubt – was actually a pretty close call. Newsweek, 134, 58.
Lipkin, R. (1994). Intricate Patterns from Cellular Automata. Science News, 146, 109.
Luce, R.D. & Raiffa, H. (1957). General Introduction to the Theory of Games. Games and
Decisions: Introduction and Critical Survey. New York: Wiley.
Stewart, I. (1998). Rules of Engagement. New Scientist, 159, 36.
To, T. (1988). More realism in the Prisoner's Dilemma. Journal of Conflict Resolution, 32, 275-292.
Tutzauer, F. (1989). Offer dynamics in an infinite-choice, continuous-time Prisoner's Dilemma. Paper presented at the annual meeting of the Speech Communication Association, San Francisco.
von Neumann, J. (1928)
von Neumann, J. (1937)
von Neumann, J. (1966). Theory of Self-Reproducing Automata. Urbana, IL: University of
Illinois Press.
Zuckerman, L. (1988, February 1). A game of chicken in Detroit: Knight-Ridder threatens to
close down a venerable paper. Time, 131, 60.