
THE ROLE OF CONTEXT AND TEAM PLAY IN CROSS-GAME
LEARNING
David J. Cooper
Florida State University
John H. Kagel
Ohio State University
Abstract
One of the dividing lines between economics and psychology experiments is that economists favor
abstract context while psychologists favor meaningful context. We investigate the effects of meaningful
versus abstract context on cross-game learning in a signaling game experiment. With individual decision
makers (1x1 games) meaningful context promotes positive cross-game learning in moving from a pooling
equilibrium to a separating equilibrium, while abstract context yields negative cross-game learning. In
1x1 games a change in the (meaningful) context which accompanies “superficial” changes in the game
stalls the learning process compared to an abstract context that does not change. In contrast, with two
person teams the same change in meaningful context has no disruptive effect on strategic play, with teams
also having substantially higher levels of strategic play than the 1x1 games. We relate the effects of
meaningful versus abstract context on cross-game learning to the psychology literature on deductive
reasoning processes (JEL C72, C92, D82, L12).
The editor in charge of this paper was Patrick Bolton.
Acknowledgements: Research support from the National Science Foundation (SES – 0451981) is gratefully acknowledged.
Helpful research assistance has been provided by Kirill Chernomaz, John Lightle, and Susan Rose. Jo Ducey has provided
valuable editorial assistance. We have received helpful comments from seminar participants at the ASSA meetings, the ESA
meetings in Tucson, the IMEBE meetings in Valencia, the London School of Economics, Michigan State University, the Ohio
State University, and the University of Copenhagen, as well as from two referees and the editor of this journal. Any opinions, findings and
conclusions or recommendations in this material are those of the authors and do not necessarily reflect the views of the National
Science Foundation.
E-mail Addresses: [email protected] (Cooper); [email protected] (Kagel)
1. Introduction
Cross-game learning—the ability to take what has been learned in one game and apply it in a
related game—is an integral but largely unexplored aspect of learning in games. Virtually all
papers on learning in games employ a stationary environment in which agents have no relevant
experience with related games. However, suppose these agents must learn across a series of
related games rather than facing a sequence of identical games. Without substantial cross-game
learning, what has been learned in one game may have to be completely relearned even in a
closely related game. More generally, we cannot understand how individuals will react to new or
changed institutions, which is a critical issue in applying game theory to real-world settings,
without an understanding of how subjects draw upon experiences with games related to their new
situation.
The relevant psychology literature suggests we should be pessimistic about the possibility
of positive cross-game learning. But this issue has been little explored in economic contexts. The
experiments reported in this paper study factors that are likely to affect cross-game learning: the
use of meaningful versus abstract context and team versus individual play.
An important distinction between conventional practices in economic and psychology
experiments is the use of abstract labels in the former versus natural (meaningful) labels in the
latter. Economists tend to prefer abstract labels on the grounds that meaningful labels might
elicit unintended responses with a resultant loss of experimental control. There are no anticipated
costs to this methodology because, within standard economic theory, the game’s mathematical
structure is all that matters.1 However, psychologists have found that meaningful labels can have
a dramatic positive effect on subjects’ ability to solve problems that involve deductive reasoning,
even when the labels are unrelated to situations that the subjects have directly experienced. This
is attributed to individuals’ reliance on mental models rather than pure logic to solve deductive
reasoning problems. In other words: Given a set of premises, rather than fully reasoning through
to conclusions that are consistent with these premises, individuals develop a simplified model of
the situation and evaluate what conclusions are most likely given the premises and their mental
model.2 Within this framework, meaningful (rather than abstract) context can improve subjects’
ability to solve deductive reasoning problems—assuming the context is appropriate to the
situation at hand—by providing shortcuts to reaching valid conclusions (Johnson-Laird 1999).
This mental model framework suggests that meaningful context that is appropriate to the
situation at hand can speed up the emergence of strategic play across games. We report on three
experiments exploring this issue, all within a simplified version of Milgrom and Roberts’s (1982)
entry limit pricing game. Strategic play in this game involves an incumbent monopolist
attempting to deter entry by signaling it will be a tough competitor for a potential entrant. Past
experiments have shown that strategic play emerges slowly in this game, with most monopolists
initially ignoring the strategic implications of their choices on entrants’ responses (Cooper,
Garvin, and Kagel 1997a, b). The limit pricing game provides a rich environment for studying
cross-game learning because, with small changes in the payoff tables, the game converges to
1. But see Gilboa and Schmeidler (1995), Jehiel (2005), and Huck, Jehiel, and Rutter (2006) for alternative approaches that take account of context within standard game-theoretic models.
2. Consider the following example from Johnson-Laird (1999): “All Frenchmen are wine drinkers. Some of the wine drinkers are gourmets. Therefore some Frenchmen are gourmets.” This syllogism is invalid. If subjects relied on pure logic then they should recognize this, but many do not. This is quite comprehensible within a mental models framework. If I believe that Frenchmen are representative of wine drinkers, the conclusion is almost certain to be true. This example also serves to illustrate the importance of context: if “Frenchmen” is replaced by “steelworkers”, the example becomes far less compelling.
either a pooling or a separating equilibrium and in both cases strategic play is clearly identifiable.
This allows us to confront subjects with closely related games that require quite different
strategic actions.
Experiment 1 looks at the effect of meaningful versus abstract context on facilitating
transfer of strategic play between a pooling and a separating equilibrium. Subjects start out
playing a version of the game that reliably converges to a pooling equilibrium. The payoff tables
are then changed so that the only pure-strategy equilibria remaining are separating equilibria.
Strategic play in the new game requires a change in actions but draws on the same underlying
logic as in the original game. With abstract context, negative cross-game learning occurs: following the crossover, subjects display less strategic play than subjects with no previous
experience with the same game. In contrast, with meaningful context there is strong positive
cross-game learning: In the first periods following the crossover, subjects with previous
experience in the pooling equilibrium exhibit almost three times as much strategic play as
inexperienced subjects. Within the mental model framework, the use of meaningful context
provides a superior framework for thinking through the implications of the change in entrants’
payoffs. We discuss the possible basis for this effect.
Experiment 2 explores the possibility that changes in meaningful context, in conjunction
with “superficial” changes in the game’s structure, can adversely affect cross-game learning. We
do this by changing the labels used to frame the game along with changes in the game’s structure
that leave it isomorphic to the original game. The change in labels generates a pause in subjects’
learning compared to a control group that faces the same superficial changes in the game’s
structure but no change in the (abstract) context. This is consistent with the mental model view
of deductive reasoning, since subjects must not only adjust to the superficial changes in the
game’s payoffs but must also realize that their existing mental model of the game applies to the
new context. Changing the labels muddies the waters sufficiently that subjects have trouble
drawing on their previous experience. Experiment 2 suggests that moving from the entry limit
pricing game to, for example, Spence’s (1973) education game is unlikely to generate positive
cross-game learning (psychologists refer to this as “far transfer”), because subjects are unlikely
to realize the relevance of the first game to strategic play in the second game.
Experiment 3 replicates Experiment 2 using two-subject teams as players. Previous
research shows that two-player teams are far more strategic than individuals in the limit pricing
game (Cooper and Kagel 2005). The results of Experiment 2 provide a perfect opportunity to
extend this result. We find that, unlike for individuals, the change in meaningful context has no effect
on the learning process for teams. Team dialogues show that subjects see through the change in
labels as essentially producing no change in the nature of the game. Furthermore, strategic play
of teams far surpasses what was found in 1x1 games, beating the demanding “truth wins”
benchmark (Lorge and Solomon 1955) and thus indicating that team play generates significant
synergies in the development of strategic play.
The paper is organized as follows. Section 2 outlines the basic design and procedures
used in all three experiments and provides a summary of results from earlier, closely related,
signaling game experiments. This summary is designed to help the reader understand how the
present paper fits into our research agenda on learning in games. Section 3 provides the details
underlying the design of Experiment 1 along with its results and the motivation for Experiment
2. Section 4 reports the change in procedures and the design of Experiment 2 along with its
results, and Section 5 does the same for Experiment 3. Section 6 summarizes the main results
and discusses a variety of issues, including the applicability of the results reported here to field
settings.
2. Experimental Methods and Results from Related Experiments
The experiments all employ a stylized version of Milgrom and Roberts’s (1982) entry limit
pricing game that focuses on the signaling aspects of the game. Our game, which is played
between an incumbent monopolist (M) and a potential entrant (E), proceeds as follows.
1. M observes its type, high-cost (MH) or low-cost (ML), where the two types are realized
with equal probabilities that are common knowledge.
2. M chooses a quantity (output) whose payoff is contingent on the entrant’s (E’s) response
(see Table 1a).
3. E sees this output, but not M’s type, and either enters or stays out.
The asymmetric information—in conjunction with the profitability of entering against MHs but
not against MLs—provides an incentive for strategic play (limit pricing).3
[Insert Table 1 about here]
The game is played with two types of Es: high-cost types (Table 1b), for which there
exist both pure-strategy pooling and separating equilibria, and low-cost types (Table 1c), for
which there exist no pure-strategy pooling equilibria. Only one type of potential E is present in
any given play of the game, and changes in the type of E are a treatment variable of primary
interest. Past research shows that, with high-cost Es, play reliably converges toward a pooling
equilibrium at output level 4 (Cooper, Garvin, and Kagel 1997a, b). This is supported by two
facts: (i) E’s expected value of OUT is greater than IN (250 versus 187), so that pooling deters
entry; and (ii) out-of-equilibrium beliefs that any deviation involves an MH type with
sufficiently high probability to induce entry.4 In games with low-cost Es, play reliably converges
to the efficient separating equilibrium where MLs distinguish themselves by choosing 6 while Es
play OUT and where MHs choose 2 while Es play IN (Cooper, Garvin, and Kagel 1997a, b).
This result is supported by out-of-equilibrium beliefs that any choice of less than 6 indicates an
MH with sufficiently high probability that entry is induced.
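The entry calculus behind (i) reduces to a simple expected-value comparison. The sketch below illustrates it; the per-type entry payoffs (330 and 44) are hypothetical placeholders, since only their 0.5-weighted average (187) and the payoff of OUT (250) are reported in the text:

```python
# Sketch of E's entry decision under the 50/50 prior over M's type.
# pay_in_mh and pay_in_ml are hypothetical stand-ins for Table 1 entries;
# only their 0.5-weighted average (187) and the OUT payoff (250) are
# reported in the text.
def entrant_best_response(p_mh: float, pay_in_mh: float,
                          pay_in_ml: float, pay_out: float) -> str:
    """Enter iff the expected payoff of IN exceeds the payoff of OUT."""
    ev_in = p_mh * pay_in_mh + (1 - p_mh) * pay_in_ml
    return "IN" if ev_in > pay_out else "OUT"

# At the pooling output both types act alike, so beliefs stay at the prior
# and entry is deterred: 0.5 * 330 + 0.5 * 44 = 187 < 250.
print(entrant_best_response(0.5, 330, 44, 250))  # OUT
# A deviation that convinces E it faces an MH for sure induces entry.
print(entrant_best_response(1.0, 330, 44, 250))  # IN
```

The same comparison, with beliefs shifted away from the 0.5 prior by out-of-equilibrium choices, drives the entry responses that support the separating equilibrium.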
Experiments with individual decision makers employ between 12 and 16 subjects, with
half of them playing as Ms and the balance as Es. A round-robin format is used wherein each M
meets a different E for up to six plays of the game, after which the Es and Ms switch roles and
play another set of up to six games. For a given session with inexperienced subjects, this process
repeats itself for a minimum of 24 games.
Earlier experiments in 1x1 games (Cooper, Garvin, and Kagel 1997a, b) reveal a
consistent dynamic to the learning process regardless of whether the game starts with high- or
low-cost Es: In early periods, both MHs and MLs largely ignore the strategic possibilities of the
game, choosing their “myopic maxima”—the output that maximizes payoffs ignoring the threat
3. The original Milgrom and Roberts game has two stages. We collapse stage 2—what happens in response to E’s decision to enter (the two share the market) or stay out (M plays as an uncontested monopolist)—into the payoffs in Table 1a. This greatly simplifies the experimental design and focuses subjects’ attention on the game’s signaling aspects.
4. There are pooling equilibria at output levels 1–5, each of which is supported by out-of-equilibrium beliefs that any deviation from the output level in question represents an MH with sufficiently high probability that entry is induced. Of these, levels 4 and 5 satisfy the Cho–Kreps (1987) intuitive criterion (see also Cooper, Garvin, and Kagel 1997a, b).
of entry (2 for MHs, 4 for MLs)—even though entry rates make strategic play incentive
compatible from the start. Reacting to high entry rates following a choice of 2, MHs gradually
learn to play strategically by choosing 4. In games with high-cost Es, there is no incentive to
move beyond this pooling equilibrium to separating because there is insufficient entry on 4 to
motivate MLs to choose higher output levels with their lower payoffs. In games with low-cost
Es, no such equilibrium exists. Instead, as entry rates for 4 increase, MLs gradually learn to play
strategically by choosing 5 or 6, with play converging toward the pure-strategy separating
equilibrium where MLs choose 6.5 It takes longer for this separating equilibrium to develop than
for the pooling equilibrium in games with high cost Es as (i) MHs must first learn to imitate the
MLs’ choice of 4 before there is sufficient entry on 4 for it to be incentive compatible for MLs to
choose higher output levels; and (ii) pooling requires only that MHs imitate MLs, whereas
separating requires that MLs innovate by choosing higher output levels. As a result, this
separating equilibrium is slow to develop: By the end of inexperienced subject play, MLs’ modal
choice is 4; and even at the end of experienced subject play there are often relatively large
numbers of MLs and MHs choosing 4 (30%–40%; see Figure 6 in Section 4).
This gradual learning process rules out forward induction arguments as the basis for
establishing out-of-equilibrium beliefs that support a given equilibrium outcome. Rather, out-of-equilibrium beliefs are established by the history of play, and the whole process for
inexperienced subjects can be characterized with reasonable accuracy by a modified stochastic
fictitious play learning model (Cooper, Garvin, and Kagel 1997a).6 The importance of past
history of play on development of out-of-equilibrium beliefs is reflected by the fact that subjects
who start out in games with low-cost Es do not—once they have converged to a separating
equilibrium—revert to a pooling equilibrium if they are replaced by high-cost Es; rather, play
remains stuck at the separating equilibrium. This is in contrast to play with inexperienced
subjects who start out facing high-cost Es, which invariably converges to the pooling equilibrium
(Cooper, Garvin, and Kagel 1997a).7
Reinforcing the irrelevance of forward induction arguments, one can induce subjects to
converge to an inefficient separating equilibrium with MLs choosing 7, thereby violating even
the weakest equilibrium refinement criterion: single-round elimination of dominated strategies.
This is done by introducing positive payoffs for MLs at 6, thereby “disguising” the fact that they
are dominated, while making the payoff for 7 somewhat more attractive (Cooper, Garvin, and
Kagel 1997b).8 Note that this is far from the only instance in which the most commonly accepted
equilibrium refinements (e.g., Cho–Kreps and sequential equilibria) have been violated in
signaling game experiments—largely as a result of exploiting the tendency of subjects to play
5. An intermediate step with significant play of 5 by MLs is often observed. This is consistent with a mixed-strategy equilibrium consisting of MHs choosing 2 with probability 0.8 and 5 with probability 0.2 while MLs always choose 5. As such, the play of 5 by MLs must be classified as strategic play (Cooper, Garvin, and Kagel 1997a, b), albeit of an inferior variety.
6. Modified in the sense that it takes a critical mass of Es to recognize that choices of 6 or 7 do not come from MHs (because they are dominated) for the efficient separating equilibrium to emerge reliably in simulations.
7. This makes it difficult to study the reverse of the cross-game learning process in Experiment 1, i.e., going from a separating to a pooling equilibrium.
8. Experiments with teams (where one can tell what subjects are thinking by looking at the team dialogues) suggest that this does not result from a fear that Es will not recognize that the choice of 6 is dominated. Rather, typical laboratory subjects with no formal training in game theory do not scan payoffs to first delete dominated strategies; instead they rely on the negative payoffs for MHs at 6 and 7 to recognize that these choices are dominated.
nonstrategically at first, thereby inducing out-of-equilibrium beliefs that result in violating the
refinement in question (see Brandts and Holt 1992, 1993).9
Our subsequent research has looked at play in games with two-person teams. These
experiments show the same stylized learning process as in the 1x1 games, but with substantially
more strategic play in each phase of an experimental session and a much cleaner separating
equilibrium by the end of experienced subject play (see Cooper and Kagel 2005 as well as Figure
7 in Section 4).10 One would expect teams to learn faster than individuals because learning to
limit price is similar to solving a “common purpose” problem for which there is a clear answer
(e.g., MLs can distinguish themselves from MHs in games with low-cost Es by choosing 6 or 7),
an answer that, once discovered, is easy to explain to one’s partner. As such, a random pair of
subjects acting as a team should be able to solve the problem no more slowly than its most able
member could when acting alone. Psychologists use this reference point, referred to as the “truth
wins” benchmark, in judging whether or not teams are truly superior to individuals, since beating
this benchmark implies that team play generates positive synergies.11 Meeting or beating the
truth-wins benchmark is far from guaranteed, since there are countervailing forces present in
team play: reduced effort due to free riding, coordination problems involved in combining team
members’ contributions, and process loss (e.g., establishing effective communication between
teammates might interfere with solving the problem at hand). In fact, the prevailing wisdom in
the psychology literature holds that “freely interacting groups very rarely exceed, sometimes
match, and usually fall below” the truth-wins benchmark (Davis 1992, p. 7; emphasis in the
original). In contrast, two-person teams handily beat the truth-wins benchmark in two of three
cases in the entry limit pricing game (Cooper and Kagel 2005).12
One of the most striking results reported in Cooper and Kagel (2005) is that, when
individuals are switched from the limit pricing game with high-cost Es to the game with low-cost
Es, MLs display significantly less strategic play than MLs in control sessions (i.e., sessions
where subjects have no previous experience in the limit pricing game).13 This result is
worrisome. Given that strategic play emerges only gradually, limited cross-game learning
implies that strategic play will not develop unless exactly the same game is repeated for extended
periods of time—an unlikely event in most settings. In contrast, the majority of ML teams play
strategically even in the first cycle following this same crossover and exhibit significantly more
strategic play than in control sessions with teams. This raises a natural question: Are there other
factors beyond team play that promote (or possibly discourage) cross-game learning? The
psychology literature demonstrates that meaningful context can sometimes generate dramatic
improvements in the kind of deductive reasoning that is at the heart of game theory.14 This, in
conjunction with team dialogues suggesting that strong positive cross-game learning for teams is
9. For a similar result in a different context see Schotter, Weigelt, and Wilson (1994).
10. There is a small but growing literature on team versus individual play in economic experiments. For a brief review of the literature see Cooper and Kagel (2005), Hennig-Schmidt, Li, and Yang (2008), and the references therein.
11. Mathematically: If the probability of an individual solving the problem equals p, then the probability P of a randomly selected group with r members solving the problem is the probability that the random sample contains at least one successful member: P = 1 − (1 − p)^r.
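The benchmark probability can be computed directly; a minimal sketch, with a purely illustrative individual solution rate:

```python
# Truth-wins benchmark: probability that a randomly formed r-member group
# contains at least one member who would solve the problem alone.
def truth_wins(p: float, r: int) -> float:
    """P = 1 - (1 - p)**r, where p is the individual solution probability."""
    return 1 - (1 - p) ** r

# Illustration: if 30% of individuals solve the problem, a two-person team
# should solve it with probability 1 - 0.7**2 = 0.51 under the benchmark.
assert abs(truth_wins(0.3, 2) - 0.51) < 1e-9
```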
12. In the remaining case, teams did better than in 1x1 games but did not beat the truth-wins benchmark; possible reasons for this are discussed in Cooper and Kagel (2005).
13. For another example in which cross-game learning is studied, see Huck et al. (2007), who examine behavior following mergers and entry in oligopoly games. Distinguishing characteristics of the cross-game learning here are that (i) there is a long stylized learning process before play approaches equilibrium and (ii) the nature of strategic play before and after the crossover is quite different.
14. For references, see “Discussion of Experiment 1” following Conclusion 1.
a result of an enhanced ability to reason about the game, suggests that meaningful context could
also improve cross-game learning. Experiment 1 tests this conjecture.
3. Experiment 1: Learning Transfer from a Pooling to a Separating Equilibrium
Experiment 1 explores the effect of meaningful versus abstract context in moving from games
with high-cost Es (Tables 1a and 1b) to games with low-cost Es (Tables 1a and 1c). This
involves a relatively challenging cross-game learning environment, because limit pricing in the
game with high-cost Es involves MHs imitating MLs with a pooling equilibrium at 4, whereas
limit pricing in the game with low-cost Es involves MLs clearly distinguishing themselves from
MHs. Although the concept underlying limit pricing is the same in both games (manipulating Es’
beliefs to make it appear more likely that the monopolist is a low-cost type), the actions used to
achieve this end are quite different.
In the analysis we will refer to limit pricing by MHs (choice of output levels 3, 4, and 5)
and by MLs (choice of output levels 5, 6, or 7) as strategic play. There are other possible
definitions of strategic play, since what constitutes strategic play depends critically on Ms’
beliefs about Es’ responses to Ms’ actions. However, as already noted, the evidence accumulated
from past experiments makes it clear that Ms’ initial choices involve attempts to maximize their
payoffs ignoring the effect of their choices on Es’ responses.15
Experimental Method: Experiment 1 involved 1x1 games in which individual subjects played
against each other. Each session employed between 12 and 16 subjects.16 For inexperienced
subject sessions, a common set of instructions was read out loud to subjects, each of whom had
a written copy as well.17 Subjects had copies of both Ms’ and Es’ payoff tables and were required
to fill out short questionnaires to ensure their ability to read them. After reading the instructions,
questions were answered out loud and then play began with a single practice round followed by
more questions. At the beginning of experienced subject sessions, an abbreviated version of the
full instructions was read out loud; again, each subject had a written copy.
Before each play of the game, the computer randomly determined each M’s type and
displayed this information on Ms’ screens.18 The screen also showed the payoff tables for both
types, with the payoff table for the player’s own type displayed on the left. After Ms made their
choices, the program automatically highlighted Ms’ possible payoffs and required that the choice
be confirmed. Once all Ms had confirmed their choices, each M’s choice was sent to the E with
whom the M was paired. Then Es decided between IN and OUT given their possible payoffs,
which were highlighted on their computer screens, and were also asked to confirm their choices.
Following each play of the game, subjects learned their own payoff, the payoff for the
player they were paired with, and M’s type. In addition, the lower left-hand portion of each
player’s screen displayed the results of all pairings: M’s type, M’s output, and E’s response
ordered by output levels from highest to lowest.19 The screen automatically displayed the three
most recent periods of play, with a scroll bar available to see all past periods.
15. Content analysis of team dialogues in Cooper and Kagel (2005) makes this clear.
16. A smaller session size (10 subjects) was used once to avoid losing difficult-to-obtain experienced subjects.
17. A copy of the instructions is available at http://www.econ.ohio-state.edu/kagel.
18. We employed a block random design so that the number of high- and low-cost types was equal (or as close to equal as possible) in each round.
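An illustrative implementation of such block randomization (a sketch only, not the software actually used in the sessions):

```python
import random

# Block randomization: each round, split the Ms as evenly as possible
# between high-cost (MH) and low-cost (ML) types, in random order.
def assign_types(n_ms: int, rng: random.Random) -> list:
    types = ["MH"] * (n_ms // 2) + ["ML"] * (n_ms - n_ms // 2)
    rng.shuffle(types)
    return types

round_types = assign_types(8, random.Random(0))
# An 8-M round always contains exactly 4 MHs and 4 MLs, in random order.
```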
19. We employed this procedure on the basis of the pioneering signaling game experiment results reported in Miller and Plott (1985), which showed that without sorting the emergence of separating equilibria is much slower and less reliable. We are in the process of exploring the impact of “marketwide” feedback on learning.
Subjects rotated between roles with Ms (resp. Es) becoming Es (resp. Ms) every 6 games
in inexperienced subject sessions and every 4 games in experienced subject sessions. We refer to
a block of 12 (resp. 8) games in an inexperienced (resp. experienced) session as a “cycle”.
Within each half-cycle, each M was paired with a different E for each play of the game.
Inexperienced subject sessions had 24 games divided into two 12-game cycles; experienced
subject sessions had 32 games divided into four 8-game cycles. The number of games in a
session was announced in the instructions.20
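One way to implement the rotation within a half-cycle is a shift-by-one round robin; the paper does not specify the exact pairing algorithm, so the sketch below is only one possibility:

```python
# Sketch of the within-half-cycle rotation: each M meets a different E in
# every game. The shift-by-one schedule below is one way to achieve this;
# the exact pairing algorithm used in the sessions is not specified.
def half_cycle_pairings(ms, es, n_games):
    """In game t, the M at index i is paired with the E at index (i + t) mod len(es)."""
    return [[(m, es[(i + t) % len(es)]) for i, m in enumerate(ms)]
            for t in range(n_games)]

schedule = half_cycle_pairings(["M1", "M2", "M3"], ["E1", "E2", "E3"], 3)
# Across the three games, each M is paired with each E exactly once.
```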
Subjects were recruited through announcements in undergraduate classes, posters placed
throughout the Ohio State University campus, advertisements in the campus newspaper, and
direct e-mail contact with students. This resulted in recruiting a broad cross-section of
undergraduates along with a few graduate students. Sessions lasted a little under two hours.
Subjects were paid $6 for showing up on time. Earnings (including the appearance fee) averaged
$26 per subject in inexperienced subject sessions and $33 in experienced subject sessions; the
difference was largely a result of playing more games.
Experienced subject sessions generally took place about a week after the inexperienced
subject sessions. Subjects from different inexperienced subject sessions were mixed in the
experienced sessions. Econometric analysis indicates that there are no systematic differences
between the choices of subjects who returned for an experienced subject session and of those
who did not.
All subjects in these crossover sessions had participated in one inexperienced subject
session with high-cost Es (payoff Tables 1a and 1b). Each crossover session consisted of one
cycle of the same game followed by three cycles of the game with low-cost Es (payoff Tables 1a
and 1c). At the time of a crossover, all subjects were given written copies of the new payoff
tables. A brief set of instructions was read out loud, announcing that the basic structure of the
game was the same as before but that payoff tables had changed. The total number of additional
games to be played was also announced.21
Sessions were conducted using either “abstract” or “meaningful” context. In the abstract
context, Ms were referred to as “A players”: type “A1” for MHs and type “A2” for MLs.
Potential Es were described as “B players”. In the abstract context, Ms’ choices were referred to
solely in terms of numbers with no concrete meaning attached to them, and Es’ responses were
simply labeled as “X” or “Y”. Nothing in the instructions or payoff tables suggested a
relationship between the limit pricing game and any situation outside the lab.
In contrast, the meaningful context used natural terms and labels to frame the game: Ms
were referred to as “existing firms” with “high” or “low” costs; Es were referred to as the “other
firm” deciding between “entering this industry or some other industry”, and Ms were
characterized as choosing over different “output levels”. We purposely avoided terms that could
elicit strong emotional responses—for example, we did not refer to Ms as “monopolists” in order
to avoid the negative connotations associated with this label.
No subject was ever switched between abstract and meaningful context or vice versa. The
sole difference between the abstract and meaningful context sessions was the framing of the
game. Otherwise, the procedures, payoffs, and interfaces were identical. Table 2 reports the
number of experimental sessions and subjects in Experiment 1.
20. There are a few exceptions to these general procedures. Two inexperienced subject 1x1 sessions with the low-cost Es and abstract context used three 12-game cycles rather than two; hence some of the subjects in subsequent experienced subject sessions are more experienced than would normally be the case.
21. There was no forewarning of the change at the beginning of the crossover sessions.
[Insert Table 2 about here]
Experimental Results: Our discussion focuses on MLs’ behavior for the cycles immediately
following the crossover as compared with inexperienced subject play in the control sessions.
Figure 1 displays behavior for the cycle immediately before and after the crossover (first and
second rows of data, respectively). The left-hand column shows data from sessions using abstract
context, and the right-hand column displays data from meaningful context sessions. Numbers in
parentheses on the horizontal axis are entry frequencies for each output level.
[Insert Figure 1 about here]
Looking at the first row of Figure 1, a pooling equilibrium at 4 has started to emerge for
both abstract and meaningful context. Convergence is somewhat stronger with abstract context,
where 100% of play by MLs is at 4 and 77% of MHs choose 3 or 4, both of which indicate
strategic play. With meaningful context, 91% of ML play is at output level 4 and 58% of MHs
play strategically. The lower frequency of strategic play by MHs in the meaningful context
largely reflects lower incentives: the entry-rate differential between 2 and 4 is 60% in the
abstract context versus only 38% in the meaningful context. Moreover, the few cases (less than
5%) where MLs chose 5 in the meaningful context are matched by the same frequency with
which other MLs chose 2 or 3, indicating that overall MLs were no closer to separating in the
meaningful than in the abstract context.
The difference between abstract and meaningful context following the crossover can be
seen from the second row of Figure 1. In the abstract context, play by MLs is virtually
unchanged, with 88% of MLs continuing to choose 4. Strategic play by MLs is barely more
common (7%) than play of output levels lower than 4 (5%). In the meaningful context, 4 remains
the modal choice for MLs (60%), but there has been a substantial shift toward strategic play
(35% choosing 5 or 6).
[Insert Figure 2 about here]
Figure 2 compares behavior after the crossover to the controls (inexperienced MLs) for
sessions with abstract (panel a) and meaningful (panel b) context.22 Figure 2 shows that the use
of meaningful context changes the nature of play following the crossover. In the abstract context,
previous experience with high-cost Es retards the development of strategic play by MLs
compared to the controls; but in the meaningful context, the frequency of strategic play far
exceeds the controls.
[Insert Table 3 about here]
The preceding observations are based on visual inspection of the data. Table 3 reports the
results of probit regressions examining performance by MLs following the crossover. Each
22. We report three cycles for the controls. Cycles 1 and 2 are from the inexperienced control sessions. Cycle 3 is taken from the first cycle of experienced play by these same subjects, which serves as the control in Experiment 2 (see Table 6). As pointed out by a referee, this results in a one-week gap between cycles 2 and 3 for the controls but not for the crossover treatments, which means that the control for cycle 3 is less than ideal.
observation corresponds to a single play by an ML. The dependent variable is a dummy for
whether an ML chooses to play strategically (i.e., chooses 5, 6, or 7).23 The baseline is play in the
first cycle following the crossover for MLs with abstract context. Independent variables include
dummies for the cycle (counting from the time of the crossover), interactions between the cycle
dummies and a dummy for meaningful context, and—as a control for the incentives to play
strategically—the entry-rate differential between 4 and 6.24 To control for repeated observations,
the standard errors are corrected for clustering at the player level (cf. Liang and Zeger 1986;
Moulton 1986). Coefficient values represent marginal effects, and the standard errors of these
effects are given in parentheses.
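The estimation setup just described can be sketched as follows. This is an illustrative reconstruction, not the code used for the paper; the column names, data layout, and the use of statsmodels are all our assumptions:

```python
# Sketch of the Table 3 specification (illustrative only; all variable
# names are hypothetical). Assumed columns: strategic (0/1 dummy for
# strategic play by an ML), cycle (1-3, counting from the crossover),
# context ("abstract"/"meaningful"), entry_diff (entry-rate differential
# between output levels 4 and 6), subject_id (for clustering).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_crossover_probit(df: pd.DataFrame):
    """Probit of strategic play on cycle dummies, meaningful-context-by-
    cycle interactions, and the entry-rate differential, with standard
    errors clustered at the individual player level."""
    data = df.assign(meaningful=(df["context"] == "meaningful").astype(int))
    model = smf.probit(
        "strategic ~ C(cycle) + C(cycle):meaningful + entry_diff",
        data=data,
    )
    fit = model.fit(cov_type="cluster",
                    cov_kwds={"groups": df["subject_id"]},
                    disp=False)
    # Average marginal effects, as reported in the table.
    return fit, fit.get_margeff(at="overall")
```

Clustering at the player level, as in the text, corrects the standard errors for repeated observations on the same subject.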
Model 1 is a basic regression establishing the treatment effects. The variable of primary
interest is “Meaningful Context Crossover Cycle 1”, which captures the difference between
abstract and meaningful context sessions in the first cycle following the crossover. The estimated
parameter is positive and statistically significant at the 1% level. Furthermore, the difference
between abstract and meaningful context remains large and significant at the 1% level for the
second cycle following the crossover. These differences are no longer statistically significant in
cycle 3, as strategic play finally predominates in the abstract context but levels off in the
meaningful context (compare the crossover data in the two panels of Figure 2).
Model 1 can be modified by including a dummy for whether the subject played
strategically (chose 3, 4, or 5) in his last opportunity as an MH prior to the crossover. The
marginal effect for this variable is positive and statistically significant at the 1% level. Ignoring
the context, subjects who played strategically prior to the crossover are almost twice as likely to
play strategically following the crossover (27% vs. 48%). More importantly, with the addition of
this control variable the frequency of strategic play is significantly higher in the third cycle with
meaningful context; in fact, for this context, by the second cycle following the crossover the
subjects who played strategically as MHs prior to the crossover are almost always playing
strategically (84%). Any further growth in strategic play must come from those subjects who had
not learned to play strategically in the game with high-cost Es, a far easier task. In contrast, for
abstract context the subjects who played strategically as MHs prior to the crossover are generally
not playing strategically (31%) in the second cycle following the crossover. This group is
responsible for most of the growth in strategic play in the third cycle. Thus the lack of a context
effect in the third cycle probably reflects a ceiling effect: those subjects who are most capable of
learning to play strategically in the game with low-cost Es have largely done so prior to the final
cycle in the sessions with meaningful context.
Model 2 adds the entry-rate differential as an independent variable. This has a strong
positive effect that is easily significant at the 1% level. Including the entry-rate differential
reduces both the magnitude and statistical significance of the marginal effect for “Meaningful
Context Crossover Cycle 1” so that it just fails to achieve statistical significance at the 10%
level.25 The “Meaningful Context Crossover Cycle 2” variable remains statistically significant
(at the 5% level), and “Meaningful Context Crossover Cycle 3” now achieves statistical
23. The choice of 7 is quite limited, but it is included here for completeness.
24. See the online Appendix at http://www.jeea.org for exactly how this entry-rate differential is calculated as well as discussion of its relevance and robustness with respect to alternative measures of Ms’ incentives to limit price.
25. The reduced marginal effect for “Meaningful Context Crossover Cycle 1” in Model 2 is driven by the high entry rate for output level 6 in the first cycle following the crossover for abstract context (50%) versus meaningful context (20%). However, the entry rate for abstract context is based on only four observations, so it is probably not an accurate proxy for subjects’ beliefs.
significance at the 10% level.26 Using a log-likelihood test for joint significance, the three
context dummies are significant at the 1% level (χ² = 24.00, df = 3, p < .01). When Model 2 is
modified by including a dummy for whether a subject played strategically in her last opportunity
as an MH prior to the crossover, the three context dummies all become statistically significant at
the 1% level.
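The joint test above is a standard likelihood-ratio test. As a minimal sketch (the log-likelihood values below are placeholders chosen only to reproduce the reported statistic, not the paper's estimates):

```python
# Likelihood-ratio test for the joint significance of a set of dummies.
# The log-likelihood inputs here are placeholders, not values from the paper.
from scipy.stats import chi2

def lr_test(ll_unrestricted: float, ll_restricted: float, df: int):
    """Return the LR statistic 2*(llU - llR) and its chi-square p-value."""
    stat = 2.0 * (ll_unrestricted - ll_restricted)
    return stat, chi2.sf(stat, df)

# Reproducing the reported statistic (chi-square = 24.00 with df = 3):
stat, p = lr_test(ll_unrestricted=0.0, ll_restricted=-12.0, df=3)
# p falls well below 0.01, consistent with significance at the 1% level.
```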
The probits reported in Table 4 test whether the learning transfer (positive or negative)
reported in Figure 2 is statistically significant. Once again, the dependent variable is a dummy
for strategic play by an ML. The independent variables are dummies for the cycle of play and
interactions between the cycle dummies and a crossover dummy, so the “Crossover Cycle 1”
variable captures the difference between the first cycle of play in inexperienced control sessions
and the first cycle following the crossover. Analogous differences are captured by “Crossover
Cycle 2” and “Crossover Cycle 3”.
[Insert Table 4 about here]
Absent controls for entry-rate differences, the marginal effect for the “Crossover Cycle
1” variable—the time period of greatest interest—is positive and statistically significant at the
1% level for meaningful context and is negative and statistically significant at the 5% level for
abstract context. The negative cross-game learning for abstract context continues in the second
cycle following the crossover, with the positive crossover effect for meaningful context still
present in cycles 2 and 3 following the crossover. Controlling for entry-rate differences, the
results are essentially unchanged with abstract context, but the coefficient value for “Crossover
Cycle 1” with meaningful context, while still positive, is no longer statistically significant (its
value has been halved). The results for cycles 2 and 3 following the crossover are largely
unaffected.27
Conclusion 1: The use of meaningful context fosters positive cross-game learning for 1×1
sessions in moving from a pooling to a separating equilibrium. The difference between abstract
and meaningful context in 1×1 games is both qualitative and quantitative, since abstract
context results in negative cross-game learning. These conclusions are robust to controls for
entrants’ behavior.
Discussion of Experiment 1: There are a number of reference points in both the psychology and
economics literature against which to compare the results reported here. The psychology
literature on learning transfer—the ability to take knowledge acquired in one setting and
successfully apply it in another, related setting—suggests that the ability to generalize across
games cannot be taken for granted. The consensus from this literature is that positive transfer
usually fails except when environments are perceived to be quite similar (Gick and Holyoak
1980; Perkins and Salomon 1988; Salomon and Perkins 1989).
However, there are significant differences between the underlying structure and
procedures of the experiment reported here and psychology experiments used to study learning
transfer. The latter tend to be “one shot” in terms of what was initially learned and of the new
learning environment. In contrast, as economists we are concerned with whether agents, having
adjusted over time to an equilibrium in one game, will adjust more quickly over time to a new
26. Qualitative results for Models 1 and 2 are robust to clustering at the session level (see the online Appendix).
27. The results in Table 4 are robust to clustering at the session level (see the online Appendix).
equilibrium in a related game. Additionally, what subjects have learned in many psychology
experiments is algorithmic in nature (e.g., what is the best method of solving a logic problem)
because such experiments typically involve solving puzzles. In contrast, our experiment involves
strategic interactions in which successful play requires psychological insight (e.g., is my
opponent trying to fool me?).
This helps explain why we observe greater learning transfer (with meaningful context)
than the psychology literature would lead us to expect. But these differences do not account for
the difference in cross-game learning between abstract and meaningful context. For an
explanation of this phenomenon we turn to the psychology literature on deductive reasoning.
This literature provides clear evidence for the effects of context on deductive reasoning. Perhaps
the most famous example is Wason’s four-card selection problem (1966), in which subjects are
shown four cards lying on a tabletop. They are told (for example) that each card has either an A
or a K on one side and either a 4 or a 7 on the other, with the cards arranged so that each of the
four possibilities (A, K, 4, or 7) is facing up on one card. Subjects are then asked to select two
cards to determine whether the statement “all cards with an A on one side must have a 4 on the
other side” is true or false. A “rational” subject would select the cards showing an A and a 7,
since the other two cards are useless in verifying the truth of the statement. In fact, only about
10% of subjects select the correct cards in this abstract version of the problem.28 However, when
Wason changed the content of the problem to a sensible everyday generalization, a majority of
people made the correct selection (Wason and Johnson-Laird 1972), with subsequent research
demonstrating that these increases can be demonstrated for a variety of meaningful contexts (for
reviews of this literature, see Dominowski 1995 and Johnson-Laird 1999).
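The logic of the selection task can be made concrete by enumerating which cards could possibly falsify the rule. A small sketch (the card labels follow the description above; the enumeration itself is our illustration):

```python
# Brute-force check of which cards in Wason's task are informative.
# A card is worth turning over iff some possible hidden side would
# falsify the rule "if one side is A, the other side is 4".
LETTERS, NUMBERS = ["A", "K"], ["4", "7"]

def rule_holds(letter: str, number: str) -> bool:
    # The rule is violated only by a card pairing A with 7.
    return letter != "A" or number == "4"

def informative(visible: str) -> bool:
    """True if turning this card over could falsify the rule."""
    if visible in LETTERS:
        # Hidden side is a number.
        return any(not rule_holds(visible, n) for n in NUMBERS)
    # Hidden side is a letter.
    return any(not rule_holds(l, visible) for l in LETTERS)

# Only the A card and the 7 card can falsify the rule; the K and 4 cards
# are uninformative, which is why selecting A and 7 is the "rational" choice.
selected = [c for c in ["A", "K", "4", "7"] if informative(c)]
```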
Data from Wason’s four-card selection problem (as well as other psychology
experiments) is totally inconsistent with the notions that deductive reasoning depends only on
formal rules of inference and that semantic content has no role to play in deductive reasoning
(Johnson-Laird 1999). As an alternative, psychologists posit that reasoning is mediated by
manipulation of mental models—simplified representations of the problem at hand (Johnson-Laird and Byrne 1991; Polk and Newell 1995). Given a set of premises (e.g., known facts about
a setting), individuals reach conclusions not through formal reasoning but rather by determining
which possibilities are most consistent with the premises given their mental model.29 The use of
mental models provides shortcuts to thinking about problems. Mental models often have a very
limited domain, one that is determined as much by context as by the mathematical structure of a
problem.30 Within the mental models framework, the use of meaningful context can improve
deductive reasoning either by stimulating subjects to establish a better mental model of the
28. The most frequent error is selection of the card with a 4 rather than the card with a 7.
29. This mental model approach is similar to case-based reasoning (Gilboa and Schmeidler 1995). In that approach, deductive reasoning has nothing to do with logic but instead is based on memories of previous performance in similar situations. This approach is also relevant to explaining the positive cross-game learning found here on the grounds that meaningful context facilitates making such connections. Johnson-Laird (1999) argues that the drawback to case-based reasoning is that it offers no immediate explanation of the ability to reason about the unknown. However, it is not our goal here to distinguish between the mental model approach and case-based reasoning; we merely remark that both have similar implications for the cross-game learning reported in this paper.
30. For example, successful executives in the construction industry presumably have a useful mental model of how to bid on projects, since competitive forces would quickly eliminate those who don’t. This mental model need not draw on the underlying theory of common-value auctions, nor does it need to be applicable (in the minds of executives) to any setting other than bidding on construction projects. Thus, as observed by Dyer and Kagel (1996), construction executives have no advantage when faced with a common-value auction that is framed in abstract terms rather than in terms of the concrete conditions under which they usually operate.
situation or by helping them realize that the domain of an existing model can be extended to the
current situation. We hypothesize that a meaningful context that is relevant to the task at hand
increases strategic play in Experiment 1 by helping subjects establish a better mental model of
the situation. The precise nature of the mental model that is at work here is discussed shortly. But
before doing so, it is worthwhile to compare the strength of the positive cross-game learning that
we observe for meaningful context with the cross-game learning observed in other settings.
The economics literature on learning transfer is quite sparse. Ho, Camerer, and Weigelt
(1998) explicitly test for transfer in two closely related dominance-solvable games, finding no
transfer from the first game to the second.31 Results from common-value auctions are somewhat
mixed. Kagel (1995) finds that prior experience with first-price sealed-bid common-value
auctions reduces the severity of the “winner’s curse” compared to inexperienced bidders in an
ascending-price common-value auction. Yet bidders with experience in ascending-price auctions
do no better than inexperienced subjects in first-price sealed-bid auctions. Kagel and Levin
(1986) find that subjects who have learned to avoid the winner’s curse in auctions with small
numbers of bidders succumb once again when playing with larger numbers of bidders. However,
fully absorbing the adverse selection effects inherent to common-value auctions is no mean task
and, it would seem, considerably harder than identifying strategic responses in signaling games.
As noted previously, Cooper and Kagel (2005) find strong positive cross-game learning
with teams even in an abstract context. Controlling for entry-rate differences, we cannot reject
the null hypothesis that the extent of the positive cross-game learning found in individual
crossover sessions with meaningful context is at the same level as in the team crossover sessions
with abstract context.32 This suggests that meaningful context substitutes for the synergistic
effect of having a teammate in the cross-game learning of Experiment 1.
We think the strong positive cross-game learning in Experiment 1 occurs because
meaningful context provides players with a relatively good mental model of the likely effects of
(i) the changes in Es’ payoffs on Es’ choices and (ii) the impact these changes will have on Ms’
payoffs. A big advantage of the team experiments is that the dialogues provide insight into the
learning process. Analysis of the dialogues in Cooper and Kagel (2005) links positive transfer to
the development of strategic empathy: the ability of Ms to think from Es’ point of view and
therefore anticipate that the change in Es’ payoffs will increase entry rates substantially at the old
pooling output level.33 This is the first step in the adjustment process needed to generate a
separating equilibrium after the crossover in Experiment 1. The similarity of results found here
for meaningful context with individuals suggests that a similar mechanism is at play. Namely,
meaningful context—when it is relevant to the situation—enhances this kind of strategic
empathy.
4. Experiment 2: The Effect of Changes in Structure and Context on Cross-Game Learning
31. Weber and Rick (2008) report somewhat more success in generating positive transfer in this game when subjects play the game a number of times without feedback instead of with feedback. The authors relate this to differences between “implicit” learning from feedback alone versus “explicit” learning whereby individuals come to obtain meaningful cognitive representations of underlying concepts and relationships.
32. See the probits reported in the online Appendix.
33. A similar point is made by Cooper and Kagel (in press), where a formal model of learning is developed that includes “sophisticated” learners: subjects who explicitly model the learning and decision-making processes of other subjects. A substantial and growing proportion of sophisticated learners is sufficient to explain the positive cross-game learning observed in that experiment.
Experiment 2 explores the effect of superficial changes in the structure of the game, which leave
the equilibrium outcomes unaltered, along with a change in the (meaningful) context used to
describe the game. From a mental model perspective, the change in context along with the
“superficial” changes in the game’s structure is expected to adversely affect the learning process
if the change in context obscures the relevance of past experience.
Experimental Design and Procedures: Subjects in the crossover treatment in this game start by
playing the quantity game with low-cost Es (Tables 1a and c). Inexperienced subjects play 24 of
these games in the same way as described before, switching every 6 plays of the game between
their roles as Es and Ms. They are then brought back as experienced subjects and play one full
cycle of the same game, switching roles after every 4 plays of the game (just as in the
experienced subject sessions in Experiment 1). The subjects are then introduced to the “price”
game shown in Table 5, where Ms’ payoffs are constructed from Table 1a by adding 50 to all
payoffs and then multiplying by 0.86.34 Payoffs are then flipped from top to bottom, with the
location of MLs’ payoffs switched from right to left.35 Similarly, Es’ payoffs are obtained from
Table 1c by adding 25 to all values and then multiplying by 0.8, and again the positions of the
two columns are flipped.
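Because the transformation of Ms’ payoffs is affine with a positive slope, followed by a relabeling of actions, payoff orderings and hence best responses are preserved. A minimal sketch of the construction (the small array below is a placeholder, not Table 1a, which is not reproduced here):

```python
# Sketch of the quantity-to-price payoff transformation for Ms
# (illustrative; the demo array stands in for Table 1a).
import numpy as np

def to_price_game(quantity_payoffs: np.ndarray,
                  shift: float, scale: float) -> np.ndarray:
    """Apply the affine map (x + shift) * scale, then flip the table
    top-to-bottom and left-to-right, relabeling rows and columns."""
    transformed = (quantity_payoffs + shift) * scale
    return transformed[::-1, ::-1]

# Ms' payoffs use shift = 50 and scale = 0.86, as described in the text
# (Es' payoffs use 25 and 0.8). A positive affine map preserves orderings,
# so the best response in the price game is just the flipped index.
demo = np.array([[10.0, 4.0], [6.0, 8.0]])
price = to_price_game(demo, shift=50.0, scale=0.86)
```

Since orderings are unchanged, the argmax over actions in the price game corresponds exactly to the flipped location of the argmax in the quantity game, which is why the equilibrium predictions are unaffected.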
[Insert Table 5 about here]
From a game-theoretic perspective, the price and quantity games are identical. None of
the equilibrium predictions are affected by this transformation (once we control for the flipping
of the payoff tables), nor are the incentives that are necessary to induce strategic play. Hence the
price game is theoretically isomorphic to the quantity game with low-cost Es, but this is unlikely
to be immediately obvious to the subjects.
Experiment 2 employed both abstract and meaningful context in the crossover treatments.
For the abstract context, there was no need to change the labels or the framing of the game
following the crossover. For the meaningful context, we changed both the labels and the framing
of the game so that “existing firms” were choosing over “prices” rather than “quantities.”
Our basic empirical strategy for identifying learning transfer here is to compare the
growth of strategic play following the crossover between the abstract and meaningful context
treatments. As controls we use experienced subject data for the game in the abstract context with
no change in payoffs or the screen layout.36 In all other respects, Experiment 2 uses the same
procedures as Experiment 1.
Table 6 shows the number of experimental sessions and subjects per session for the
various treatments. To ease the exposition, in the analysis Ms’ choices in the price game have
been set equal to the corresponding output levels in the quantity game (e.g., price level 6 is
transformed to output level 2).
34. MHs’ payoffs for choosing 6 were further reduced to ensure they were negative, since previous work indicates that this is an important element in the development of an efficient separating equilibrium (see Section 1 as well as Cooper, Garvin, and Kagel 1997b).
35. A linear transformation without flipping doesn’t require players to change actions in order to continue playing strategically. Without forcing changes in actions, it would be difficult to determine whether any meaningful transfer occurs, as opposed to simple inertia.
36. These consist of experienced subjects from the control sessions in Experiment 1.
[Insert Table 6 about here]
Experimental Results: Figure 3 displays the relevant data for Experiment 2: the first and second
cycles of experienced subject play for the controls (leftmost column, top and bottom rows,
respectively) along with the corresponding data for the crossover sessions with abstract and
meaningful context (middle and rightmost columns, respectively). The crossover sessions with
abstract context show much the same pattern as the controls immediately following the
crossover: The level of strategic play by MLs increases and, more noticeably, strategic play
shifts from 5 to 6. The MLs’ increased choice of 6 is particularly important because it
definitively distinguishes them from MHs and, being dominated, is virtually never chosen by
MHs.
[Insert Figure 3 about here]
The same cannot be said for the crossover sessions with the change in meaningful
context. Following the crossover, the proportion of strategic play remains almost unchanged and,
although there is some movement toward 6, it is not as extreme as in either the control or
crossover sessions for abstract context.
[Insert Figure 4 about here]
Figure 4 graphs the frequency of strategic play by MLs in the two crossover treatments
along with the controls. In the crossover sessions with the change in meaningful context, MLs’
frequency of strategic play is essentially flat (38.5% versus 39.1%) before and after the crossover
(cycles 1 and 2, respectively). In contrast, there is modest growth in strategic play in the controls,
with the abstract context crossover sessions showing the largest growth in strategic play. Part of
these differences can be accounted for by differences in entry rates between 4 and 6, which are
largest in the abstract crossover sessions: 52% versus 47% in the meaningful context crossover
sessions and 38% in the control sessions. Finally, observe that, by cycle 3, strategic play in the
meaningful context crossover sessions has almost completely caught up to the controls and to the
abstract context crossover treatment.37 Thus, the effect of the change in meaningful context on
learning transfer between the quantity and price games can best be described as a stall in the
learning process.
[Insert Table 7 about here]
Table 7 reports the results of probit regressions dealing with these crossover effects. The
data set includes all observations from the experienced subject sessions with a crossover as well
as the experienced subject control sessions. Each observation corresponds to a single play by an
ML. The dependent variable is a dummy for whether an ML chose to limit price. The baseline is
the first cycle of play in the abstract context crossover sessions. In order to capture changes over
time, dummies for cycles 2–4, cycles 3–4, and cycle 4 are included as independent variables.
Because of the overlapping structure of these dummies, the resulting estimates measure
differences between cycles. For example, the dummy for cycles 3–4 measures how much
37. The entry-rate differential between 4 and 6 is essentially the same in cycle 3: 59% for the meaningful context crossover case, 55% for the abstract crossover case, and 52% for the controls.
strategic play by MLs increases between cycle 2 and cycle 3. These time dummies are also
interacted with dummies for the meaningful context crossover sessions and the abstract context
control sessions. The entry-rate differential between 4 and 6 constitutes a final independent
variable, capturing MLs’ incentives to play strategically. Standard errors are corrected for
clustering at the individual subject level.
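The overlapping-dummy construction can be sketched as follows (coefficient names are illustrative). With dummies for cycles 2–4, 3–4, and 4, each coefficient picks up the increment between adjacent cycles rather than a level:

```python
# Overlapping time dummies: D24 = 1 for cycles 2-4, D34 = 1 for cycles 3-4,
# D4 = 1 for cycle 4 only. The linear index for cycle t is
#   b0 + b24*D24 + b34*D34 + b4*D4,
# so each coefficient measures a cycle-to-cycle difference.
def overlapping_dummies(cycle: int) -> tuple:
    return (int(cycle >= 2), int(cycle >= 3), int(cycle >= 4))

def index(cycle: int, b0: float, b24: float, b34: float, b4: float) -> float:
    d24, d34, d4 = overlapping_dummies(cycle)
    return b0 + b24 * d24 + b34 * d34 + b4 * d4

# Differencing consecutive cycles isolates a single coefficient:
# index(3, ...) - index(2, ...) equals b34, for example.
```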
Model 1 does not control for entry-rate differences. The critical variable identifying a
possible stall in the learning process is “Meaningful Context Crossover Cycles 2–4”, which
captures a difference in differences: how does the difference between the first and second cycles’
levels of strategic play differ between abstract and meaningful context crossover sessions? The
estimate is negative, as expected, but fails to achieve statistical significance at standard levels.
Moreover, the estimate for “Abstract Context Control Cycles 2–4”, which measures the
difference between the crossover with abstract context and the control sessions, is almost as large
(although again not statistically significant).
Model 2 adds the entry-rate differential; it has a large, positive, statistically significant,
marginal effect on strategic play.38 Now the estimate for “Meaningful Context Crossover
Cycles 2–4” is easily significant at the 5% level. Although not statistically significant, the
difference between this estimate and the estimate for “Abstract Context Control Cycles 2–4”
has increased substantially. Thus, once we control for the differing incentives to play
strategically, statistical analysis supports the conclusion that the change in meaningful context
generates a stall in the learning process compared to the case of superficial changes in payoffs
and no change in context.39
In Figure 3, the effect of context on the crossover is as large (if not larger) on whether
MLs choose 5 or 6 as on the overall level of strategic play. As an alternative specification, we
ran probit models identical to Models 1 and 2 except with a dummy for choice of 6 or 7 as the
dependent variable. We also ran ordered probit models with the same specifications as Models 1
and 2 but with the output level itself as the dependent variable. For both models—absent controls
for the entry-rate differences—the estimate for “Meaningful Context Crossover Cycles 2–4” is
negative and statistically significant.40 The size and statistical significance of the marginal effect
increases in both cases when the entry-rate differential is included. Thus, the effect of abstract
versus meaningful context has a greater impact on how MLs play strategically than on whether
they play strategically.41
Conclusion 2: In 1×1 games, a change in meaningful context in conjunction with a change in
the superficial structure of the game generates a stall in the learning process compared to when
only the structure changes. This negative effect is weaker than the positive effect of meaningful
context on cross-game learning found in Experiment 1. The change in meaningful context has a
38. This marginal effect is about a third larger than the one reported for Model 2 in Table 3, which probably reflects lower measurement error. The entry rate should be a better proxy for beliefs in Experiment 2 than in Experiment 1, since play of 6 is reasonably frequent throughout. An increase in the measurement error will bias the parameter estimate downward, which is consistent with the observed difference in estimates.
39. These results are not robust to clustering at the session level. Nevertheless we have confidence that they are capturing a real phenomenon because we observed similar results in an earlier experiment at the University of Pittsburgh (see the online Appendix for a summary of the Pittsburgh results).
40. For the probit model with choice of 6 or 7 as the dependent variable, z = 1.65 (p < 0.10); in the ordered probit, z = 2.13 (p < 0.05).
41. The difference in how MLs played strategically, choosing 5 or 6, is robust to clustering at the session level and is found in the Pittsburgh experiment as well.
greater effect on how MLs play strategically than on whether or not they play strategically, given
that the development of a clear distinguishing signal (6) is delayed with the change in
meaningful context. An earlier experiment yielded similar results, strengthening our confidence
in these conclusions.
Discussion: The results of Experiment 2 indicate that the use of meaningful context does not
necessarily lead to greater cross-game learning, especially if the context is changing. In the
meaningful context, crossover subjects must adjust to two differences—the change in labels and
the superficial changes in the payoff table—as opposed to only one difference in the abstract
context’s change in the payoff tables. If the change in labels increases the perceived difference
between the quantity and price games, then learning could be slowed either because subjects try
to formulate a new mental model of the game or because they are less likely to treat their
previous experience as relevant. Either interpretation suggests that truly radical differences
between signaling games—say, from the industrial organization context used here to a labor
model context as in Spence’s (1973) education game—are unlikely to generate much in the way
of positive cross-game learning because both the underlying structure and the entire context of
the game have changed.42 Examination of this conjecture is a subject for another paper.
5. Experiment 3: Team versus Individual Play in Experiment 2
The results from Experiment 2 provide an opportunity to check the robustness of our earlier
team-play results showing that strategic play develops much more rapidly in two-person teams
than in individuals. Will teams suffer a similar stall in the learning process with meaningful
context? Will they continue to beat the demanding truth-wins benchmark relative to the 1×1
games? What insights into the learning process will the team dialogues provide in this case?
Experimental Design and Procedures: In what follows, the term “player” refers to an agent in
the limit pricing game. A player is a single subject in Experiments 1 and 2 but consists of two
subjects in the team (2×2) sessions. Team sessions used between 20 and 28 subjects.43 Pairings
were determined randomly by the computer at the beginning of each session. Matches could not
be preserved between inexperienced and experienced subject sessions owing to attrition and
mixing of subjects from different sessions.44
Teammates were able to communicate and coordinate their decisions using an instant
messaging system; no other team had access to their messages. Subjects were instructed to use
the messaging system to coordinate their decisions, to be civil to each other, and not to identify
themselves. The message system was open
continuously; messages were time-stamped so we can match messages with the actions taken by
players.
Exactly the same payoff tables (and essentially the same computer interface) were
employed as in Experiment 2. For teams, the payoff table on the screen had a column labeled
42. This is what psychologists would refer to as “far transfer”, in contrast to the “near transfer” between the price and quantity games in Experiment 2 and the change from high- to low-cost Es in Experiment 1; see Salomon and Perkins (1989).
43. The smaller number of players in the team sessions was unavoidable because the 2x2 treatment must be run in multiples of four subjects (two pairs) and the lab had only 30 workstations.
44. In three of the inexperienced subject team sessions the software had to be restarted, which necessitated new team pairings. Two of these restarts were due to software crashes; the third was due to the session running beyond its advertised time (and so some subjects had to be released).
“partner’s choice” on the left and a column labeled “my choice” on the right. When a subject
entered a choice, the possible payoffs were highlighted in blue. When their partner entered a
choice, possible payoffs were highlighted in pink. Once the choices coincided, possible payoffs
were highlighted in red and teammates had 4 seconds to change their choice before it became
binding. Teams started out with 3 minutes to coordinate their choices. If a team failed to
coordinate within this time constraint, the dialogue box was closed and one teammate was
randomly selected as the “leader” whose choice was implemented unilaterally. Disagreements of
this sort were rare. The financial incentives were the same as in the 1x1 games: each team
member received the full payoff associated with their team’s choice, with the same conversion
rate as in the 1x1 games. Everything else was the same as in the 1x1 sessions.
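The coordination rule just described can be sketched as follows. This is our own illustrative reconstruction, not the software used in the sessions; the function name, the event format, and the seeded random tie-breaking are all assumptions made for the sketch.

```python
import random

def binding_choice(events_a, events_b, time_limit=180, confirm_window=4, seed=0):
    """Sketch of the team coordination rule: once both teammates' current
    entries coincide, the match must stand unchanged for `confirm_window`
    seconds to become binding; if no match survives before `time_limit`,
    a randomly selected "leader" decides unilaterally.
    events_a, events_b: lists of (timestamp, choice) for each teammate."""
    merged = sorted([(t, "a", c) for t, c in events_a] +
                    [(t, "b", c) for t, c in events_b])
    current = {"a": None, "b": None}
    match_value, match_time = None, None
    for t, who, choice in merged:
        # A matching pair that already stood for confirm_window seconds
        # became binding before this event arrived.
        if match_value is not None and t - match_time >= confirm_window:
            return match_value
        current[who] = choice
        if current["a"] is not None and current["a"] == current["b"]:
            match_value, match_time = current["a"], t
        else:
            match_value, match_time = None, None
    if match_value is not None and time_limit - match_time >= confirm_window:
        return match_value
    # No binding agreement within the time limit: a random "leader" decides.
    leader = random.Random(seed).choice(["a", "b"])
    return current[leader]

# One teammate proposes 6; the other briefly switches to 4, then agrees on 6.
print(binding_choice([(0, 6)], [(2, 6), (4, 4), (8, 6)]))  # 6
```

Note that a change of mind within the 4-second confirmation window cancels the pending match, mirroring the rule that a matched choice becomes binding only if neither teammate alters it in time.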
Team sessions were limited to the meaningful context. After playing a full session of the
quantity game, subjects were brought back for experienced subject sessions in which they played
a single cycle of the quantity game followed by three cycles of the price game, just as in
Experiment 2. Table 8 reports the number of sessions and subjects for each session type. Our reasons
for not conducting sessions in the abstract context will become apparent shortly.
[Insert Table 8 about here]
Experimental Results: Figure 5 shows the level of strategic play in the crossover sessions for
teams versus the corresponding 1x1 games from Experiment 2. First, note the huge difference
between the 2x2 and 1x1 games in the level of strategic play prior to the crossover. Virtually
all teams master strategic play prior to the crossover, with 94% of MLs playing strategically
versus only 49% in the 1x1 games. Following the crossover to the price game, the proportion of
strategic play increases to 99% for the 2x2 games; the pause in the development of strategic
play observed in 1x1 sessions with meaningful context does not materialize.
[Insert Figure 5 about here]
Given that strategic play with teams was close to 100% prior to the crossover and that
there was no downturn following the crossover, there was no scope for an abstract context to
facilitate adjustment to the superficial changes in the payoff tables. Hence we did not conduct a
parallel series of abstract crossover sessions. Rather, in the analysis that follows we compare the
team sessions with the corresponding 1x1 sessions, reporting what the team dialogues tell us
about how teams viewed the changes in the game’s structure. Before doing this, we summarize
what we have found so far.
Conclusion 3: Teams achieve close to 100% strategic play prior to the crossover from the quantity
to the price game, and they do not exhibit any pause in the development of strategic play
following this crossover.
Figures 6 and 7 compare the detailed evolution of play between the 1x1 and 2x2
games. Recall that, in both cases, the change from the quantity to price game occurs between the
first and second cycle of experienced subject play. In the cycle just prior to the crossover, play
by MLs in the 2x2 games is consistent with a separating equilibrium in that 6 is chosen more
than 90% of the time. However, many MHs choose 4 rather than 2, in part because the entry-rate
differential between 2 and 4 makes it more profitable to do so.45 Despite the change in context
and payoff values, play continues to move smoothly toward the efficient separating equilibrium
in the first cycle following the crossover, with the choice of 2 increasing for MHs. This evolution
continues in experienced cycles 3 and 4, leading to an extremely clean separating equilibrium. In
contrast, learning in the 1x1 games stalls following the crossover, and a clean separating
equilibrium has not emerged by the last cycle of experienced subject play.
[Insert Figure 6 about here]
[Insert Figure 7 about here]
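The entry-rate differential argument in footnote 45 can be reproduced directly from the high-cost payoffs in Table 1a (choice 2 pays 168 if entered on and 444 otherwise; choice 4 pays 132 and 408). A minimal check; the specific entry rates e2 and e4 used at the end are illustrative values of our own choosing, not observed rates.

```python
# High-cost M's quantity-game payoffs from Table 1a: (entered on, not entered on).
payoffs = {2: (168, 444), 4: (132, 408)}

def expected_payoff(choice, entry_rate):
    entered, not_entered = payoffs[choice]
    return entry_rate * entered + (1 - entry_rate) * not_entered

# Choosing 4 beats choosing 2 whenever the entry-rate differential e2 - e4
# exceeds (444 - 408) / (444 - 168) = 36/276, i.e. roughly 13%.
threshold = (444 - 408) / (444 - 168)
print(round(100 * threshold, 1))  # 13.0

# With a differential of 32.5% (as observed in the first experienced cycle),
# 4 is more profitable; e2 and e4 here are illustrative rates with that gap.
e2, e4 = 0.40, 0.075
assert expected_payoff(4, e4) > expected_payoff(2, e2)
```

The same 36-point payoff gap appears whether or not entry occurs, which is why a single entry-rate differential threshold governs the comparison.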
Table 9 reports a sampling of team dialogues during the first full cycle of play following
the crossover to the price game, primarily from the first period following the crossover. These
dialogues fall into two main categories: (1) teams that explicitly recognize the superficial nature
of the change in the game’s structure (50% of all teams); and (2) teams that lock into the new
equilibrium without any discussion of the change in payoff tables (44.1% of all teams). Of the
remaining two teams (out of 34), one did not figure things out immediately and so failed to play
strategically in their first opportunity, and one included a member who was confused by the
change in payoffs for several periods and so that team’s play was dictated by the member who
understood the changes.46
[Insert Table 9 about here]
Figure 8 compares team play against the truth-wins benchmark, which is shown by filled
diamonds (the bars to either side mark the 90% confidence interval). Strategic play in teams
equals the truth-wins benchmark in the first cycle of inexperienced subject play (falling within
the 90% confidence interval) and exceeds the benchmark thereafter.47 These results replicate
those previously reported for games with low-cost Es and abstract context (Cooper and Kagel
2005)—except that, in the experiment reported here, teams beat the truth-wins benchmark more
consistently and by a larger margin.
[Insert Figure 8 about here]
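The truth-wins benchmark follows the Lorge and Solomon (1955) model: a team is credited with strategic play if at least one of its members would have played strategically alone. A minimal sketch using the pre-crossover rates quoted above; the paper's actual benchmark is computed from the 1x1 data with simulated confidence bands, which we do not reproduce here.

```python
# Truth-wins benchmark (Lorge and Solomon 1955): a two-person team plays
# strategically if at least one member would have done so individually.
def truth_wins(p_individual, team_size=2):
    return 1 - (1 - p_individual) ** team_size

# With 49% strategic play by individual MLs prior to the crossover,
# the benchmark for two-person teams is 1 - 0.51**2, about 74%.
benchmark = truth_wins(0.49)
print(round(benchmark, 2))  # 0.74

# The observed 94% rate of strategic play by teams exceeds this benchmark,
# consistent with positive synergies from team play.
assert 0.94 > benchmark
```

Exceeding this benchmark is demanding: it requires teams to do better than a mechanical pooling of their members' individual insights.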
Conclusion 4: Team dialogues suggest that almost all teams saw completely through the change
from quantity to price games. Comparing strategic play by MLs in the 2x2 and 1x1 games,
teams meet or beat the demanding truth-wins benchmark throughout, a result that is consistent
with team play generating positive synergies.
6. Summary and General Conclusions
45. The minimum entry-rate differential that makes 4 more profitable than 2 is 13%, which is much less than the entry-rate differential of 32.5% observed in the first cycle of experienced subject play.
46. Reading the dialogues, we identified at least one case (possibly two, depending on the interpretation) in which one member of the team appeared to be confused at first but was immediately enlightened by the partner.
47. Because of clustering in the data, simulations are needed to correctly calculate the error bars. The simulation procedures are described in the online Appendix, along with probits showing that the differences in the level of strategic play for MLs between the 2x2 and 1x1 games are statistically significant in all cycles, with and without controls for entry-rate differentials.
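Footnote 47's point about clustering can be illustrated with a player-level (cluster) bootstrap: resampling whole players rather than individual decisions preserves the within-player correlation that would otherwise make the error bars too narrow. This is a generic sketch on toy data of our own construction, not the simulation procedure from the paper's online Appendix.

```python
import random

def cluster_bootstrap_ci(data_by_player, n_boot=2000, alpha=0.10, seed=0):
    """90% (by default) confidence interval for the mean rate of strategic
    play, resampling whole players with replacement so that within-player
    correlation of decisions is preserved.
    data_by_player: one list of 0/1 decisions per player."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(data_by_player) for _ in data_by_player]
        decisions = [d for player in resample for d in player]
        means.append(sum(decisions) / len(decisions))
    means.sort()
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

# Toy data: 10 players with 4 decisions each (not the experimental data).
players = [[1, 1, 1, 0]] * 7 + [[0, 0, 1, 0]] * 3
low, high = cluster_bootstrap_ci(players)
assert 0 <= low <= high <= 1
```

Resampling at the player level widens the interval relative to treating all forty decisions as independent, which is the direction of the correction the footnote describes.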
This paper looks at cross-game learning or what psychologists refer to as learning transfer. The
psychology literature on learning transfer is quite discouraging in that it typically shows zero (or
even negative) transfer except between closely similar situations (see, e.g., Salomon and Perkins
1989). Our experiments are designed to explore when positive cross-game learning will and will
not occur. Experiment 1 finds that meaningful context can play an important role in promoting
strong positive cross-game learning. This role for context falls completely outside the typical
domain of inquiry in economics, for which the only thing that matters is the underlying
mathematical structure of the problem. However, it is consistent with research in psychology
showing that meaningful context can substantially improve deductive reasoning (Wason and
Johnson-Laird 1972).
The psychology literature offers several models of deductive reasoning processes, with
mental models being the currently favored hypothesis. Within this approach, meaningful context
can foster positive cross-game learning by facilitating shortcuts in the reasoning process that are
appropriate to the question at hand. The results of previous studies (Cooper and Kagel 2005, in
press) indicate that the positive cross-game learning in Experiment 1 is linked to strategic
empathy—the ability to think from another’s point of view—which enables Ms to anticipate
changes in Es’ behavior following a change in Es’ payoffs. Considering strategic empathy as a
mental model, meaningful context that is appropriate to the situation at hand appears to stimulate
its development and thereby improve cross-game learning.
Experiment 2 shows that meaningful context is not a panacea for fostering cross-game
learning. A change in context, in conjunction with changes in the superficial structure of the
game, generates a stall in the learning process compared to sessions in which only the superficial
structure changes, holding the (abstract) context constant. From the mental model point of view,
the change in context leaves subjects unaware of (or confused about) the relevance of their pre-crossover model of the situation; the result is a pause in learning while subjects adjust to both the
change in payoffs and the change in context.
Repeating Experiment 2 with two-person teams does not result in a stall in the learning
process, as virtually all teams see directly through the change in context. We are unaware of any
work on cross-game learning in the psychology literature for teams. The results of Experiment
3, along with those reported in Cooper and Kagel (2005) for cross-game learning by teams, show
that team play can contribute to positive cross-game learning. Experiment 3 provides additional
evidence of the superiority of strategic play in teams compared to individuals in signaling games.
There are a number of open questions regarding the role of teams in fostering strategic
play in games: the scope of these effects, the mechanism underlying them (e.g., are there positive
synergies between the cognitive processes of the two team members, or is the positive effect of
team play driven by the fact that team members must articulate their thought process and thus
think harder about the problem at hand?), the impact of increasing team size, and the role of
changing team membership. All of these issues are currently under investigation.
Our results on the effects of meaningful context in cross-game learning also raise a
number of new questions. First, meaningful context leads to a large increase in strategic play
following the crossover in Experiment 1, yet it results in only marginally increased levels of
strategic play for inexperienced subjects in Cooper and Kagel (2003). What accounts for the far
stronger context effect in the crossover treatment? We suspect there is little scope for differential
effects with inexperienced subjects in 1x1 games because strategic play by low-cost
monopolists develops slowly, emerging only through a trial-and-error learning process with
repeated interactions. For inexperienced subjects, there is so much to learn that is foreign to
them that there is little scope for deductive reasoning processes to play a role. In the crossover
treatment in Experiment 1, subjects should have already formed a working mental model of the
game that can be deployed in response to the change in entrants’ payoffs, with meaningful
context both improving the quality of the mental model and increasing the chance that it will be
accessed following the crossover.
Second, we would be remiss if we left the reader with the impression that meaningful
context (as long as it is not changed) will necessarily improve performance in games. One can
readily imagine situations where the use of meaningful context triggers an inappropriate mental
model that short-circuits the reasoning process, resulting in suboptimal actions. Indeed,
something like this seems to have been at work in Burns (1985), who compared professional
wool buyers with student subjects in a market game. In that experiment, the wool buyers appear
to have used the meaningful context to trigger rules of thumb that were appropriate in their field
experience but not in the laboratory game. Further exploration of the scope of meaningful
context’s positive impact on cross-game learning is needed.
As noted in the Introduction, cross-game learning plays a critical role in determining
whether convergence results developed for stable environments (e.g., identical games played
repeatedly) will apply to settings where the game changes over time. Because play in field
settings is inherently framed within a meaningful context, the results of Experiment 1 suggest
that changes in the underlying structure of the game should result in a decent level of cross-game
learning as long as the context remains the same and the mental model developed remains
relevant. However, the results of Experiment 2 suggest limits on this ability to learn across
games. “Far transfer” is unlikely to occur: individuals who have gained experience with one
game are unlikely to be any better than novices at a different game framed in a different context,
even if the same strategic insights are relevant. Thus, understanding basic concepts does little
good when individuals fail to realize their continued relevance. The failure of cross-game
learning in Experiment 2 is rather mild, but we conjecture it would be worse for subjects who
were switched between games where both the context and the mathematical structure were
changed in nontrivial ways—even if similar concepts applied. For example, we hypothesize that
giving subjects a lecture about Akerlof’s (1970) lemons model would not reduce the incidence of
the winner’s curse in common-value auctions even though the concept of adverse selection is
relevant in both settings. This is a topic of our ongoing research.
The results from our experiments have interesting, albeit speculative, implications for a
number of important phenomena outside the laboratory. For example, what role do consultants
and experts have to play in alleviating real-world cross-game learning problems? Consultants
could provide agents with a relevant mental model of the environment that transcends the
agents’ past experience, mitigating many of the problems associated with cross-game learning
in field settings. For this to work, several assumptions must be satisfied. First, the
stakes must be sufficiently large that it pays to hire experts; although there are probably many
situations in which this is true, there are also many cases where it is not. Second, the experts
must make the correct mapping from the particulars of the field setting to models and situations
with which they are familiar, which cannot be taken for granted. Third, games are much more
complicated when played outside the lab, since there are competing motivations for the
principals who hire the consultants; as a result, even valid expert advice can easily fall on deaf
ears.
Finally, in light of the varied effects of meaningful context, we are frequently asked what
format we advise experimenters to employ: meaningful or abstract context? There is
unfortunately no pat answer to this. As already noted, there are only small differences between
the use of meaningful and abstract context for inexperienced subjects playing their first signaling
games (Cooper and Kagel 2003).48 Our conjecture at this point is that meaningful context plays
its greatest role when subjects have already had an opportunity to develop a mental model and
must then apply that model to a new situation. Thus, exercises designed to test comparative static
predictions are particularly likely to be sensitive to context effects.
Beyond this, our research unambiguously shows that experimenters and economists in
general need to be aware that semantic content can affect behavioral outcomes—contrary to the
usual assumption that only the problem’s mathematical structure matters. The potential
sensitivity of subject behavior to context may play a role in generating unanticipated outcomes in
how games are played. For those using meaningful context, we do have one clear suggestion: use
as neutral a formulation as possible in order to avoid unwanted responses stemming
from experience outside the laboratory. Before we can confidently offer further methodological
advice, we need a firmer grasp of why and in what settings the use of meaningful context is
likely to matter.
References
Akerlof, George A. (1970). “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.”
Quarterly Journal of Economics, 84(3), 488-500.
Brandts, Jordi and Charles A. Holt (1992). “An Experimental Test of Equilibrium Dominance in
Signaling Games.” American Economic Review, 82(5), 1350-1365.
Brandts, Jordi and Charles A. Holt (1993). “Adjustment Patterns and Equilibrium Selection in
Experimental Signaling Games.” International Journal of Game Theory, 22(3), 279–302.
Burns, Penny (1985). “Experience and Decision Making: A Comparison of Students and Businessmen in
a Simulated Progressive Auction.” in Vernon L. Smith (ed.) Research in Experimental
Economics, Vol 3, Greenwich: JAI Press, 139-53.
Cho, In-Koo and David Kreps (1987). “Signaling Games and Stable Equilibria.” Quarterly Journal of
Economics, 102, 179-221.
Cooper, David J., Susan Garvin and John Kagel (1997a). "Adaptive Learning vs. Equilibrium
Refinements in an Entry Limit Pricing Game." Economic Journal, 107, 553-575.
Cooper, David J., Susan Garvin and John Kagel (1997b). "Signaling and Adaptive Learning in an Entry
Limit Pricing Game." Rand Journal of Economics, 28, 662-683.
Cooper, David J. and John H. Kagel (2003). “The Impact of Meaningful Context on Strategic Play in
Signaling Games.” Journal of Economic Behavior and Organization, 50, 311-37.
Cooper, David J. and John H. Kagel (2005). “Are Two Heads Better than One? Team versus Individual
Play in Signaling Games.” American Economic Review, 95(3), 477-509
Cooper, David J. and John H. Kagel (in press). “Learning and Transfer in Signaling Games.” Economic
Theory.
Davis, James H. (1992). “Some Compelling Intuitions About Group Consensus Decisions, Theoretical
and Empirical Research, and Interpersonal Aggregation Phenomena: Selected Examples, 1950-1990.” Organizational Behavior and Human Decision Processes, 52(1), 3-38.
Dominowski, Roger L. (1995). “Content Effects in Wason’s Selection Task.” in S.E. Newstead and J. St.
B.E. Evans, (Eds.), Perspectives on Thinking and Reasoning. Lawrence Erlbaum Associates,
Hillsdale, N. J., 41-65.
48. For a somewhat larger reported effect of meaningful versus abstract context absent any cross-game learning issues, see Huck, Normann, and Oechssler (2004), who find that meaningful context leads to greater collusion in a two-player oligopoly experiment.
Dyer, Douglas and John H. Kagel (1996). “Bidding in Common Value Auctions: How the Commercial
Construction Industry Corrects for the Winner’s Curse.” Management Science, 42(10), 1463-1475.
Gick, Mary L. and Keith J. Holyoak (1980). “Analogical Problem Solving.” Cognitive Psychology, 12,
306-355.
Gilboa, Itzhak and David Schmeidler (1995). “Case Based Decision Theory.” Quarterly Journal of
Economics, 110, 605-639.
Hennig-Schmidt, Heike, Zhu-Yu Li, and Chaoliang Yang (2008). “Why People Reject Advantageous
Offers – Non-monotonic Strategies in Ultimatum Bargaining: Evaluating a Video Experiment
Run in PR China.” Journal of Economic Behavior and Organization, 65, 373-384.
Ho, Teck-Hua, Colin Camerer, and Keith Weigelt (1998). “Iterated Dominance and Iterated Best-Response in p-Beauty Contests.” American Economic Review, 88(4), 947-69.
Huck, Steffen, Philippe Jehiel, and Tom Rutter (2006). “Information Processing, Learning and Analogy-Based Expectations: An Experiment.” Unpublished manuscript, University College London.
Huck, Steffen, Kai A. Konrad, Wieland Müller and Hans-Theo Normann (2007). “The Merger Paradox
and Why Aspiration Levels Let It Fail in the Laboratory.” The Economic Journal, 117 (7), 1073–
1095.
Huck, Steffen, Hans-Theo Normann and Jörg Oechssler (2004). “Two Are Few and Four Are Many:
Number Effects in Experimental Oligopolies.” Journal of Economic Behavior and Organization,
53, 435-446.
Jehiel, Philippe (2005). “Analogy-Based Expectation Equilibrium.” Journal of Economic Theory, 123,
81-104.
Johnson-Laird, Philip N. (1999). “Deductive Reasoning.” Annual Review of Psychology, 50, 109-135.
Johnson-Laird, Philip N. and Ruth M.J. Byrne (1991). Deduction. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Kagel, John H. (1995). “Cross-Game Learning: Experimental Evidence from First-Price and English
Common Value Auctions.” Economics Letters, 49, 163-70.
Kagel, John H. and Dan Levin (1986). “The Winner’s Curse and Public Information in Common Value
Auctions.” American Economic Review, 76(5), 894-920.
Liang, Kung-Yee, and Scott L. Zeger (1986). “Longitudinal Data Analysis Using Generalized Linear
Models.” Biometrika, 73(1), 13-22.
Lorge, Irving and Herbert Solomon (1955). “Two Models of Group Behavior in the Solution of Eureka-Type
Problems.” Psychometrika, 20(2), 139-48.
Milgrom, Paul and John Roberts (1982). "Limit Pricing and Entry Under Incomplete Information: An
Equilibrium Analysis." Econometrica, 50, 443-459.
Miller, Ross M. and Charles R. Plott (1985). “Product Quality Signaling in Experimental Markets.”
Econometrica, 53(4), 837-872.
Moulton, Brent (1986). “Random Group Effects and the Precision of Regression Estimates.” Journal of
Econometrics, 32, 385-397.
Perkins, David N. and Gavriel Salomon (1988). “Teaching for Transfer.” Educational Leadership, 46, 22-32.
Polk, Thad A. and Allen Newell (1995). “Deduction as Verbal Reasoning.” Psychological Review, 102,
533-566.
Salomon, Gavriel and David N. Perkins (1989). “Rocky Roads to Transfer: Rethinking Mechanisms of a
Neglected Phenomenon.” Educational Psychologist, 24, 113-142.
Schotter, Andrew, Keith Weigelt, and Charles Wilson (1994). “A Laboratory Investigation of Multiperson Rationality and Presentation Effects.” Games and Economic Behavior, 6(3), 445-68.
Spence, A. Michael (1973). "Job Market Signaling." Quarterly Journal of Economics, 87(3), 355-374.
Wason, Peter C. (1966). “Reasoning.” in B.M. Foss, (ed.), New Horizons in Psychology, I. Penguin
Books, Baltimore, MD., 135-151.
Wason, Peter C. and Philip N. Johnson-Laird (1972). The Psychology of Deduction: Structure and
Content. Cambridge, MA: Harvard University Press.
Weber, Roberto A. and Scott Rick (2008). “Meaningful Learning and Transfer of Learning in Games
Played Repeatedly Without Feedback.” unpublished manuscript. University of Pennsylvania, The
Wharton School.
Table 1a - Quantity Game
Existing Firm’s Payoffs as a Function of Other Firm’s Choice
(A Player’s Payoffs as a Function of B Player’s Choice)

High Cost (A1)                            Low Cost (A2)
Output         Enter (B’s choice)         Output         Enter (B’s choice)
(A’s choice)   This (x)    Other (y)      (A’s choice)   This (x)    Other (y)
1                 150         426         1                 250         542
2                 168         444         2                 276         568
3                 150         426         3                 330         606
4                 132         408         4                 352         628
5                  56         182         5                 334         610
6                -188         -38         6                 316         592
7                -292        -126         7                 213         486
Table 1b
Other Firm’s Payoffs
(B’s Payoffs)

                           Existing Firm’s Type (A’s Type)
Other Firm Enters          High Cost     Low Cost      Expected
(B’s Choice)               (A1)          (A2)          Payoff
This (x)                   300           74            187
Other (y)                  250           250           250

Note: Information on expected payoffs was not displayed on subjects’ payoff tables.
Table 1c
Other Firm’s Payoffs
(B’s Payoffs)

                           Existing Firm’s Type (A’s Type)
Other Firm Enters          High Cost     Low Cost      Expected
(B’s Choice)               (A1)          (A2)          Payoff(a)
This (x)                   500           200           350
Other (y)                  250           250           250

Labeling in bold applies to meaningful context. Labeling in italics and in parentheses applies to abstract
context.
(a) This information was not provided as part of payoff tables.
Table 2
Summary of Sessions Included in Experiment 1

Session Type                            Type                          Context     Previous Experience                 Rounds (Payoff Tables)          # Sessions   # Subjects
Inexperienced, Low Cost Entrants        Controls                      Abstract    None                                1–24 (1a & 1c)                  8            128
Inexperienced, Low Cost Entrants        Controls                      Meaningful  None                                1–24 (1a & 1c)                  4            62
Experienced, Low Cost Entrants          Controls                      Abstract    Quantity Game, Low Cost Entrants    1–8 (1a & 1c)                   6            83
Experienced, Low Cost Entrants          Controls                      Meaningful  Quantity Game, Low Cost Entrants    1–8 (1a & 1c)                   4            52
Experienced, High & Low Cost Entrants   Crossover (High → Low Cost)   Abstract    Quantity Game, High Cost Entrants   1–8 (1a & 1b), 9–32 (1a & 1c)   4            50
Experienced, High & Low Cost Entrants   Crossover (High → Low Cost)   Meaningful  Quantity Game, High Cost Entrants   1–8 (1a & 1b), 9–32 (1a & 1c)   3            44

Note: For most of the experienced control sessions, subjects were crossed to another game after the eighth round as part of Experiment 2.
Table 3
Comparing Crossover Effects Going from High to Low Cost Entrants
Probit Regressions: Dependent Variable is Strategic Choice by MLs
(816 obs, 136 players)

Variable                                       Model 1           Model 2
Crossover * Cycle 2                            .174*** (.060)    .078 (.081)
Crossover * Cycle 3                            .479*** (.072)    .373*** (.080)
Meaningful Context * Crossover * Cycle 1       .282*** (.077)    .179 (.098)
Meaningful Context * Crossover * Cycle 2       .414*** (.097)    .254** (.111)
Meaningful Context * Crossover * Cycle 3       .120 (.107)       .186* (.102)
Entry Rate Differential                                          .761*** (.219)
Log Likelihood                                 -320.25           -296.25

Notes: Standard errors corrected for clustering at the player level. This table reports marginal effects.
Marginal effects for binary variables are the predicted difference in the probability of strategic play
between setting the variable equal to one and zero, evaluated at average values of the other variables
(provided a change in the binary variable is possible; e.g., for the variable “Meaningful Context *
Crossover * Cycle 2” to switch between zero and one, the variable “Crossover * Cycle 2” must be equal
to one).
* statistically significant at the 10% level; ** statistically significant at the 5% level; *** statistically significant at the 1% level.
Table 4
Crossover from High to Low Cost Entrants: Comparing Outcomes Against Within-Treatment Controls
Probit Regressions: Dependent Variable is Strategic Choice by MLs

                               Abstract Context, 1x1               Meaningful Context, 1x1
                               (178 subjects, 1239 obs)            (106 subjects, 744 obs)
Variable                       Model 1          Model 2            Model 3          Model 4
Inexperienced Cycle 2          .198*** (.038)   .112*** (.042)     .076 (.048)      .047 (.053)
Experienced Cycle 1            .271*** (.054)   .068 (.064)        .251*** (.071)   .145* (.082)
Crossover Cycle 1              -.112** (.041)   -.161** (.052)     .218*** (.078)   .125 (.089)
Crossover Cycle 2              -.135* (.077)    -.188*** (.064)    .448*** (.086)   .318*** (.114)
Crossover Cycle 3              .097 (.094)      .123 (.088)        .285*** (.103)   .329*** (.099)
Entry Rate Differential                         .400*** (.080)                      .352** (.155)
Log Likelihood                 -705.37          -661.81            -410.69          -405.01

Notes: Standard errors corrected for clustering at the player level. This table reports marginal effects. Marginal effects for binary variables are
the predicted difference in the probability of strategic play between setting the variable equal to one and zero (see notes to Table 3).
* statistically significant at the 10% level; ** statistically significant at the 5% level; *** statistically significant at the 1% level.
Table 5a - Price Game
Existing Firm’s Payoffs as a Function of Other Firm’s Choice
(A Player’s Payoffs as a Function of B Player’s Choice)

Low Cost (A2)                             High Cost (A1)
Price          Enter (B’s choice)         Price          Enter (B’s choice)
(A’s Choice)   This (x)    Other (y)      (A’s Choice)   This (x)    Other (y)
1                 226         461         1                -218         -75
2                 315         552         2                -129         -22
3                 330         568         3                  91         200
4                 346         583         4                 157         394
5                 327         564         5                 172         409
6                 280         531         6                 187         425
7                 258         529         7                 172         409
Table 5b
Other Firm’s Payoffs
(B’s Payoffs)

                           Existing Firm’s Type (A’s Type)
Other Firm Enters          Low Cost      High Cost     Expected
(B’s Choice)               (A2)          (A1)          Payoff(a)
This (x)                   420           180           300
Other (y)                  220           220           220

Labeling in bold applies to meaningful context. Labeling in italics and in parentheses applies to abstract
context.
(a) This information was not provided as part of payoff tables.
Table 6
Summary of Sessions Included in Experiment 2

Session Type                       Type                           Context     Previous Experience                 Rounds (Payoff Tables)          # Sessions   # Subjects
Inexperienced, Low Cost Entrants   Controls                       Abstract    None                                1–24 (1a & 1c)                  8            128
Inexperienced, Low Cost Entrants   Controls                       Meaningful  None                                1–24 (1a & 1c)                  4            62
Experienced, Low Cost Entrants     Controls                       Abstract    Quantity Game, High Cost Entrants   1–32 (1a & 1c)                  ??           ??
Experienced, Low Cost Entrants     Controls                       Abstract    Quantity Game, High Cost Entrants   1–32 (1a & 1c)                  3            42
Experienced, Low Cost Entrants     Crossover (Quantity → Price)   Abstract    Quantity Game, Low Cost Entrants    1–8 (1a & 1c), 9–32 (5a & 5b)   4            50
Experienced, Low Cost Entrants     Crossover (Quantity → Price)   Meaningful  Quantity Game, Low Cost Entrants    1–8 (1a & 1c), 9–32 (5a & 5b)   4            54
Table 7
Crossover from Quantity to Price Game: 1x1 Sessions
Probit Regressions: Dependent Variable is Strategic Choice by MLs
(1116 obs, 140 players)

Variable                                        Model 1          Model 2
Cycles 2 – 4                                    .141** (.063)    .339*** (.072)
Cycles 3 – 4                                    .020 (.063)      -.038 (.062)
Cycle 4                                         .057 (.056)      -.107 (.087)
Meaningful Context * Crossover * Cycles 1 – 4   -.109 (.104)     -.071 (.108)
Meaningful Context * Crossover * Cycles 2 – 4   -.137 (.087)     -.197** (.082)
Meaningful Context * Crossover * Cycles 3 – 4   .187* (.096)     .095 (.110)
Meaningful Context * Crossover * Cycle 4        -.030 (.074)     .056 (.095)
Abstract Context * Control * Cycles 1 – 4       -.082 (.108)     .030 (.115)
Abstract Context * Control * Cycles 2 – 4       -.101 (.094)     -.113 (.082)
Abstract Context * Control * Cycles 3 – 4       .104 (.092)      .012 (.103)
Abstract Context * Control * Cycle 4            .026 (.087)      .067 (.108)
Entry Rate Differential                                          1.028*** (.233)
Log Likelihood                                  -739.25          -685.21

Note: The base is 1x1 abstract context crossover sessions. Standard errors corrected for clustering at the
player level. This table reports marginal effects. Marginal effects for binary variables are the predicted
difference in the probability of strategic play between setting the variable equal to one and zero (see notes
to Table 3).
* statistically significant at the 10% level; ** statistically significant at the 5% level; *** statistically significant at the 1% level.
Table 8
Summary of Sessions Included in Experiment 3

Session Type                               Player Type   Context     Previous Experience                Rounds (Payoff Tables)          # Sessions   # Subjects
Inexperienced, Low Cost Entrants           Individual    Meaningful  None                               1–24 (1a & 1c)                  4            62
Inexperienced, Low Cost Entrants           Team          Meaningful  None                               1–24 (1a & 1c)                  3            68
Experienced, Crossover Quantity → Price    Individual    Meaningful  Quantity Game, Low Cost Entrants   1–8 (1a & 1c), 9–32 (5a & 5b)   4            54
Experienced, Crossover Quantity → Price    Team          Meaningful  Quantity Game, Low Cost Entrants   1–8 (1a & 1c), 9–32 (5a & 5b)   3            68
Table 9
Sample Quotes from Team Dialogues
1. Explicitly recognize that these are essentially the same payoff tables, only the numbers and layout have changed: 50.0% (17/34)
16: “it looks like about the same deal.”
6: “yeah, they just changed “output” with “price” and the numbers finally changed”
19: “this is almost the reverse as the other table “
9: “yea it’s the same idea just different numbers”
8: “ok 2 is the new 6 lol (laugh out loud)”
15: “well this is certainly a twist”
17: “haha yeah ill (sic) have to be careful to realize the numbers are switched around”
16: “if low cost, choose 2”
16: “it’s the same thing pretty much, they’re just trying to throw us off”
11: “I am guessing everything will flip around, high costs will pick 6 and low costs will pick 2”
2. Play strategically from the start but without any explicit reference to the payoff tables being essentially the same: 44.1% (15/34)
5: “what do you think we should do?”
24: “its best no matter what to do 6 if you are high”
5: “yeah”
24: “but if we can trick them into thinking we’re low by picking 4 then they will pick other”
2: “if we pick 2 for every low maybe they will catch on”
17: “do you wanna stick to 2 then?”
2: “yeah”
17: “if we get high cost we could go with 4 and then maybe they would choose other since 4 is the highest on the low side”
2: “that’ll work”
1: “low we want to pick 2” “high pick 4”
…..
19: “we’ll see how soon other firms start catching on to high’s picking 4” “pretty quickly, I guess”
1: “haha we should try 3 or 6” [after being entered on when high cost and picking 4]
19: “2 for low, 6 for high”
3: “yeah”
Figure 1
Experiment 1: High Cost Entrant to Low Cost Entrant Crossovers
[Four panels plot the proportion of MH and ML choices at each output level (entry rates in parentheses) for abstract- and meaningful-context crossover sessions, in Experienced Cycle 1 (before crossover) and Experienced Cycle 2 (after crossover).]
Figure 2
Strategic Play by MLs in High to Low Cost Entrant Crossovers
[Two panels, one for abstract context and one for meaningful context, plot the percentage of ML strategic play in control and crossover sessions over Cycles 1 – 3.]
Figure 3
Quantity to Price Game Crossovers
[Six panels plot the proportion of choices at each output level (entry rates in parentheses) for no-crossover abstract, crossover abstract, and crossover meaningful sessions, in Experienced Cycle 1 (before crossover) and Experienced Cycle 2 (after crossover).]
Figure 4: Effect of Context on Quantity-Price Crossover
1 x 1 Treatment Data
[Plots the percentage of ML strategic play over Cycles 1 – 4 for generic-context control, generic-context crossover, and meaningful-context crossover sessions.]
Figure 5
Strategic Play by MLs in Quantity to Price Game Crossovers -- Individuals vs. Teams
[Plots the percentage of ML strategic play over Cycles 1 – 4 for meaningful-context crossover sessions with individuals (1x1) and teams (2x2).]
[Six panels plot the percentage of MH and ML choices at each output level (entry rates in parentheses) for Inexperienced Cycles 1 – 2 and Experienced Cycles 1 – 4.]
FIGURE 6. POOLED DATA FOR 1X1 SESSIONS WITH LOW COST ENTRANTS AND MEANINGFUL CONTEXT.
[Six panels plot the percentage of MH and ML choices at each output level (entry rates in parentheses) for Inexperienced Cycles 1 – 2 and Experienced Cycles 1 – 4.]
FIGURE 7. POOLED DATA FOR 2X2 SESSIONS WITH LOW COST ENTRANTS AND MEANINGFUL CONTEXT.
[Plots the proportion of strategic play by MLs across Inexperienced Cycles 1 – 2 and Experienced Cycles 1 – 4, comparing the 1 x 1 treatment, the 2 x 2 treatment, and the median of simulated teams.]
FIGURE 8. COMPARING THE DEVELOPMENT OF STRATEGIC PLAY FOR MLS IN 2X2 WITH 1X1 SESSIONS IN GAMES WITH LOW COST ENTRANTS AND MEANINGFUL CONTEXT.