Cognitive hierarchies and emotions in behavioral game theory

Colin F. Camerer (1,2) and Alec Smith (1)

(1) Division of Humanities and Social Sciences
(2) Computation and Neural Systems
California Institute of Technology

4/25/11 11:15 am. Comments welcome. Prepared for the Oxford Handbook of Thinking and Reasoning (K. J. Holyoak & R. G. Morrison, Editors). This research was supported by The Betty and Gordon Moore Foundation and by National Science Foundation grant NSF-SES 0850840. Correspondence to: [email protected].
ABSTRACT

Until recently, game theory was not focused on cognitively plausible models of choices in human strategic interactions. This chapter describes two new approaches that do so. The first approach, cognitive hierarchy modeling, assumes that players have different levels of partially accurate representations of what others are likely to do, which vary from heuristic and naïve to highly sophisticated and accurate. There is reasonable evidence that this approach explains choices (better than traditional equilibrium analysis) in dozens of experimental games and some naturally occurring games (e.g., a Swedish lottery, auctions, and consumer reactions to undisclosed quality information about movies). Measurement of eyetracking and fMRI activity during games is also suggestive of a cognitive hierarchy process. The second approach, psychological games, allows value to depend upon choice consequences and on beliefs about what will happen. This modeling framework can link cognition and emotion, and express social emotions such as “guilt”. In a psychological game, guilt is modeled as the negative emotion of knowing that another person is unpleasantly surprised that your choice did not benefit them (as they had expected). Our hope is that these new developments in a traditionally cognitive field (game theory) will engage the interest of psychologists and others interested in thinking and social cognition.

KEYWORDS: Bounded rationality, cognitive hierarchy, emotions, game theory, psychological games, strategic neuroscience
I. Introduction

This chapter is about cognitive processes in strategic thinking. The theory of games provides the most comprehensive framework for thinking about the valued outcomes that result from strategic interactions. The theory specifies how “players” (that’s game theory jargon) might choose high-value strategies to guess likely choices of other players. Traditionally, game theory has been focused on finding “solutions” to games based on highly mathematical conceptions of rational forecasting and choice. More recently (starting with Camerer, 1990), behavioral game theory models have extended the rational theories to include stochastic response, limits on inferring correctly what other players will do, social emotions and considerations such as guilt, anger, reciprocity, or social image, and modulating factors including inferences about others’ intentions.

Two general behavioral models that might interest cognitive psychologists are the focus of this chapter:i cognitive hierarchy modeling and psychological game theory.
Conventional game theory is typically abstract, mathematically intimidating, computationally implausible, and algorithmically incomplete. It is therefore not surprising that conventional tools have not gained traction in cognitive psychology. Our hope is that the more psychologically plausible behavioral variants could interest cognitive psychologists. Once limited strategic thinking is the focus, questions of cognitive representation, categorization of different strategic structures, the nature of social cognition, and how cooperation is achieved all become more interesting, researchable questions. The question of whether or not people are using the decision-making algorithms proposed by these behavioral models can also be addressed with observables (such as response times and eyetracking of visual attention) familiar in cognitive psychology. Numerical measures of value and belief derived in these theories can also be used as parametric regressors to identify candidate brain circuits that appear to encode those measures. This general approach has been quite successful in studying simpler nonstrategic choice decisions (Glimcher, Camerer, Fehr, & Poldrack, 2008) but has been applied infrequently to games (see Bhatt & Camerer, in press).
What is a game?

Game theory is the mathematical analysis of strategic interaction. It has become a standard tool in economics and theoretical biology, and is increasingly used in political science, sociology, and computer science. A game is mathematically defined as a set of players, descriptions of their information, a fixed order of the sequence of choices by different players, and a function mapping players’ choices and information to outcomes. Outcomes may include tangibles like corporate profits or poker winnings, as well as intangibles like political gain, status, or reproductive opportunities (in biological and evolutionary psychology models). The specification of a game is completed by a payoff function that attaches a numerical value or “utility” to each outcome.

The standard approach to the analysis of games is to compute an equilibrium point, a set of strategies for each player which are simultaneously best responses to one another. This approach is due originally to John Nash (1950), building on earlier work by von Neumann and Morgenstern (1947). Solving for equilibrium mathematically requires solving simultaneous equations in which each player's strategy is an input to the other player's calculation of expected payoff. The solution is a collection of strategies, one for each player, where each player’s strategy maximizes his expected payoff given the strategies of the other players.

From the beginning of game theory, how equilibrium might arise has been the subject of ongoing discussion. Nash himself suggested that equilibrium beliefs might result from changes in “mass action” as populations learn about what others do and adjust their strategies toward optimization.ii More recently, game theorists have considered the epistemic requirements for Nash equilibrium by treating games as interactive decision problems (cf. Brandenburger, 1992). It turns out that Nash equilibrium for n-player games requires very strong assumptions about the players’ mutual knowledge: that all players share a common prior belief about chance events, know that all players are rational, and know that their beliefs are common knowledge (Aumann & Brandenburger, 1995).iii The latter requirement implies that rational players be able to compute beliefs about the strategies of coplayers and all states of the world, beliefs about beliefs, and so on, ad infinitum.
Two Behavioral Approaches: Cognitive Hierarchy and Psychological Games

Cognitive hierarchy (CH) and psychological games (PG) models both modify assumptions from game theory to capture behavior more realistically. The CH approach assumes that boundedly rational players are limited in the number of interpersonal iterations of strategic reasoning they can do (or choose to do). There are five elements to any CH predictive model:
1. A distribution of the frequency of level types, f(k);
2. Actions of level 0 players;
3. Beliefs of level-k players (for k = 1, 2, …) about other players;
4. Assessment of expected payoffs based on the beliefs in (3); and
5. A stochastic choice response function based on the expected payoffs in (4).
The typical approach is to make precise assumptions about elements (1)-(5) and see how well that specific model fits experimental data from different games. Just as in testing a cooking recipe, if the model fails badly then it can be extended and improved.

In Camerer, Ho, and Chong (2004), the distribution of level-k types is assumed to follow a Poisson distribution with a mean value τ. Once the value of τ is chosen, the complete distribution is known. The Poisson distribution has the sensible property that the frequency of very high-level types drops off quickly for higher values of k. (For example, if the average number of thinking steps is τ=1.5, then less than 2% of players are expected to do five or more steps of thinking.) To further specify the model, level 0 types are usually assumed to choose each strategy equally often.iv
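The tail claim in the parenthetical example is easy to check numerically. A minimal Python sketch (the function name is ours):

```python
import math

def poisson_pmf(k, tau):
    """Poisson frequency f(k) of level-k thinkers with mean tau."""
    return math.exp(-tau) * tau**k / math.factorial(k)

tau = 1.5
# Share of players doing five or more thinking steps
tail = 1 - sum(poisson_pmf(k, tau) for k in range(5))
print(f"share doing 5+ steps at tau={tau}: {tail:.4f}")  # about 0.019, under 2%
```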
In the CH approach, level-k players know the correct proportions of lower-level players, but do not realize there are other, even higher-level players (perhaps reflecting overconfidence in relative ability). An alternative assumption (called “level-k” modeling) is that a level-k player thinks all other players are at level k−1. Under these assumptions, each level of player in a hierarchy can then compute the expected payoffs to different strategies: level 1’s compute their expected payoff (knowing what level 0’s will do); level 2’s compute the expected payoff given their guess about what level 1’s and 0’s do, and how frequent those level types are; and so forth. In the simplest form of the model, players choose the strategy with the highest expected payoff (the “best response”); but it is also easy to use a logistic or power stochastic “better response” function (e.g., Luce, 1959).
Because the theory is hierarchical, it is easy to program and solve numerically using a “loop”.
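To illustrate, here is a minimal sketch of such a loop for two-player matrix games, assuming Poisson level frequencies, uniformly randomizing level-0 players, and exact best response with ties split evenly (all function names are ours; the demonstration payoffs are those of the “betting” game analyzed later in the chapter, Table 1):

```python
import math

def poisson(tau, K):
    """Truncated Poisson frequencies f(0..K) of thinking levels."""
    return [math.exp(-tau) * tau**k / math.factorial(k) for k in range(K + 1)]

def best_reply(ev):
    """Pure best response to expected payoffs; ties are split evenly."""
    top = max(ev)
    winners = [i for i, v in enumerate(ev) if abs(v - top) < 1e-9]
    return [1.0 / len(winners) if i in winners else 0.0 for i in range(len(ev))]

def ch_predict(row_pay, col_pay, tau=1.5, K=6):
    """Population choice frequencies under the cognitive hierarchy model."""
    nR, nC = len(row_pay), len(row_pay[0])
    f = poisson(tau, K)
    rows, cols = [[1.0 / nR] * nR], [[1.0 / nC] * nC]   # level 0 randomizes
    for k in range(1, K + 1):
        g = [f[h] / sum(f[:k]) for h in range(k)]       # beliefs about lower levels
        col_mix = [sum(g[h] * cols[h][j] for h in range(k)) for j in range(nC)]
        row_mix = [sum(g[h] * rows[h][i] for h in range(k)) for i in range(nR)]
        rows.append(best_reply([sum(row_pay[i][j] * col_mix[j] for j in range(nC))
                                for i in range(nR)]))
        cols.append(best_reply([sum(col_pay[i][j] * row_mix[i] for i in range(nR))
                                for j in range(nC)]))
    norm = sum(f)
    return ([sum(f[k] * rows[k][i] for k in range(K + 1)) / norm for i in range(nR)],
            [sum(f[k] * cols[k][j] for k in range(K + 1)) / norm for j in range(nC)])

# Row chooses T or B; column chooses L or R
row_pay = [[30, 10], [20, 20]]
col_pay = [[20, 18], [20, 18]]
row_freq, col_freq = ch_predict(row_pay, col_pay)
print(f"P(B) = {row_freq[1]:.2f}, P(R) = {col_freq[1]:.2f}")
```

The loop mirrors the verbal recursion above: each level best-responds to the normalized mixture of all lower levels, and the population prediction averages the levels by their Poisson frequencies.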
Psychological games models assume that players are rational in the sense that they maximize their expected utility given their beliefs and the utility functions of the other players. However, in psychological games models, payoffs are allowed to depend directly upon players’ beliefs, their beliefs about their coplayers’ beliefs, and so on, a dependence that is ruled out in standard game theory. The incorporation of belief-dependent motivations makes it possible to capture concerns about intentions, social image, or even emotions in a game-theoretic framework.
For example, in psychological games one person, Conor (C), might be delighted to be surprised by the action of another player, Lexie (L). This is modeled mathematically as C liking when L’s strategy is different from what he (C) expected L to do. Some of these motivations are naturally construed as social emotions, such as guilt (e.g., a person feels bad choosing a strategy which harms another person P who did not expect it, and feels less bad if P did expect it).
Of the two approaches, CH and level-k modeling are easy to use and apply to empirical settings. Psychological games are more general, applying to a broader class of games, but are more difficult to adapt to empirical work.
II. The CH model

The next section gives some motivating empirical examples of the wide scope of games to which the theory has been applied with some success (including two kinds of field data), and of its consistency with data on visual fixation and fMRI.
The CH approach is appealing as a potential cognitive algorithm for three reasons:

1. It appears to fit a lot of experimental data from many different games better than equilibrium predictions do (e.g., Camerer et al., 2004; Crawford, Costa-Gomes, & Iriberri, 2010).

2. The specification of how thinking works and creates choices invites measurement of the thinking process with response times, visual fixations on certain payoffs, and transitions between particular payoffs.

3. The CH approach introduces a concept of skill into behavioral game theory. In the CH model, the players with the highest thinking levels (higher k) and most responsive choices (higher λ) are implicitly more skilled. (In equilibrium models, all players are perfectly and equally skilled.)
Next we will describe several empirical games that illustrate how CH reasoning works.
Example 1: p-beauty contest

A simple game that illustrates apparent CH thinking has come to be called the “p-beauty contest game” (or PBC). The name comes from a famous passage in John Maynard Keynes’s book The General Theory of Employment, Interest and Money. Keynes wrote:
“Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one's judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.”
In the experimental PBC game, people choose numbers from 0 to 100 simultaneously, without talking.v The person whose number is closest to p times the average wins a fixed prize. A typical interesting value of p is 2/3. Then the winner wants to be two-thirds of the way between zero and the average. But of course, the players all know that the other players want to pick 2/3 of the average. In a Nash equilibrium, everyone accurately forecasts that the average will be X, and also chooses a number which is (2/3)X. This implies X = (2/3)X, or X* = 0.
Intuitively, suppose you had no idea what other people would do, so you chose 2/3 of 50 = 33. This is a reasonable choice but it is not an equilibrium, since choosing 33 while anticipating an average of 50 leaves a gap between the expected behavior of others and one’s own likely behavior. So a person who thinks “Hey! I’ll pick 33” should then think (to adhere to the equilibrium math) “Hey! They’ll pick 33” and then pick 22. This process of imagining, choosing, and revising does not stop until everyone expects 0 to be chosen, and also picks 0.
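The iteration just described is easy to mimic numerically. A minimal sketch under the level-k assumption (each level best responds to the level below; the anchor of 50 for level 0 is an assumption):

```python
p = 2 / 3
target = 50.0                  # assumed level-0 anchor: midpoint of [0, 100]
for level in range(1, 6):
    target *= p                # each level shades the previous target by p
    print(f"level {level} picks {target:.1f}")
```

Levels 1 through 5 pick 33.3, 22.2, 14.8, 9.9, and 6.6, marching toward the equilibrium of 0.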
Figure 1 shows some data from this game played with experimental subjects and in newspaper and magazine contests (where large groups play for a single large prize). There is some evidence of “spikes” in numbers corresponding to 50p, 50p², and so on.
Example 2: Betting on selfish rationality of others

Another simple illustration of the CH theory is shown in Table 1. In this game a row player and a column player each choose one of two strategies: T or B (for row) and L or R (for column). The column player always gets 20 for choosing L and 18 for choosing R. The row player gets either 30 or 10 from T, and a sure 20 from B.

If the column player is trying to get the largest payoff, she should always choose L (it guarantees 20 instead of 18). The strategy L is called a “strictly dominant strategy” because it has the highest payoff for every possible choice by the row player. The row player’s choice is a little trickier. She can get 20 for sure by choosing B; choosing T is taking a social gamble. If she is confident the column player will try to get 20 and choose L, she should infer that P(L) is high. Then the expected value of T is high and she should choose T. However, this inference is essentially a bet on the selfish rationality of the other player. The row player might think the column player will make a mistake, or is spiteful (and prefers the (10,18) cell, where she gets less absolute payoff but a higher relative payoff compared to the row player).

There is a crucial cognitive difference between playing L (the right strategy if you want the most money) and playing T (the right strategy if you are willing to bet that other players are very likely to choose L because they want to earn the most money).
What does the CH approach predict here? Suppose level 0 players randomize between the two strategies. If τ=1.5, then f(0|τ=1.5)=.22. Then half of the level 0 players will choose column R and row B, which is .11 (11%) of the whole group. Level 1 players always choose dominant strategies, so they pick column L (in fact, all higher-level column players do too). Since level 1 row players think L and R choices are equally likely, their expected payoff from T is 30(.5)+10(.5)=20, which is the same as the B payoff; so we assume they randomize equally between T and B. Since f(1|τ=1.5)=.33, this means the unconditional total frequency of B play for the first two levels is .11+.33/2=.27.

Level 2 row players think the relative proportions of lower types are g2(0)=.22/(.22+.33)=.40 and g2(1)=.33/(.22+.33)=.60. They also think the level 0’s play either L or R, but the level 1’s choose L for sure. Together, this implies that they believe there is a .20 chance (=.5(.40)+0(.60)) that the other person will choose R and an .80 chance they will choose L. With these odds, they prefer to choose T. That is, they are sufficiently confident that the other player will “figure it out” and choose the self-serving L that T becomes a good bet to yield the higher payoff of 30.
Putting together all the frequencies f(k) and choice percentages, the overall expected proportion of column R play is .11 and of row B play is .27.
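These hand calculations are easy to verify (a short Python check; note that using exact Poisson weights gives .28 for B play rather than the .27 obtained from the rounded inputs .22 and .33):

```python
import math

tau = 1.5
f = [math.exp(-tau) * tau**k / math.factorial(k) for k in range(2)]

col_R = 0.5 * f[0]                      # only random level 0's choose R
row_B = 0.5 * f[0] + 0.5 * f[1]         # random level 0's plus indifferent level 1's
g2 = [f[0] / sum(f), f[1] / sum(f)]     # level 2's beliefs about lower levels
p_R = 0.5 * g2[0]                       # chance level 2 assigns to facing R
print(round(col_R, 2), round(row_B, 2), round(p_R, 2))
```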
Note that these proportions go in the direction of the Nash prediction (which is zero for both), but account more precisely for the chance of mistakes and misperceptions. Importantly, choices of R should be less common than choices of B: R choices are just careless, while B choices might be careless or might be sensible responses to thinking there are a lot of careless players.
Table 1 shows that some (unpublished) data from Caltech undergraduate classroom games (played for money) over three years are generally close to the CH prediction. The R and B choice frequencies are small (as both Nash and CH predict), but B is more common than R.

[Insert Table 1 about here]
One potential advantage of CH modeling is that the same general process could apply to games with different economic structures. In both of the two examples above, a Nash equilibrium choice can be derived by repeated application of the principle of eliminating “weakly dominated” strategies (i.e., strategies which are never better than another, dominating, strategy for all choices by other people, and are actually worse for some choices by others). Hence, these are called “dominance solvable” games. Indeed, the beauty-contest example is among those that motivated CH modeling in the first place, since each step of reasoning corresponds to one more step in the deletion of dominated strategies.
Here is an entirely different type of game, called “asymmetric matching pennies”. In this game the row player earns points if the choices match ((H,H) or (T,T)). The column player wins if they mismatch. There is no pair of strategies that are best responses to each other, so the equilibrium requires choosing a probabilistic “mixture” of strategies. Here, equilibrium analysis makes a bizarre prediction: The row player should choose H and T equally often, while the column player should shy away from H (as if preventing Row from getting the bigger payoff of 2) and choose T 2/3 of the time. (Even more strangely: If the 2 payoff is x>1 in general, then the mixture is always 50-50 for the row player, and is x/(x+1) on T for the column player! That is, in theory, changing the payoff of 2 only affects the column player, and does not affect the row player who might earn that payoff.)
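The indifference conditions behind that mixture take two lines of algebra, sketched here (assuming, as the text implies, that the (H,H) match pays the row player x, the (T,T) match pays 1, and mismatches pay the column player; the function name is ours):

```python
def mixed_nash(x):
    """Mixed equilibrium of asymmetric matching pennies with row match payoffs x and 1."""
    # Column makes Row indifferent between H and T:  x * q_H = 1 * (1 - q_H)
    col_T = x / (x + 1)
    # Row makes Column indifferent between her two mismatches: always 50-50
    row_H = 0.5
    return row_H, col_T

row_H, col_T = mixed_nash(2)
print(row_H, round(col_T, 2))   # 0.5 0.67 -- only the column mixture moves with x
```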
The CH approach works differently.vi The lower-level row players (levels 1-2) are attracted to the possible payoff of 2, and choose H. However, the low-level column players switch to T, and higher-level row players (levels 3-4) figure this out and switch to T. The predicted mixture (for τ=1.5) is actually rather close to the Nash prediction for the column player (P(T)=.74 compared to Nash .67), since the higher-level types choose T more and not H. And indeed, data from column player choices in experiments are close to both predictions.
The CH mixture of row play, averaged across type frequencies, is P(H)=.68, close to the data average of .72. Thus, the reasonable part of the Nash prediction (lopsided play of T and H by column players) is reproduced by CH and is consistent with the data. The unreasonable part of the Nash prediction (that row players choose H and T equally often) is not reproduced, and the differing CH prediction is more empirically accurate.
[Insert Table 2 about here]
Entry games

In simple “entry” games, N players simultaneously choose whether or not to enter a market with demand C. If they stay out, they earn a fixed payoff ($.50). If they enter, then all the entrants earn $1 if there are C or fewer entrants, and earn 0 if there are more than C entrants. It is easy to see that the equilibrium pattern of play is for exactly C people to enter; then they each earn $1 and those who stay out earn $.50. If one of the stayer-outers switched and entered, she would tip the market and cause the C+1 entrants to earn 0. Since this would lower her own payoff, she will stay put. So the pattern is an equilibrium. However, there is a problem remaining (it’s a common one in game theory): How does the group collectively decide, without talking, which of the C people enter and earn $1? Everybody would like to be in the select group of C entrants if they can; but if too many enter they all suffer.vii This is a familiar problem of “coordinating” to reach one of many different equilibria.
The first experiments on this type of entry game were done by a team of economists (James Brander and Richard Thaler) and a psychologist, Daniel Kahneman. They were never fully published but were described in a chapter by Kahneman (1988). Kahneman says they were amazed how close the number of total entrants was to the announced demand C (which varied over trials). “To a psychologist”, he wrote, “it looked like magic”. Since then, a couple of dozen studies have explored variants of these games and reported similar degrees of coordination (e.g., Duffy & Hopkins, 2005). Let’s see if cognitive hierarchy can produce the magic.
Suppose level 0 players enter and stay out equally often, and ignore C. If level 1 players anticipate this, they will think there are too many entrants for C<(N/2) and too few if C>(N/2)−1. Level 1 players will therefore enter at high values of C. Notice that level 1 players are helping the group move toward the equilibrium. Level 1’s undo the damage done by the level 0’s, who over-enter at low C, by staying out, which reduces the overall entry rate for low C. They also exploit the opportunity that remains for high C, by entering, which increases the overall entry rate. Combining the two levels, there will be less entry at low C and more entry at high C (it will look like a step function; see Camerer et al., 2004). Furthermore, it turns out that adding higher-level thinkers continues to push the population profile toward an overall entry level that is close to C.
The theory makes three sharp predictions: (1) Plotting entry rates (as a % of N) against C/N should yield a regressive line which crosses at (.5, .5). (2) Entry rates should be too high for C/N<.5 and too low for C/N>.5. (3) Entry should be increasing in C, and relatively close to it, even without any learning at all (e.g., in the first period of the game).
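A rough sketch of how the hierarchy produces this pattern, under assumptions of ours: N=12 players, level 0 enters half the time regardless of C, and each higher level enters only if the expected number of lower-level entrants leaves room (a deterministic stand-in for the full expected-payoff comparison):

```python
import math

def ch_entry_rates(N=12, tau=1.25, K=8):
    """Predicted entry rate for each demand level C under a simplified CH rule."""
    f = [math.exp(-tau) * tau**k / math.factorial(k) for k in range(K + 1)]
    rates = {}
    for C in range(1, N):
        entry = [0.5]                                   # level 0 ignores C
        for k in range(1, K + 1):
            g = [f[h] / sum(f[:k]) for h in range(k)]   # beliefs about lower levels
            others = (N - 1) * sum(g[h] * entry[h] for h in range(k))
            entry.append(1.0 if others + 1 <= C else 0.0)  # enter only if room remains
        rates[C] = sum(f[k] * entry[k] for k in range(K + 1)) / sum(f)
    return rates

for C, r in ch_entry_rates().items():
    print(f"C={C:2d}  entry rate {r:.2f}  (C/N={C/12:.2f})")
```

In this sketch entry rises roughly with C, approximating the coordination the theory predicts; a full treatment would use the stochastic payoff comparison in Camerer et al. (2004).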
Figure 3 illustrates a CH model prediction with τ=1.25, single-period data with no feedback from Camerer et al. (2004), and the equilibrium (a 45-degree line). Except for some nonmonotonic dips in the experimental data (easily accounted for by sampling error), the predictions are roughly accurate.
The point of this example is that approximate equilibration can be produced, as if by “magic”, purely from cognitive hierarchy thinking, without any learning or communication needed. These data are not solid proof that cognitive hierarchy reasoning is occurring in this game, but they do show how, in principle, the cognitive hierarchy approach can explain both deviations from Nash equilibrium (in the beauty contest, betting, and matching pennies games that were described above) and surprising conformity to Nash equilibrium (in this entry game).
Private information

The trickiest class of games we will discuss, briefly, involves “private information”. The standard modeling approach is to assume there is a hidden variable, X, which has a possible distribution p(X) that is commonly known to both players.viii The informed player I knows the exact value x drawn from the distribution, and both players know that only I knows the value. For example, in card games like poker, players know the possible set of cards their opponent might have, and know that the opponent knows exactly what the cards are.
The cognitive challenge that is special to private-information games is to infer what a player’s actions, whether actually taken or hypothetical, might reveal about their information. Various experimental and field data indicate that some players are not very good at inferring hidden information from observed actions (or at anticipating the inferable information).
A simple and powerful example is the “acquire-a-company” problem introduced in economics by Akerlof (1970) and studied empirically by Bazerman and Samuelson (1983). In this game, a privately held company has a value which is perceived by outsiders to be uniformly distributed from 0 to 100 (i.e., all values in that range are equally likely). The company knows its exact value, and outsiders know that the company knows (due to the common prior assumption).
A bidder can operate the company much better, so that whatever the hidden value V is, it is worth 1.5V to the bidder. The bidder makes a take-it-or-leave-it (Boulwarean) offer of a price P. The bargaining could hardly be simpler: The company sells if the price P is above the “hidden” value V (which the bidder knows that the company knows) and keeps the company otherwise. The bidder wants to maximize the expected “surplus”: the gap between the average of the values 1.5V they are likely to receive and the price. What would you bid?
The optimal bid is surprising, though the algebra behind the answer is not too hard. The chance of getting the company is the chance that V is less than P, which is P/100 (e.g., if P=60 then 60% of the time the value is below P and the company changes hands). If the company is sold, then the value must be below P, so the expected value of the company is the average of the values in the interval [0, P], which is P/2. The net expected value is therefore (P/100) times the expected profit if sold, which is 1.5(P/2) − P = −P/4. There is no way to make a profit on average. The optimal bid is zero!
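The calculation generalizes to any bid, as a short Python check shows (the function name is ours):

```python
def expected_profit(P):
    """Expected profit from bidding P when V ~ Uniform[0, 100], the company
    is worth 1.5 * V to the bidder, and the seller accepts only if V < P."""
    p_sale = P / 100             # probability the offer is accepted
    mean_v_if_sold = P / 2       # E[V | V < P] for a uniform value
    return p_sale * (1.5 * mean_v_if_sold - P)   # = -P**2 / 400

for P in (0, 25, 50, 75, 100):
    print(f"bid {P:3}: expected profit {expected_profit(P):7.2f}")
```

Every positive bid loses money on average (the expected profit is −P²/400), so the optimal bid is indeed zero.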
However, typical distributions of bids are between 50 and 75. This results in a “winner’s curse”, in which bidders “win” the company but fail to account for the fact that they only won because the company had a low value. This phenomenon was first observed in field studies of oil-lease bidding (Capen et al., 1971) and has been shown in many lab and field data sets since then.
The general principle that people have a hard time guessing the implications of private information for the actions others will take shows up in many economic settings (a kind of strategic naivete; e.g., Brocas et al., 2009).
The CH approach can easily explain strategic naivete as a consequence of level 1 behavior. If level 1 players think that level 0 players’ choices do not depend on private information, then they will ignore the link between choices and information.
Eyetracking evidence

A potential advantage of cognitive hierarchy approaches is that cognitive measures associated with the algorithmic steps players are assumed to use could, in theory, be collected along with choices. For psychologists this is obvious but, amazingly, it is a rather radical position in economics and most areas of game theory!
The easiest and cheapest method is to record what information people are looking at as they play games. Video-based eyetracking measures visual fixations, typically sampling every 5-50 msec. Cameras look into the eye and adjust for head motion to estimate where the eyes are looking (usually with excellent precision). Most eyetrackers record pupil dilation as well, which is useful as a measure of cognitive difficulty or arousal.
Since game theory is about interactions among two or more people, it is especially useful to have a recording technology that scales up to enable recording of several people at the same time. One widely used method is called “Mouselab”. In Mouselab, information that is used in strategic computations, in theory, is hidden in labeled boxes, which “open up” when a mouse is moved into them.ix
Several studies have shown that lookup patterns often correspond roughly, and sometimes quite closely, to different numbers of steps of thinking. We’ll present one example (see also Crawford et al., 2010).
Example 1: Alternating-offer bargaining

A popular approach to modeling bargaining is to assume that players bargain over a known sum of joint gain (sometimes called “surplus”, like the valuable gap between the highest price a buyer will pay and the lowest price a seller will accept). However, as time passes the amount of joint gain “shrinks” due to impatience or other costs. Players alternate making offers back and forth (Rubinstein, 1982).
A three-period version of this game has been studied in many experiments. The amount divided in the first round is $5, which then shrinks to $2.50, $1.25, and 0 in later rounds (the last round is an “ultimatum game”). If players are selfish and maximize their own payoffs, and believe that others are too, the “subgame perfect” equilibrium (SPE) offer by the first mover (player 1) to player 2 should be $1.25. However, deriving this offer requires either some process of learning or communication, or an analysis using “backward induction” to deduce what offers would be made and accepted in all future rounds, then working back to the first round.
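The backward-induction logic can be sketched in a few lines of Python (assuming purely selfish players and the pie sizes above; the function name is ours):

```python
def spe_offers(pies):
    """Backward induction for alternating-offer bargaining with selfish players.
    pies[t] is the amount divided if agreement is reached in round t; the
    responder in round t is the proposer in round t + 1."""
    proposer_share = 0.0     # after the last round, a would-be proposer gets nothing
    offers = []
    for pie in reversed(pies):
        offer = proposer_share          # offer the responder her continuation value
        proposer_share = pie - offer    # the proposer keeps the rest
        offers.append(offer)
    offers.reverse()
    return offers, proposer_share

offers, share = spe_offers([5.00, 2.50, 1.25])
print(offers, share)   # [1.25, 1.25, 0.0] 3.75: player 1 offers $1.25 and keeps $3.75
```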
Early experiments showed conflicting results in this game. Neelin et al. (1988) found that average offers were around $2, and many were equal splits of $2.50 each. Earlier, Binmore et al. (1985) found similar results in the first round of choices, but also found that a small amount of experience with “role reversal” (player 2’s switching to the player 1 first-offer position) moved offers sharply toward the SPE offer of $1.25.
Other evidence from simpler ultimatum games showed that people seem to care about fairness, and are willing to reject a $2 offer out of $10 about half the time, to punish a bargaining partner they think has been unfair and greedy (Camerer, 2003). So are the offers around $2 due to correct anticipation of fairness-influenced behavior, or to limited understanding of how the future rounds of bargaining might shape reactions in the first round?
To find out, Camerer et al. (1993) and Johnson et al. (2002) did the same type of experiment, but hid the amounts being bargained over in each round in boxes that could be opened, or “looked up”, by moving a mouse into those boxes (an impoverished experimenter’s version of video-based eyetracking). They found that most people who offered amounts between the equal split of $2.50 and the SPE of $1.25 were not looking ahead at possible future payoffs as backward induction requires. In fact, in 10-20% of the trials the round 2 and round 3 boxes were not opened at all!
Figure
4
illustrates
the
basic
results.
The
top
rectangular
“icon
graphs”
visually
represent
the
relative
amounts
of
time
bargainers
spent
looking
at
each
payoff
box
(the
shaded
area)
and
numbers
of
different
lookups
(rectangle
width).
The
bold
arrows
indicate
the
relative
number
of
transitions
from
one
box
to
the
next
(with
averages
of
less
than
one
transition
omitted).
Each
column
represents
a
group
of
trials
that
are
pre‐classified
by
lookup
patterns.
The
first
column
(N=129
trials)
averages
people
who
looked
more
often
at
the
period
1
box
than
at
the
future
period
boxes
(indicating
“level‐0”
planning).
The
second
column
(N=84)
indicates
people
who
looked
longer
at
the
second
box
than
the
first
and
third
(indicating
“level‐1”
planning
with
substantial
focus
one
step
ahead).
The
third
column
(N=27)
indicates
the
smaller
number
of
“equilibrium”
trials
in
which
the
third
box
is
looked
at
the
most.
Note
that
in
the
level‐1 and equilibrium
trials,
there
are
also
many
transitions
between
boxes
one
and
two,
and
boxes
two
and
three,
respectively.
Finally,
the
fourth
and
last
column
shows
people
who
were
briefly
trained
in
backward
induction,
then
played
a
computerized
opponent
that
(they
were
told)
planned
ahead,
acted
selfishly
and
expected
the
same
from
its
opponents.
The
main
pattern
to
notice
is
that
offer
distributions
(shown
at
the
bottom
of
each
column)
shift
from
right
(fairer,
indicated
by
the
right
dotted
line)
to
left
(closer
to
selfish
SPE,
the
left
dotted
line)
as
players
literally
look
ahead
more.
The
link
between
lookups
and
higher‐than‐predicted
offers
clearly
shows
that
offers
above
the
SPE,
in
the
direction
of
equal
splits
of
the
first
round
amount,
are
partly
due
to
limits
on
attention
and
computation
about
future
values.
Even
in
the
few
equilibrium
trials,
offers
are
bimodal,
clustered
around
$1.25
and
$2.20.
However,
offers
are
rather
tightly
clustered
around
the
SPE
prediction
of
$1.25
in
the
“trained”
condition.
This
result
indicates,
importantly,
that
backward
induction
is
not
actually
that
cognitively
challenging
to
execute
(after
instruction,
they
can
easily
do
it),
but
instead
is
an
unnatural
heuristic
that
does
not
readily
spring
to
the
minds
of
even
analytical
college
students.
fMRI
evidence
Several
neural
studies
have
explored
which
brain
regions
are
most
active
in
different
types
of
strategic
thinking.
The
earliest
studies
showed
differential
activation
when
playing
a
game
against
a
computer
compared
to
a
randomized
opponent
(e.g.,
Gallagher
et
al.,
2002;
McCabe
et
al.,
2001;
Coricelli
&
Nagel,
2009).
One
of
the
cleanest
results,
and
an
exemplar
of
the
composite
picture
emerging
from
other
studies,
is
from
Coricelli
&
Nagel’s
(2009)
study
of
the
beauty
contest
game.
Their
subjects
played
13
different
games
with
different
target
multipliers
p
(e.g.,
p=2/3,
1,
3/2
etc.).
On
each
trial,
subjects
chose
numbers
in
the
interval
[0,100]
playing
against
either
human
subjects
or
against
a
random
computer
opponent.
Using
behavioral
choices,
most
subjects
can
be
classified
into
either
level
1
(n=10;
choosing
p
times
50)
or
level
2
(n=7;
choosing
p
times
p
times
50,
as
if
anticipating
the
play
of
level
1
opponents).
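These level classifications are mechanical to compute. A minimal sketch, assuming level 0 anchors on the midpoint 50 of [0,100] and that choices are capped at the interval endpoints:

```python
def level_k_choice(p, k, anchor=50.0, upper=100.0):
    """Choice of a level-k player in the p-beauty contest on [0, upper],
    assuming level 0 picks the anchor (the midpoint, 50)."""
    x = anchor
    for _ in range(k):
        # Each level best-responds to the level just below it.
        x = min(upper, p * x)
    return x
```

For p=2/3 this gives 33.3 for level 1 and 22.2 for level 2, the benchmark choices used to classify Coricelli & Nagel's subjects.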
Figure
7
shows
brain
areas
that
were
differentially
active
when
playing
human
opponents
compared
to
computer
opponents,
and
in
which
that
human‐computer
differential
is
larger
in
level
2
players
compared
to
level
1
players.
The
crucial
areas
are
bilateral
temporo‐parietal
junction
(TPJ),
MPFC/paracingulate
and
VMPFC.x
These
regions
are
thought
to
be
part
of
a
general
mentalizing
circuit,
along
with
posterior
cingulate
regions
(Amodio
&
Frith,
2006).
In
recent
studies,
at
least
four
areas
are
reliably
activated
in
higher‐level
strategic
thinking:
dorsomedial
prefrontal
cortex
(DMPFC),
precuneus/posterior
cingulate,
insula,
and
dorsolateral
prefrontal
cortex
(DLPFC).
Next
we
summarize
some
of
the
simplest
results.
DMPFC
activity
is
evident
in
Figure
7.
It
is
also
active
in
response
to
nonequilibrium
choices
(where
subjects’
guesses
about
what
others
will
do
are
wrong;
Bhatt
&
Camerer,
2005),
and
uncertainty
about
strategic
sophistication
of
an
opponent
(Yoshida
et
al.,
2009).
In
addition,
DMPFC
activity
is
related
to
the
“influence
value”
of
current
choices
on
future
rewards,
filtered
through
the
effect
of
a
person’s
future
choices
on
an
opponent’s
future
choices
(Hampton,
Bossaerts,
&
O’Doherty,
2009;
a/k/a
“strategic
teaching”
Camerer,
Ho
and
Chong
2002).
Amodio
&
Frith
(2006)
suggest
an
intriguing
hypothesis:
that
mentalizing activations for simpler through more complex action‐value computations are
differentially
located
along
a
posterior‐to‐anterior
(back‐to‐front)
gradient
in
DMPFC.
Indeed,
the
latter
three
studies
show
activation
roughly
in
a
posterior‐to‐anterior
gradient
(Talairach
y=36,
48,
63;
and
y=48
in
Coricelli
&
Nagel,
2009)
that
corresponds
to
increasing
complexity.
Activity
in
the
precuneus
(adjacent
to
posterior
cingulate)
is
associated
with
economic
performance
in
games
(“strategic
IQ”;
Bhatt
&
Camerer
2005)
and
difficulty
of
strategic
calculations
(Kuo
et
al.
2009).
Precuneus
is
a
busy
region,
with
reciprocal
connections
to
MPFC,
cingulate,
and
DLPFC.
It
is
also
activated
by
a
wide
variety
of
higher‐order
cognitions,
including
perspective‐taking
and
attentional
control
(as
well
as
the
“default
network”
active
at
rest;
see
Bhatt
&
Camerer,
in
press).
It
is
likely
that
precuneus
is
not
activated
in
strategic
thinking,
per
se,
but
only
in
special
types
of
thinking
which
require
taking
unusual
perspectives
(e.g.,
thinking
about
what
other
people
will
do)
and
shifting
mental
attention
back
and
forth.
The
insula
is
known
to
be
involved
in
interoceptive
integration
of
bodily
signals
and
cognition.
Disgust,
physical
pain,
empathy
for
others
in
pain,
and
pain
from
social
rejection
activate
insula
(Eisenberger
et
al.
2003,
Kross
et
al.
2011).
Financial
uncertainty
(Preuschoff,
Quartz,
Bossaerts,
2008),
interpersonal
unfairness
(Sanfey
et
al.,
2003;
Hsu
et
al.,
2008),
avoidance
of
guilt
in
trust
games
(Chang
et
al.,
2011),
and
“coaxing”
or
second‐try
signals
in
trust
games
also
activate
insula.
In
strategic
studies,
Bhatt
and
Camerer
(2005)
found
that
higher
insula
activity
is
associated
with
lower
strategic
IQ
(performance).
The
DLPFC
is
involved
in
working
memory,
goal
maintenance,
and
inhibition
of
automatic
prepotent
responses.
Differential
activity
there
is
also
associated
with
the
level
of
strategic
thinking
(Yoshida
et
al.,
2009)
with
stronger
response
to
human
opponents
in
higher‐level
strategic
thinkers
(Coricelli
&
Nagel,
2009),
and
with
maintaining
level‐2
deception
in
bargaining
games
(Bhatt
et
al.,
2009).xi
Do
thinking
steps
vary
with
people
or
games?
To
what
extent
do
steps
of
thinking
vary
systematically
across
people
or
game
structures?
From
a
cognitive
point
of
view,
it
is
likely
that
there
is
some
intrapersonal
stability
because
of
differences
in
working
memory,
strategic
savvy,
exposure
to
game
theory,
experience
in
sports
betting
or
poker,
task
motivation,
etc.
However,
it
is
also
likely
that
there
are
differences
in
the
degree
of
sophistication
(measured
by
τ)
across
games
because
of
an
interaction
between
game
complexity
and
working
memory,
or
how
well
the
surface
game
structure
maps
onto
evolutionarily
familiar
gamesxii.
To
date,
these
sources
of
level
differences
have
not
been
explored
very
much.
Chong,
Ho
and
Camerer
(2005)
note
some
educational
differences
(Caltech
students
are
estimated
to
do
.5
steps
of
thinking
more
than
subjects
from
a
nearby
community
college)
and
an
absence
of
a
gender
effect.
Other
studies
have
shown
modest
associations
(r=.3)
between
strategic
levels
and
working
memory
(digit
span;
Devetag
&
Warglien,
2003)
and
the
“Reading the Mind in the Eyes” test
of
emotion
detection
(Georganas,
Healy,
&
Weber
2010).
Many
papers
have
reported
some
degree
of
cross‐game
type
stability
in
level
classification.
Studies
that
compare
a single choice in one game with a single choice in a different game
report
low
stability
(Georganas
et
al.,
2010;
Burchardi
&
Penczynski
2010).
However,
as
is
well‐known
in
personality
psychology
and
psychometrics,
intrapersonal
reliability
typically
increases
with
the
number
of
items
used
to
construct
a
scale.
Other
studies
using
more
game
choices
to
classify
report
much
higher
correlations
(comparable
to
Big
5
personality
measures)
(Bhui
&
Camerer,
2011).
As
one
illustration
of
potential
type‐stability,
Figure
5
below
shows
estimated
types
for
individuals
using
the
first
11
games
in
a
22‐game
series
(x‐axis)
and
types
for
the
same
individuals
using
the
last
11
games.
The
correlation
is
quite
high
(r=.61).
There
is
also
a
slight
upward
drift
across
the
games
(the
average
level
is
higher
in
the
last
11
games
compared
to
the
first),
consistent
with
a
transfer
or
practice
effect,
even
though
there
is
no
feedback
during
the
22
games
(see
also
Weber,
2003).
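The split-half reliability underlying Figure 5 is simply a Pearson correlation between the two sets of estimated types. A minimal sketch (the input vectors here are hypothetical; the actual estimates come from the 22-game series):

```python
def pearson_r(x, y):
    """Pearson correlation between types estimated from the first and
    last halves of a game series (a split-half reliability measure)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

As the psychometric point above suggests, correlations computed from 11-game halves (r=.61 here) are much higher than those from single-game comparisons.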
Field
data
Since
controlled
experimentation
came
late
to
economics
(c.
1960)
compared
to
psychology,
there
is
a
long‐standing
skepticism
about
whether
theories
that
work
in
simple
lab
settings
generalize
to
naturally‐occurring
economic
activity.
Five
studies
have
applied
CH
or
level‐k
modeling
to
auctions
(Gillen,
2009),
strategic
thinking
in
managerial
choices
(Goldfarb
and
Yang,
2009;
Goldfarb
and
Xiao,
in
press),
and
box
office
reaction
when
movies
are
not
shown
to
critics
before
release
(Brown,
Camerer
&
Lovallo,
2011).
One
study
is
described
here
as
an
example
(Ostling
et
al.,
2011).
In
2007
the
Swedish
Lottery
created
a
game
in
which
people
pay
1
euro
to
enter
a
lottery.
Each
paying
entrant
chooses
an
integer
1‐99,999.
The
lowest
unique
positive
integer
(hence,
the
acronym
LUPI)
wins
a
large
prize.
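The winning rule is simple to state in code. A minimal sketch of the payout rule (the helper returns None in the rare case that no chosen number is unique):

```python
from collections import Counter

def lupi_winner(choices):
    """Lowest unique positive integer: the winning number is the
    smallest entry chosen by exactly one entrant."""
    counts = Counter(choices)
    unique = [n for n, c in counts.items() if c == 1]
    return min(unique) if unique else None
```

The strategic subtlety, of course, is not in the rule but in anticipating how nearly 50,000 other entrants per day will play it.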
The
symmetric
equilibrium
is
a
probabilistic
profile
of
how
often
different
numbers
are
chosen
(a
“mixed”
equilibrium).
The
lowest
numbers
are
always
chosen
more
often
(e.g.,
1
is
chosen
most
often);
the
rate
of
decline
in
the
frequency
of
choice
is
accelerating
up
to
a
sharp
inflection
point
(number
5513);
and
the
rate
of
decline
slows
down
after
5513.
Figure
6
shows
the
data
from
only
the
lowest
10%
of
the
number
range,
from
1‐10,000
(higher
number
choices
are
rare,
as
the
theory
predicts).
The
predicted
Nash
equilibrium
is
shown
by
a
dotted
line—a
flat
“shelf”
of
choice
probability
from
1
to
5513,
then
a
sharp
drop.
A
fitted
version
of
the
CH
model
is
indicated
by
the
solid
line.
CH
can
explain
the
large
frequency
of
low
number
choices
(below
1500),
since
these
correspond
to
low
levels
of
strategic
thinking
(i.e.,
people
don’t
realize
everyone
else
is
choosing
low
numbers
too).
Since
level‐0
types
randomize,
their
behavior
produces
too
many
high
numbers
(above
5000).
Since
the
lowest
and
highest
numbers
are
chosen
too
often
according
to
CH,
compared
to
the
equilibrium
mixture,
CH
also
implies
a
gap
between
predicted
and
actual
choices
in
the
range
2500‐5000.
This
basic
pattern
was
replicated
in
a
lab
experiment
with
a
similar
structure.
While
there
are
clear
deviations
from
Nash
equilibrium,
consistent
with
evidence
of
limited
strategic
thinking,
in
our
view
the
Nash
theory
prediction
is
not
bad
considering
that it uses
no
free
parameters,
and
comes
from
an
equation
which
is
elegant
in
structure
but
difficult
to
derive
and
solve.
The
LUPI
game
was
played
in
Sweden
for
49
days
in
a
row,
and
results
were
broadcast
on
a
nightly
TV
show.
Analysis
indicates
an
imitate‐the‐winner
fictive
learning
process,
since
choices
on
one
day
move toward a 600‐number range
around
the
previous
day’s
winner.
The
result
of
this
imitation
is
that
every
statistical
feature
of
the
numbers
chosen
moves
toward
the
equilibrium
across
the
seven
weeks.
For
example,
in
the
last
week
the
average
number
is
2484,
within
4%
of
the
predicted
value
of
2595.
III
Psychological
games
In
many
strategic
interactions,
our
own
beliefs
or
beliefs
of
other
people
seem
to
influence
how
we
value
consequences.
For
example,
surprising
a
person
with
a
wonderful
gift
that
is
perfect
for
them
is
more
fun
for
everyone
than
if
the
person
had
asked
for
it.
Some
of
that
pleasure
comes
from
the
surprise
itself.
This
type
of
pattern
can
be
modeled
as
a
“psychological
game”
(Geanakoplos,
Pearce
&
Stacchetti,
1989;
Battigalli
&
Dufwenberg,
2009).
Psychological games (PGs)
are
an
extension
of
standard
games
in
which
the
utility
evaluations
of
outcomes
can
depend
on
beliefs
about
what
was
thought
to
be
likely
to
happen
(as
well
as
typical
material
consequences).
This
approach
requires
thinking
and
reasoning
since
the
belief
is
derived
from
analysis
of
the
other
person’s
motives.
Together
these
papers
provide
tools
for
incorporating
motivations
such
as
intentions,
social
norms,
and
emotions
into
game‐theoretic
models.
Emotions
are
an
important
belief‐dependent
motivation.
Anxiety,
disappointment,
elation,
frustration,
guilt,
joy,
regret,
and
shame,
among
other
emotions,
can
all
be
conceived
of
as
belief‐dependent
incentives
or
motivations
and
incorporated
into
models
of
behavior
using
tools
from
psychological
game
theory.
One
example
is
guilt:
Baumeister,
Stillwell,
and
Heatherton
(1994)
write:
“If
people
feel
guilty
for
hurting
their
partners…
and
for
failing
to
live
up
to
their
expectations,
they
will
alter
their
behavior
(to
avoid
guilt).”
Battigalli
and
Dufwenberg
(2007)
operationalize
the
notion
that
people
will
anticipate
and
avoid
guilt
in
their
model
of
guilt
aversion.
In
their
model,
players
derive
positive
utility
from
material
payoffs
and
negative
utility
from
guilt.
Players
feel
guilty
if
their
behavior
disappoints
a
co‐player
relative
to
his
expectations.xiii
Consider
Figure
8,
which
illustrates
a
simple
trust
game.
Player
1
may
choose
either
“Trust”
or
“Don’t.”
In
the
first
case
player 2
gets
the
move,
while
after
a
choice
of
“Don’t”
the
game
ends
and
each
player
gets
payoff
1.
If
player
2
gets
the
move,
she
chooses
between
“Grab”
and
“Share.”
The payoffs to Grab are 0 for player 1 and 4 for player 2, while Share yields each player a payoff of 2.
The
subgame
perfect
equilibrium
of
this
game
for
selfish
players
is
for
Player
2
to
choose
Grab
if
she
gets
the
move
since
it
results
in
a
higher
payoff
for
her
than
choosing
Share.
Player
1
anticipates
this
behavior
and
chooses
Don’t
to
avoid
receiving
0.
Both
players
receive
a
payoff
of
1,
which
is
inefficient.
Now
suppose
that
Player
2
is
guilt
averse.
Then
her
utility
depends
not
only
on
her
material
payoff,
but
also
on
how
much
she
“lets
down”
player
1
relative
to
his
expectations.
Let
p
be
the
probability
that
player
1
assigns
to
“Share.”
Let p’ represent player 2’s (point) belief regarding p, and suppose that player 2’s payoff from Grab is then 4 − 2θp’, where θ represents player 2’s sensitivity to guilt and 2p’ is the payoff that player 2 believes player 1 expected to receive.
If
player
1
chooses
Trust
it
must
be
that
p
is
greater
than
½; otherwise
player
1
would
choose
Don’t.
Then
if
θ≥
2,
player
2
will
choose
Share
to
avoid
the
guilt
from
letting
down
Player
1.
Knowing
this,
player
1
will
choose
Trust.
In
this
outcome
both
players
receive
2
(instead
of
1
in
the
selfish
subgame
perfect
equilibrium),
illustrating
how
guilt
aversion
can
foster
trust
and
cooperation
where
selfish
behavior
leads
to
inefficiency.
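Player 2's best-response logic can be written out directly. A minimal sketch, assuming (as one parameterization of Battigalli & Dufwenberg's model) that the guilt term is θ times the let-down 2p′, the payoff player 2 believes player 1 expected but would not receive after Grab:

```python
def player2_prefers_share(theta, p_prime):
    """Guilt-averse player 2 in the Figure 8 trust game.
    Share yields a material payoff of 2; Grab yields 4 minus the guilt
    term theta * (2 * p_prime), where 2 * p_prime is the payoff player 2
    believes player 1 expected from trusting."""
    u_share = 2.0
    u_grab = 4.0 - theta * 2.0 * p_prime
    return u_share >= u_grab
```

With p′ > ½ (implied by player 1's choice of Trust) and θ ≥ 2, the guilt term exceeds 2, so Share is preferred, reproducing the cooperative outcome described above.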
A
number
of
experiments
have
studied
guilt
aversion
in
the
context
of
trust
games,
including
Dufwenberg
&
Gneezy
(2000),
Charness
&
Dufwenberg
(2006,
2011),
and
Reuben
et
al.
(2009).
All
of
these
papers
find
evidence
that
a
desire
to
avoid
guilt
motivates
players
to
behave
unselfishly
by
reciprocating
trust
(for
a
contrary
opinion
see
Ellingsen
et
al.,
2010).
Recent
fMRI
evidence
(Chang
et
al.,
in
press)
suggests
that
avoiding
guilt
in
trust
games
is
associated
with
increased
activity
in
the
anterior
insula.
Psychological
game
theory
also
may
be
employed
to
model
other
social
emotions
such
as
shame
(Tadelis,
2008)
or
anger
(Smith,
2009)
or
to
import
existing
models
of
emotions
such
as
disappointment,
elation,
regret,
and
rejoicing
(Bell, 1982, 1985; Loomes & Sugden, 1982, 1986)
into
games.xiv
Battigalli
&
Dufwenberg
(2009)
provide
some
examples
of
these
applications.
These
models
are
just
a
glimpse
of
the
potential
applications
of
psychological
game
theory
to
the
interaction
of
emotion
and
cognition
in
social
interactions.
Another
important
application
of
psychological
game
theory
is to sociological
concerns,
such
as
reciprocity
(which
may
be
driven
by
emotions).
In
an
important
work,
Rabin
(1993)
models
reciprocity
via
functions
that
capture
a
player’s
“kindness”
to
his co‐player
and
the
other
player’s
kindness
to
him.
These
kindness
functions
depend
on
the
players’
beliefs
regarding
each
other’s
actions,
and
their
beliefs
about
each
other’s
beliefs.
Dufwenberg
&
Kirchsteiger
(2004)
and
Falk
&
Fischbacher
(2006)
extend
Rabin’s
model
to
sequential
games.
Psychological
game
theory
provides
a
useful
toolkit
for
incorporating
psychological,
social,
and
cultural
factors
into
formal
models
of
decision‐making
and
social
interactions.
Many
applications
remain
to
be
discovered
and
tested
via
experiment.
Conclusions
Compared
to
its
impact
on
other
disciplines,
game
theory
has
had
less
impact
in
cognitive
psychology
so
far.
This
is
likely
because
many
of
the
analytical
concepts
used
to
derive
predictions
about
human
behavior
do
not
seem
to
correspond
closely
to
cognitive
mechanisms.
Some
game
theorists
have
also
complained
about
this
unrealism.
Eric
van
Damme
(1999)
wrote:
Without
having
a
broad
set
of
facts
on
which
to
theorize,
there
is
a
certain
danger
of
spending
too
much
time
on
models
that
are
mathematically
elegant,
yet
have
little
connection
to
actual
behavior.
At
present
our
empirical
knowledge
is
inadequate
and
it
is
an
interesting
question
why
game
theorists
have
not
turned
more
frequently
to
psychologists
for
information
about
the
learning
and
information
processes
used
by
humans.
But
recently,
an
approach
called
behavioral
game
theory
has
been
developed
which
uses
psychological
ideas
to
explain
both
choices
in
many
different
games,
and
associated
cognitive
and
biological evidence (Camerer, 2003; Bhatt & Camerer, 2011).
This
chapter
discussed
two
elements
of
behavioral
game
theory
that
might
be
of
most
interest
to
cognitive
psychologists:
The
cognitive
hierarchy
approach;
and
psychological
games
in
which
outcome
values
can
depend
on
beliefs,
often
accompanied
by
emotions
(e.g.,
a
low
bargaining
offer
could
create
anger
if
you
expected
more,
or
joy
if
you
expected
less).
The
cognitive
hierarchy
approach
assumes
that
some
players
choose
rapidly
and
heuristically
(“level
0”)
and
higher‐level
players
correctly
anticipate
what
lower‐level
players
do.
The
theory
has
been
used
to
explain
behavior
in
lab
games
which
is
both
far
from
and
close
to
equilibrium
in
different
games,
is
supported
by
evidence
from
visual
eyetracking
and
Mouselab,
is
evident
in
“theory
of
mind”
circuitry
during
fMRI,
and
also
can
explain
some
patterns
in
field
data
(such
as
the
Swedish
LUPI
lottery).
Research
on
psychological
games
is
less
well
developed
empirically,
but
has
much
promise
for
understanding
phenomena
like
“social
image”,
norm
enforcement,
how
emotions
are
created
by
surprises,
and
the
relationship
between
emotion,
cognition,
and
strategic
behavior.
Future
Directions
There
are
a
lot
of
open
research
questions
in
which
combining
cognitive
science
and
game
theory
would
be
useful.
Here
are
a
few:
1. Can
the
distribution
of
level
types
be
derived
endogenously
from
more
basic
principles
of
cognitive
difficulty
and
perceived
benefit,
or
perhaps
from
evolutionary
constraint
on
working
memory
and
theory
of
mind
(e.g., Stahl, 1993)?
2. CH
models
have
the
potential
to
describe
differences
in
skill
or
experience.
Skill
arises
in
everyday
discussions
about
even
simple
games
like
rock,
paper
and
scissors,
in
games
with
private
information
such
as
poker,
and
games
that
tax
working
memory
such
as
chess.
Are
skill
differences
general
or
domain‐specific?
Can
skill
be
taught?
How
does
skill
development
change
cognition
and
neural
activity?
3. The
computational
approach
to
strategic
thinking
in
behavioral
game
theory
could
be
useful
for
understanding
the
symptoms,
etiology
and
treatment
of
some
psychiatric
disorders.
Disorders
could
be
conceptualized
as
failures
to
correctly
anticipate
what
other
people
do
and
feel
in
social
interactions,
or
to
make
good
choices
given
sensible
beliefs.
For
example,
in
repeated
trust
games
King‐Casas
et
al.,
(2008)
found
that patients with borderline personality disorder (BPD) did not show
typical
activity
in
insula
cortex
in
response
to
being
mistrusted,
and
earned
less
money
because
of
the
inability
to
maintain
steady
reciprocal
trust
behaviorally.
Chiu et al. (2008)
found
that
autism
patients
had
less
activity
in
a
region
of
anterior
cingulate
that
typically
encodes
signals
of
valuation
during
one’s
own
strategic
choices
(compared
to
choices
of
others).
4. A
small
emerging
approach
in
the
study
of
literature
focuses
on
the
number
of
mental
states
that
readers
can
track, and the effects of doing so
(e.g.,
Zunshine,
2006).
One
theory
is
that
three
mental
states
are
a socially important and manageable number (e.g.,
love
triangles)
and
are
therefore
narratively
engaging.
Work
on
cognitive
schemas,
social
categorization,
computational
linguistics,
and
game
theory
could
therefore
be
of
interest
in
the
study
of
literature.
5. Formal
models
connecting
emotions
with
beliefs,
actions,
and
payoffs
can
illuminate
the
relationships
between
affective
states
and
behavior.
The
utility
function
approach
to
modeling
emotions
makes
clear
that
emotions
influence
behavior
only
when
the
hedonic
benefits
of
emotional
behavior
outweigh
the
costs.
This
approach,
which
considers
even
emotion‐driven
behavior
as
the
outcome
of
an
optimization
problem
(perhaps
sculpted
by
human
evolution
rather
than
conscious
cost‐benefit calculation, of course),
promises
to
open
up
new
avenues
of
research
studying
the
relationship
between
emotion
and
strategic
choices.
Table
captions
Table
1:
Payoffs
in
betting
game,
predictions
(Nash
and
CH),
and
results
from
classroom
demonstrations
in
2006‐08.
Upper
left
is
the
unique
Nash
equilibrium.
Table
2:
Payoffs
from
H
and
T
choice
in
a
“matching
pennies”
game,
predictions,
and
data.
Table 1:
                          predictions            Data
              L            R           Nash   CH    2006+07+08     Average
T           30, 20       10, 18        1.00   .73   .81+.86+.78    .82
B           20, 20       20, 18         .00   .27   .19+.14+.22    .18
Nash          1.00          0
CH             .89          .11
2006+07+08  .95+.95+.75  .05+.05+.25
average        .88          .12

Table 2:
                       predictions
              H      T      Nash   CH    Levels 1-2   Levels 3-4   data
H            2,0    0,1      .50   .68       1            0        .72
T            0,1    1,0      .50   .32       0            1        .28
Nash         .33    .67
CH           .26    .74
data         .33    .67
Figure
captions
Figure
1:
Choices
in
“2/3
of
the
average”
game
(Nagel,
2005?)
Figure
2:
Predicted
and
observed
behavior
in
entry
games
Figure
3:
The
game
board
from
Hedden
and
Zhang
(2002)
Figure
4:
An
icon
graph
of
visual
attention
in
three
rounds
of
bargaining
(1,
2
and
3)
and
corresponding
distributions
of
offers.
Each
column
represents
a
different
“type”
of
person‐trial
classified
by
visual
attention.
Figure
5:
Estimated
strategic
level
types
for
each
individual
in
two
sets
of
11
different
games
(Chong,
Camerer,
Ho
&
Chong,
2005).
Estimated
types
are
correlated
in
two
sets
(r=.61)
Figure
6:
Numbers
chosen
in
week
1
of
Swedish
LUPI
lottery
(N
approximately
350,000).
Dotted
line
indicates
mixed
Nash
equilibrium.
Solid
line
indicates
stochastic
cognitive
hierarchy
(CH)
model
with
two
free
parameters.
Best‐fitting
average
steps
of
thinking
is
τ
=1.80
and
λ=.0043
(logit
response).
34
Figure
7:
Brain
regions
more
active
in
level
2
reasoners
compared
to
level
1
reasoners
(classified
by
choices),
differentially
in
playing
human
compared
to
computer
opponents
(from
Coricelli
and
Nagel,
2009,
Figure
S2a).
Figure
8:
A
simple
trust
game
(Dufwenberg
and
Gneezy,
2008)
35
Figure
1
[Figure: relative frequencies of numbers (0–100) chosen in the beauty contest field experiments (Expansion, Financial Times, Spektrum); average choice 23.07.]
[Figure: percentage entry as a function of demand (as % of the number of players), comparing the entry=demand line, experimental data, and the thinking model with τ=1.25.]
Figure
2
Figure
3
Figure
4
Figure
5
Figure
6
Figure
7
Figure
8
References
Addis,
D.R.,
McIntosh,
A.R.,
Moscovitch,
M.,
Crawley,
A.P.
&
McAndrews,
M.P.
(2004).
Characterizing
spatial
and
temporal
features
of
autobiographical
memory
retrieval
networks:
a
partial
least
squares
approach.
NeuroImage,
23,
1460‐1471.
Addis,
D.R.,
Wong,
A.T.
&
Schacter,
D.L.
(2007),
Remembering
the
past
and
imagining
the
future:
common
and
distinct
neural
substrates
during
event
construction
and
elaboration.
Neuropsychologia,
45,
1363‐1377.
Akerlof,
G.
A.
(1970).
The
Market
for
"Lemons":
Quality
Uncertainty
and
the
Market
Mechanism.
The
Quarterly
Journal
of
Economics,
84,
488‐500.
Amodio,
D.M.
&
Frith,
C.D.
(2006),
Meeting
of
minds:
the
medial
frontal
cortex
and
social
cognition,
Nature
Reviews
Neuroscience,
7,
268‐277.
Aumann,
R.
and
Brandenburger,
A.
(1995),
“Epistemic
conditions
for
Nash
Equilibrium,”
Econometrica,
63,
1161‐1180.
Baker,
S.C.,
Rogers,
R.D.,
Owen,
A.M.,
Frith,
C.D.,
Dolan,
R.J.,
Frackowiak,
R.S.
&
Robbins,
T.W.
(1996),
Neural
systems
engaged
by
planning:
a
PET
study
of
the
Tower
of
London
task.
Neuropsychologia,
34,
515‐526.
Baumeister,
R.,
Stillwell,
A.
&
Heatherton,
T.
(1994),
“Guilt:
An
Interpersonal
Approach”,
Psychological
Bulletin,
115,
243‐67.
Battigalli,
P.,
and
Dufwenberg,
M.
(2007),
“Guilt
in
Games”,
American
Economic
Review,
97,
170‐176.
Battigalli,
P.,
and
Dufwenberg,
M.
(2009),
“Dynamic
Psychological
Games”,
Journal
of
Economic
Theory,
144,
1‐35.
Bazerman,
M.
H.,
&
Samuelson,
W.
F.
(1983).
I
Won
the
Auction
But
Don't
Want
the
Prize.
Journal
of
Conflict
Resolution,
27,
618‐634.
Benabou,
R.
and
Tirole,
J.
(2006),
“Incentives
and
Prosocial
Behavior”,
American
Economic
Review,
96,
1652‐1678.
Bell,
D.E.
(1982),
Regret
in
Decision
Making
under
Uncertainty,
Operations
Research
30,
961‐981.
Bell
D.
E.
(1985),
Disappointment
in
Decision
Making
under
Uncertainty,
Operations
Research,
33,
1‐27.
Bernheim,
B.D.
(1994),
“A
Theory
of
Conformity”,
The
Journal
of
Political
Economy,
102,
841‐877.
Bernheim,
B.D.
&
Andreoni,
J.
(2009),
“Social
Image
and
the
50‐50
Norm:
Theory
and
Experimental
Evidence.”
Econometrica,
77,
1607–1636.
Bhatt,
M.
&
Camerer,
C.F.
(2005),
Self‐referential
thinking
and
equilibrium
as
states
of
mind
in
games:
fMRI
evidence,
Games
and
Economic
Behavior,
52,
424.
Bhatt,
M.
&
Camerer,
C.F.
In
press.
“The
cognitive
neuroscience
of
strategic
thinking”,
in
J.
Cacciopo
and
J.
Decety
(Eds.),
Handbook
of
Social
Neuroscience.
Oxford,
England:
Oxford
University
Press.
Bhui,
R
&
Camerer,
C.
F.,
(2011),
“The
psychometrics
of
measuring
stability
of
cognitive
hierarchy
types
in
experimental
games”,
working
paper.
Binmore,
K.,
Shaked,
A.,
&
Sutton,
J.
(1985).
Testing
Noncooperative
Bargaining
Theory:
A
Preliminary
Study.
The
American
Economic
Review,
75,
1178‐1180.
Burchardi,
K.B.,
and
Penczynski,
S.P.,
(2010),
“Out
Of
Your
Mind:
Eliciting
Individual
Reasoning
in
One
Shot
Games”,
working
paper.
Brandenburger,
A.
(1992),
“Knowledge
and
Equilibrium
in
Games”,
Journal
of
Economic
Perspectives,
6(4),
83–101.
Brocas,
I.,
Carrillo,
J.D.,
Wang,
S.
&
Camerer,
C.F.
(2009),
Measuring
attention
and
strategic
behavior
in
games
with
private
information,
Mimeo
edn,
Pasadena.
Brown,
A.
L.,
Camerer,
C.
F.,
&
Lovallo,
D.
(2011).
“To
Review
or
Not
Review?
Limited
Strategic
Thinking
at
the
Movie
Box
Office”,
working
paper.
Camerer,
C.,
(1990).
Behavioral
game
theory.
In
R.
Hogarth (Ed.),
Insights
in
Decision
Making:
A
Tribute
to
Hillel
J.
Einhorn.
Chicago:
University
of
Chicago
Press.
Camerer,
C.F.
(2003),
Behavioral
game
theory,
Princeton:
Princeton
University
Press.
Camerer,
C.
&
Ho,
T.H.
(1999),
Experience‐weighted
attraction
learning
in
normal
form
games,
Econometrica,
67,
827‐874.
Camerer,
C.
F.,
Ho,
T.‐H.,
&
Chong,
J.‐K.
(2002).
Sophisticated
Experience‐Weighted
Attraction
Learning
and
Strategic
Teaching
in
Repeated
Games.
Journal
of
Economic
Theory,
104,
137‐188.
Camerer,
C.F.,
Ho,
T.H.
&
Chong,
J.K.
(2004),
A
cognitive
hierarchy
model
of
games,
Quarterly
Journal
of
Economics,
119,
861‐898.
Camerer,
C.F.,
Johnson,
E.,
Rymon,
T.
&
Sen,
S.
(1993),
“Cognition
and
Framing
in
Sequential
Bargaining
for
Gains
and
Losses.”
In
Frontiers
of
Game
Theory,
eds.
K.G.
Binmore,
A.P.
Kirman
&
P.
Tani,
MIT
Press,
Cambridge,
27‐47.
Capen,
E.
C.,
R.
V.
Clapp,
and
W.
M.
Campbell,
(1971).
Competitive
Bidding
in
High‐Risk
Situations.
Journal
of
Petroleum
Technology,
23,
641‐653.
Caplin,
A.
and
Leahy,
J.
(2004).
The
Supply
of
Information
by
a
Concerned
Expert.
The
Economic
Journal,
114,
487‐505
Carter,
C.S.
(1998).
Anterior
Cingulate
Cortex,
Error
Detection,
and
the
Online
Monitoring
of
Performance. Science, 280, 747‐749.
Cavanna,
A.E.
&
Trimble,
M.R.
(2006),
The
precuneus:
a
review
of
its
functional
anatomy
and
behavioural
correlates. Brain: A Journal of Neurology, 129(3), 564‐583.
Chang,
L.,
Smith,
A.,
Dufwenberg,
M.,
and
Sanfey,
A.
(2011).
Triangulating
the
neural,
psychological,
and
economic
bases
of
guilt
aversion.
Neuron,
in
press.
Chapman,
H.A.,
Kim,
D.
A.,
Susskind,
J.M.,
Anderson,
A.K.
(2009).
In
bad
taste:
Evidence
for
the
oral
origins
of
moral
disgust.
Science,
323,
1222‐1226.
Charness,
G.
&
Rabin,
M.
(2002).
Understanding
social
preferences
with
simple
tests.
Quarterly
Journal
of
Economics,
117,
817‐869.
Charness,
G.,
and
Dufwenberg,
M.
(2006).
Promises
and
Partnership.
Econometrica
74,
1579‐1601.
Charness,
G.,
and
Dufwenberg,
M.
(2011).
Participation.
American
Economic
Review,
forthcoming.
Cheng,
P.
W.,
&
Holyoak,
K.
J.
(1985).
Pragmatic
reasoning
schemas.
Cognitive
Psychology,
17,
391‐416.
Chiu,
P.H.,
Kayali,
M.A.,
Kishida,
K.T.,
Tomlin,
D.,
Klinger,
L.G.,
Klinger,
M.R.
&
Montague,
P.R.
(2008).
Self
responses
along
cingulate
cortex
reveal
quantitative
neural
phenotype
for
high‐functioning
autism.
Neuron,
57,
463‐473.
Chong,
J.,
Camerer,
C.F.
&
Ho,
T.H.
(2006).
A
learning‐based
model
of
repeated
games
with
incomplete
information.
Games
and
Economic
Behavior,
55,
340‐371.
Coricelli,
G.
&
Nagel,
R.
(2009).
Neural
correlates
of
depth
of
strategic
reasoning
in
medial
prefrontal
cortex.
PNAS,
106,
9163‐8.
Costa‐Gomes,
M.,
Crawford,
V.P.
&
Broseta,
B.
(2001).
Cognition
and
behavior
in
normal‐
form
games:
An
experimental
study.
Econometrica,
69,
1193‐1235.
Costa‐Gomes,
M.A.
&
Crawford,
V.P.
(2006).
Cognition
and
Behavior
in
Two‐Person
Guessing
Games:
An
Experimental
Study.
American
Economic
Review,
96,
1737‐1768.
Craig,
A.D.
(2002).
How
do
you
feel?
Interoception:
the
sense
of
the
physiological
condition
of
the
body.
Nature
Reviews
Neuroscience,
3,
655‐666.
Crawford,
V.P,
Costa‐Gomes,
M.A.,
and
Iriberri,
N.,
(2010),
“Strategic Thinking,”
working
paper.
Critchley,
H.D.
(2005).
Neural
mechanisms
of
autonomic,
affective,
and
cognitive
integration.
The
Journal
of
comparative
neurology,
493,
154‐166.
Crockett,
M.J.,
Clark,
L.,
Tabibnia,
G.,
Lieberman,
M.D.,
Robbins,
T.W.
(2008).
Serotonin
modulates
behavioral
reactions
to
unfairness.
Science,
320,
1739.
Culham,
J.C.,
Brandt,
S.A.,
Cavanagh,
P.,
Kanwisher,
N.G.,
Dale,
A.M.
&
Tootell,
R.B.
(1998).
Cortical
fMRI
activation
produced
by
attentive
tracking
of
moving
targets.
Journal
of
neurophysiology,
80,
2657‐2670.
D'Argembeau,
A.,
Ruby,
P.,
Collette,
F.,
Degueldre,
C.,
Balteau,
E.,
Luxen,
A.,
Maquet,
P.
&
Salmon,
E.
(2007).
Distinct
regions
of
the
medial
prefrontal
cortex
are
associated
with
self‐referential
processing
and
perspective
taking.
Journal
of
cognitive
neuroscience,
19,
935‐944.
Devetag,
G.,
&
Warglien,
M.
(2003).
Games
and
phone
numbers:
Do
short‐term
memory
bounds
affect
strategic
behavior?
Journal
of
Economic
Psychology,
24,
189‐202.
Duffy,
J.,
&
Hopkins,
E.,
(2005).
Learning,
information,
and
sorting
in
market
entry
games:
theory
and
evidence.
Games
and
Economic
Behavior,
51,
31‐62.
Dufwenberg,
M.
and
Gneezy,
U.
(2000).
Measuring
Beliefs
in
an
Experimental
Lost
Wallet
Game.
Games
and
Economic
Behavior,
30,
163‐182
Dufwenberg,
M.
&
Kirchsteiger,
G.
(2004).
A
Theory
of
Sequential
Reciprocity.
Games
and
Economic
Behavior.
47,
268‐298.
Eisenberger,
N.
I.,
Lieberman,
M.
D.,
&
Williams,
K.
D.
(2003).
Does
Rejection
Hurt?
An
fMRI
Study
of
Social
Exclusion.
Science,
302,
290‐292.
Ellingsen,
T.,
Johannesson,
M.,
Tjotta,
S.
Gaute
Torsvik,G.
(2010).
Testing
guilt
aversion.
Games
and
Economic
Behavior,
68,
95‐107.
Farrer,
C.
&
Frith,
C.D.
(2002).
Experiencing
oneself
vs.
another
person
as
being
the
cause
of
an
action:
the
neural
correlates
of
the
experience
of
agency.
NeuroImage,
15,
596‐603.
Fehr,
E.
&
Camerer,
C.F.
(2007).
Social
neuroeconomics:
the
neural
circuitry
of
social
preferences.
Trends
in
Cognitive
Sciences,
11,
419‐27.
Fehr,
E.
&
Schmidt,
K.M.
(1999).
A
Theory
of
Fairness,
Competition,
and
Cooperation.
Quarterly
Journal
of
Economics,
114,
817‐868.
Fiddick,
L.,
Cosmides,
L,
&
Tooby,
J.
(2000).
No
interpretation
without
representation:
the
role
of
domain‐specific
representations
and
inferences
in
the
Wason
selection
task.
Cognition,
77,
1‐79.
48
Fletcher, P.C., Frith, C.D., Baker, S.C., Shallice, T., Frackowiak, R.S., & Dolan, R.J. (1995). The mind's eye: precuneus activation in memory-related imagery. NeuroImage, 2, 195-200.

Fudenberg, D., & Levine, D. (1998). The Theory of Learning in Games. Cambridge, MA: MIT Press.

Gallagher, H.L., Jack, A.I., Roepstorff, A., & Frith, C.D. (2002). Imaging the Intentional Stance in a Competitive Game. NeuroImage, 16, 814-821.

Geanakoplos, J., Pearce, D., & Stacchetti, E. (1989). Psychological games and sequential rationality. Games and Economic Behavior, 1, 60-79.
Georganas, S., Healy, P.J., & Weber, R. (2010). On the persistence of strategic sophistication. Working paper.

Gill, D., & Stone, R. (2010). Fairness and desert in tournaments. Games and Economic Behavior, 69, 346-364.

Gillen, B. (2009). Identification and Estimation of Level-k Auctions. Working paper.

Glimcher, P.W., Camerer, C.F., Fehr, E., & Poldrack, R. (Eds.) (2008). Neuroeconomics: Decision Making and the Brain. London: Academic Press.

Goldfarb, A., & Xiao, M. (2011). Who Thinks about the Competition: Managerial Ability and Strategic Entry in US Local Telephone Markets. Forthcoming, American Economic Review.

Goldfarb, A., & Yang, B. (2009). Are All Managers Created Equal? Journal of Marketing Research, 46, 612-622.
Hampton, A., Bossaerts, P., & O'Doherty, J. (2008). Neural correlates of mentalizing-related computations during strategic interactions in humans. PNAS, 105, 6741-6746.

Harsanyi, J.C. (1967). Games with Incomplete Information Played by 'Bayesian' Players, I-III. Part I. The Basic Model. Management Science, 14, 159-182.

Hayden, B., Pearson, M., & Platt, M.L. (2009). Fictive learning signals in anterior cingulate cortex. Science, 324, 948-950.

Hsu, M., Anen, C., & Quartz, S.R. (2008). The Right and the Good: Distributive Justice and Neural Encoding of Equity and Efficiency. Science, 320, 1092-1095.
Izuma, K., Saito, D.N., & Sadato, N. (2008). Processing of social and monetary rewards in the human striatum. Neuron, 58, 284-294.

Izuma, K., Saito, D.N., & Sadato, N. (2010). Processing of the incentive for social approval in the ventral striatum during charitable donation. Journal of Cognitive Neuroscience, 22, 621-631.

Johnson, E.J., Camerer, C., Sen, S., & Rymon, T. (2002). Detecting Failures of Backward Induction: Monitoring Information Search in Sequential Bargaining. Journal of Economic Theory, 104, 16-47.
Kahneman, D. (1988). Experimental Economics: A Psychological Perspective. In R. Tietz, W. Albers, & R. Selten (Eds.), Bounded Rational Behavior in Experimental Games and Markets (pp. 11-18). New York: Springer-Verlag.

Keynes, J.M. (1936). The General Theory of Employment, Interest, and Money. London: Macmillan.
Keysers, C., & Gazzola, V. (2007). Integrating simulation and theory of mind: from self to social cognition. Trends in Cognitive Sciences, 11, 194-196.

King-Casas, B., Sharp, C., Lomax-Bream, L., Lohrenz, T., Fonagy, P., & Montague, P.R. (2008). The rupture and repair of cooperation in borderline personality disorder. Science, 321, 806-810.

Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V., & Fehr, E. (2006). Diminishing Reciprocal Fairness by Disrupting the Right Prefrontal Cortex. Science, 314, 829-832.

Kross, E., Berman, M.G., Mischel, W., Smith, E.E., & Wager, T.D. (2011). Social rejection shares somatosensory representations with physical pain. Proceedings of the National Academy of Sciences, 108, 6270-6275.

Kuo, W.J., Sjostrom, T., Chen, Y.P., Wang, Y.H., & Huang, C.Y. (2009). Intuition and deliberation: two systems for strategizing in the brain. Science, 324, 519-522.
Le, T.H., Pardo, J.V., & Hu, X. (1998). 4 T-fMRI study of nonspatial shifting of selective attention: cerebellar and parietal contributions. Journal of Neurophysiology, 79, 1535-1548.

Li, C.S., Huang, C., Constable, R.T., & Sinha, R. (2006). Imaging response inhibition in a stop-signal task: neural correlates independent of signal monitoring and post-response processing. Journal of Neuroscience, 26, 186-192.
Lohrenz, T., McCabe, K., Camerer, C.F., & Montague, P.R. (2007). Neural signature of fictive learning signals in a sequential investment task. PNAS, 104, 9493-9498.

Loomes, G., & Sugden, R. (1982). Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty. The Economic Journal, 92, 805-824.

Loomes, G., & Sugden, R. (1986). Disappointment and Dynamic Consistency in Choice under Uncertainty. The Review of Economic Studies, 53, 271-282.

Luce, R.D. (1959). Individual choice behavior. Oxford, England: John Wiley.

Luhmann, C.C., Chun, M.M., Yi, D.J., Lee, D., & Wang, X.J. (2008). Neural Dissociation of Delay and Uncertainty in Intertemporal Choice. Journal of Neuroscience, 28, 14459-14466.
Lundstrom, B., Petersson, K.M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: activation of the left precuneus and left prefrontal cortex. NeuroImage, 20, 1934.

McCabe, K., Houser, D., Ryan, L., Smith, V., & Trouard, T. (2001). A Functional Imaging Study of Cooperation in Two-Person Reciprocal Exchange. Proceedings of the National Academy of Sciences of the United States of America, 98, 11832-11835.

Mobbs, D., Yu, R., Meyer, M., Passamonti, L., Seymour, B., Calder, A.J., Schweizer, S., Frith, C.D., & Dalgleish, T. (2009). A key role for similarity in vicarious reward. Science, 324, 900.
Nagahama, Y., Okada, T., Katsumi, Y., Hayashi, T., Yamauchi, H., Sawamoto, N., Toma, K., Nakamura, K., Hanakawa, T., Konishi, J., Fukuyama, H., & Shibasaki, H. (1999). Transient neural activity in the medial superior frontal gyrus and precuneus time locked with attention shift between object features. NeuroImage, 10, 193-199.
Nagel, R. (1995). Unraveling in Guessing Games: An Experimental Study. The American Economic Review, 85, 1313-1326.

Nash, J.F. (1950). Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences of the United States of America, 36, 48-49.

Neelin, J., Sonnenschein, H., & Spiegel, M. (1988). A Further Test of Noncooperative Bargaining Theory. American Economic Review, 78, 824-836.

Ochsner, K., Hughes, B., Robertson, E., Gabrieli, J., Cooper, J., & Gabrieli, J. (2009). Neural Systems Supporting the Control of Affective and Cognitive Conflicts. Journal of Cognitive Neuroscience, 21, 1841-1854.
Östling, R., Wang, J.T.-y., Chou, E., & Camerer, C.F. (2011). Testing Game Theory in the Field: Swedish LUPI Lottery Games. Forthcoming, American Economic Journal: Microeconomics.

Preuschoff, K., Quartz, S.R., & Bossaerts, P. (2008). Human insula activation reflects risk prediction errors as well as risk. Journal of Neuroscience, 28, 2745-2752.

Rabin, M. (1993). Incorporating Fairness into Game Theory and Economics. American Economic Review, 83, 1281-1302.
Raichle, M.E., MacLeod, A.M., Snyder, A.Z., Powers, W.J., Gusnard, D.A., & Shulman, G.L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences of the United States of America, 98, 676-682.

Reuben, E., Sapienza, P., & Zingales, L. (2009). Is mistrust self-fulfilling? Economics Letters, 104, 89-91.

Ridderinkhof, K.R., Ullsperger, M., Crone, E.A., & Nieuwenhuis, S. (2004). The role of the medial frontal cortex in cognitive control. Science, 306, 443-447.

Rotemberg, J.J. (2008). Minimally Acceptable Altruism and the Ultimatum Game. Journal of Economic Behavior & Organization, 66, 457-476.
Ruby, P., & Decety, J. (2001). Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nature Neuroscience, 4, 546-550.

Sanfey, A.G., Rilling, J.K., Aronson, J.A., Nystrom, L.E., & Cohen, J.D. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300, 1755-1758.

Saxe, R., & Powell, L.J. (2006). It's the thought that counts: specific brain regions for one component of theory of mind. Psychological Science, 17, 692-699.

Sbriglia, P. (2008). Revealing the depth of reasoning in p-beauty contest games. Experimental Economics, 11, 107-121.

Shallice, T., Fletcher, P.C., Frith, C.D., Grasby, P., Frackowiak, R.S., & Dolan, R.J. (1994). Brain regions associated with acquisition and retrieval of verbal episodic memory. Nature, 368, 633-635.
Simon, O., Mangin, J., Cohen, L., Le Bihan, D., & Dehaene, S. (2002). Topographical Layout of Hand, Eye, Calculation, and Language-Related Areas in the Human Parietal Lobe. Neuron, 33, 475-487.

Smith, A. (2009). Belief-Dependent Anger in Games. Working paper.

Stahl, D.O. (1993). Evolution of Smart n Players. Games and Economic Behavior, 5, 604-617.

Stahl, D.O., & Wilson, P.W. (1995). On Players' Models of Other Players: Theory and Experimental Evidence. Games and Economic Behavior, 10, 218-254.
Tadelis, S. (2011). The Power of Shame and the Rationality of Trust. Working paper.

Takahashi, H., Kato, M., Matsuura, M., Mobbs, D., Suhara, T., & Okubo, Y. (2009). When your gain is my pain and your pain is my gain: neural correlates of envy and schadenfreude. Science, 323, 937-939.

Tomlin, D., Kayali, M.A., King-Casas, B., Anen, C., Camerer, C.F., Quartz, S.R., & Montague, P.R. (2006). Agent-specific responses in the cingulate cortex during economic exchanges. Science, 312, 1047-1050.

Tricomi, E., Rangel, A., Camerer, C.F., & O'Doherty, J.P. (2010). Neural evidence for inequality-averse social preferences. Nature, 463, 1089-1091.

Unterrainer, J.M., Rahm, B., Kaller, C.P., Ruff, C.C., Spreer, J., Krause, B.J., Schwarzwald, R., Hautzel, H., & Halsband, U. (2004). When planning fails: individual differences and error-related brain activity in problem solving. Cerebral Cortex, 14, 1390-1397.
Van Damme, E. (1999). Game theory: The next stage. In L.A. Gérard-Varet, A.P. Kirman, & M. Ruggiero (Eds.), Economics beyond the Millennium (pp. 184-214). Oxford: Oxford University Press.

Vogeley, K., Bussfeld, P., Newen, A., Herrmann, S., Happe, F., Falkai, P., Maier, W., Shah, N.J., Fink, G.R., & Zilles, K. (2001). Mind reading: neural mechanisms of theory of mind and self-perspective. NeuroImage, 14, 170-181.

Vogeley, K., May, M., Ritzl, A., Falkai, P., Zilles, K., & Fink, G.R. (2004). Neural correlates of first-person perspective as one constituent of human self-consciousness. Journal of Cognitive Neuroscience, 16, 817-827.

von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.
Wang, J.T., Spezio, M., & Camerer, C.F. (2010). Pinocchio's Pupil: Using Eyetracking and Pupil Dilation to Understand Truth-telling and Deception in Games. American Economic Review, 100, 984-1007.

Weber, R.A. (2003). 'Learning' with no feedback in a competitive guessing game. Games and Economic Behavior, 44, 134-144.

Yoshida, W., Seymour, B., Friston, K.J., & Dolan, R. (2009). Neural mechanism of belief inference during cooperative games. Journal of Neuroscience, 30, 10744-10751.
Yoshida, W., Dolan, R.J., & Friston, K.J. (2008). Game theory of mind. PLoS Computational Biology, 4(12), e1000254.
Zunshine, L. (2006). Why We Read Fiction: Theory of Mind and the Novel. Columbus: Ohio State University Press.
i Another important component of behavioral game theory is learning from repeated play, perhaps using reinforcement rules as well as model-based "fictive learning" (Camerer & Ho, 1999). Learning models are widely studied but lie beyond the scope of this chapter (see, e.g., Fudenberg & Levine, 1998; Camerer, 2003, chapter 6).
ii Other mechanisms that could produce equilibration include learning from observation, introspection, calculation (such as firms hiring consultants to advise on how to bid in auctions), imitation of attention-getting or successful strategies or people, or a process of pre-play talking about future choices. The learning literature is well developed (e.g., Camerer, 2003, chapter 6), but the study of imitation and pre-play talking could certainly use more collaboration between game theorists and psychologists.
iii Common knowledge requires, for two players, that A knows that B knows that A knows… ad infinitum.
iv A more general view is that level-0 players choose intuitively or "heuristically" (perhaps based on visually salient strategies or payoffs, or "lucky numbers"), but that topic has not been explored very much.
v Restricting communication is certainly not realistic, and is not meant to be. Instead, communication is restricted because choosing what to say is itself a "strategy" choice that complicates analysis of the game: it opens a Pandora's box of possible effects that lie outside the scope of standard game theory. However, game theorists are well aware of the potentially powerful effects of communication and have begun to study it in simple ways. In her thesis, Nagel (1995) reports some subject debriefings that are illustrative of CH thinking, and Sbriglia (2008) reports some protocols too. Burchardi and Penczynski (2010) also used chat messaging and team choice to study communication, and report evidence largely consistent with CH reasoning.
vi This analysis assumes τ = 1.5, but the general point holds more widely.
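In the cognitive hierarchy model, the frequencies of level-k thinkers follow a Poisson distribution with mean τ. As a minimal sketch of what τ = 1.5 implies (the truncation at six levels and the renormalization are our own illustrative choices, not part of any particular estimation):

```python
import math

def poisson_level_frequencies(tau, max_level=6):
    """Frequencies of level-0..max_level thinkers under a Poisson(tau)
    distribution, truncated and renormalized to sum to one."""
    weights = [math.exp(-tau) * tau**k / math.factorial(k)
               for k in range(max_level + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# With tau = 1.5, level-1 thinkers are the most common type,
# and frequencies fall off quickly above level 3.
freqs = poisson_level_frequencies(1.5)
```

With τ = 1.5 roughly a third of players are level-1, a quarter are level-2, and very few reason beyond four steps, which is why low-level behavior dominates predicted play.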
vii Note that this is a close relative of a "threshold public goods" game. In that game, a public good that benefits everyone is created if T people contribute, but if even one person does not contribute (so that contributions fall below T), the public good is not produced. In that case, everyone would like to be in the N − T group of people who benefit without paying.
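The payoff structure just described can be sketched compactly. This is only an illustration, assuming a good worth 10 to each player and a contribution cost of 4 (both numbers are hypothetical):

```python
def threshold_payoff(contributes, num_contributors, T,
                     benefit=10, cost=4):
    """Payoff to one player in a stylized threshold public-goods game:
    the good, worth `benefit` to every player, is produced only if at
    least T players contribute; contributing costs `cost` either way."""
    produced = num_contributors >= T
    return (benefit if produced else 0) - (cost if contributes else 0)
```

A free rider in the N − T group earns the full benefit at no cost, while a contributor whose group falls short of the threshold pays the cost for nothing, which is exactly the tension the note points to.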
viii p(X) is a "common prior." An example is a game of cards, in which everyone knows the contents of the card deck; players do not know which face-down cards the other players are holding, but they do know that the other players know their own face-down cards.
ix There are several subtle variants. In the original Mouselab, boxes open and close automatically when the mouse enters and exits. Costa-Gomes et al. (2001) wanted more deliberate attention, so they created a version in which a click is required to open a box. Brocas et al. (2010) created a version that requires the mouse button to be held down to view box contents (if the button press is halted, the information disappears).
x The rostral ACC, labeled rACC, is more active in level-1 than in level-2 players in the human-computer contrast.
xi DLPFC is also involved in cognitive regulation of emotions (e.g., Ochsner et al., 2009).
xii What we have in mind here is similar to the arguments of Holyoak and Cheng (1985) and of Fiddick, Cosmides, and Tooby (2000) about the difference between abstract logic performance and contextualized performance. For example, games that resemble hiding food and guarding hidden locations might map roughly onto something like poker, whereas many games constructed for challenge and entertainment, such as chess, do not have clear counterparts in ancestral adaptive environments.
xiii Other models of belief-dependent utility can be placed in the general framework of Battigalli and Dufwenberg (2009). For example, Caplin and Leahy (2004) model doctor-patient interactions where uncertainty may cause patient anxiety. The doctor is concerned about the patient's well-being and must decide whether or not to provide (potentially) anxiety-causing diagnostic information. Bernheim (1994) proposes a model of conformity in which players care about the beliefs their coplayers hold regarding their preferences; the model can produce fads and adherence to social norms. Related work by Benabou and Tirole (2006) models players who are altruistic and who also care about others' inferences about how altruistic they are. Gill and Stone (2010) model players who care about what they feel they deserve in two-player tournaments. The players' perceived entitlements depend upon their own effort level and the efforts of others.
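Belief-dependent motivations of this kind can be written down very compactly. As a minimal sketch in the spirit of the guilt aversion discussed in this chapter (the function name, functional form, and sensitivity parameter theta are our own illustrative simplifications, not any particular author's specification), a guilt-averse player's utility is her material payoff minus a penalty for letting the co-player down relative to what the co-player expected:

```python
def guilt_averse_utility(own_payoff, other_payoff,
                         other_expected_payoff, theta=1.0):
    """Illustrative belief-dependent utility: material payoff minus a
    guilt penalty proportional to how far the co-player's realized
    payoff falls short of what the co-player expected to receive."""
    let_down = max(0.0, other_expected_payoff - other_payoff)
    return own_payoff - theta * let_down
```

When the co-player gets at least what she expected, the penalty term vanishes and utility is just the material payoff; the larger theta is, the more the player will sacrifice to avoid disappointing the co-player.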
xiv A (single-person) decision problem involving any of these emotions may be modeled as a psychological game with one player and moves by nature.