
Econ 3010: Intermediate Price Theory (microeconomics)
Professor David L. Dickinson
PROBLEM SET #3: ANSWERS
GAME THEORY
1)
In short, a Prisoners’ Dilemma game involves conflict between self-interest and
joint interests. In other words, players have dominant strategies that lead to a Nash
Equilibrium in which both players are worse off than at some other outcome—the Pareto
Efficient outcome. The matrix below describes a Prisoners’ Dilemma game in the
context of the Cold War arms buildup between Russia and the U.S. The numbers I use
are not the only ones that work. What is key is that there is a dominant strategy for each
to build arms, but the joint outcome is best when neither builds arms. (as always, the first
number in each cell represents the payoff of the row player (the U.S. here) and the second
number is the payoff of the column player).
                                     RUSSIA
                          Build arms         Don't build arms
U.S.   Build arms        -100 , -100          75 , -200
       Don't build arms  -200 ,  75           50 ,  50
To analyze a duopoly game between Acme and Gizmo, just replace the strategy "build arms" with "produce much" and the strategy "don't build arms" with "produce little". The payoffs in the cells can now represent profits (or changes in profits), where Acme and Gizmo do jointly best when they both produce little (i.e., they behave as a single monopolist), but each has an individual incentive to cheat on any implicit or explicit deal to restrict production. In the end, they both end up producing a lot and together earn less than they would have at the Pareto Efficient outcome (i.e., they behave more like competitive firms and drive down profits).
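If you want to check the dominant-strategy logic mechanically, here is a small Python sketch (not part of the original answer, and using the illustrative payoff numbers from the matrix above) that enumerates best responses for both countries. The same check works for the Acme/Gizmo version once you relabel the strategies.

    # Sketch: verify the Prisoners' Dilemma structure of the arms-race payoffs above.
    # Payoffs are (U.S., Russia); strategies are "Build" and "Don't".
    payoffs = {
        ("Build", "Build"): (-100, -100),
        ("Build", "Don't"): (75, -200),
        ("Don't", "Build"): (-200, 75),
        ("Don't", "Don't"): (50, 50),
    }
    strategies = ["Build", "Don't"]

    # "Build" is dominant for the U.S. if it beats "Don't" against every Russian choice.
    us_build_dominant = all(
        payoffs[("Build", r)][0] > payoffs[("Don't", r)][0] for r in strategies
    )
    # Same check for Russia, using the second payoff in each cell.
    russia_build_dominant = all(
        payoffs[(u, "Build")][1] > payoffs[(u, "Don't")][1] for u in strategies
    )
    print(us_build_dominant, russia_build_dominant)  # True True: both build in equilibrium
    print(payoffs[("Build", "Build")], payoffs[("Don't", "Don't")])  # (-100, -100) vs. (50, 50)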
2)
Below is the Batter/Pitcher game matrix, with each player's best response to each action of the other marked with an asterisk (*).
                                        Player B (Pitcher)
                        Hi Fastball    Low Fastball    Hi Change-up    Low Change-up
Player A   Hi           100* , -10       0 ,  80*        80* , 10         0 , 70
(Batter)   Middle        20 ,  45       30 ,  40         10 ,  50*       20 , 40
           Low            0 ,  80*     100* , -10         0 ,  70        80* , 10
You can see that the Batter does not have a dominant strategy, because sometimes he will swing Hi and sometimes Low, depending on what the Pitcher does. Swinging "Middle" is a dominated strategy, however, because the Batter always does better by swinging Hi or Low (for example, a 50/50 mix of Hi and Low beats Middle against every pitch). The Pitcher does not have a dominant strategy either; notice, though, that a Low Change-up is never the Pitcher's best response to any swing.
Given that there is no strategy pair for which both players are doing the best they can given the action of the other, there is no (pure strategy) Nash equilibrium in this game. A mixed strategy equilibrium does exist, however, and in this context one could interpret the assignment of probabilities to each pure strategy as a way of staying unpredictable in the eyes of the other player, which is what goes on in many different sporting contexts (see Problem 4 for calculation of the mixed strategy equilibrium in a simpler penalty-kick game).
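As a check on the no-pure-equilibrium claim, here is a rough Python sketch (my addition, using the payoffs from the matrix above) that searches every cell for a mutual best response.

    # Sketch: confirm there is no pure-strategy Nash equilibrium in the Batter/Pitcher matrix.
    rows = ["Hi", "Middle", "Low"]                                           # Batter's strategies
    cols = ["Hi Fastball", "Low Fastball", "Hi Change-up", "Low Change-up"]  # Pitcher's strategies
    batter = {"Hi": [100, 0, 80, 0], "Middle": [20, 30, 10, 20], "Low": [0, 100, 0, 80]}
    pitcher = {"Hi": [-10, 80, 10, 70], "Middle": [45, 40, 50, 40], "Low": [80, -10, 70, 10]}

    pure_nash = []
    for r in rows:
        for j, c in enumerate(cols):
            batter_best = batter[r][j] == max(batter[rr][j] for rr in rows)  # best swing vs. this pitch
            pitcher_best = pitcher[r][j] == max(pitcher[r])                  # best pitch vs. this swing
            if batter_best and pitcher_best:
                pure_nash.append((r, c))
    print(pure_nash)  # [] -- no cell is a mutual best response, so no pure-strategy equilibrium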
3) The following game tree describes the Batter/Pitcher game now that the pitcher moves first and the batter sees the pitcher's strategy choice. As before, the first payoff listed in parentheses is the Batter's payoff, and the second payoff is the Pitcher's.
[Game tree: the Pitcher throws first, then the Batter chooses his swing. Payoffs are (Batter, Pitcher).]
  Pitcher throws Hi Fastball:   Batter swings High (100, -10), Mid (20, 45), Low (0, 80)
  Pitcher throws Low Fastball:  Batter swings High (0, 80), Mid (30, 40), Low (100, -10)
  Pitcher throws Hi Change-up:  Batter swings High (80, 10), Mid (10, 50), Low (0, 70)
  Pitcher throws Low Change-up: Batter swings High (0, 70), Mid (20, 40), Low (80, 10)
Now, the pitcher will never throw a fastball. If he did, then the batter would hit it for a massive home run, and the pitcher's payoff would be -10 (a humiliation payoff, I guess). By throwing a change-up, the pitcher can guarantee a payoff of 10, which is better than -10. The outcome we predict (by backwards induction) is that the pitcher will throw some type of change-up (either hi or low), which will be smacked for a home run, yielding payoff outcome (80,10). Wow, what an advantage the batter would have to be able to see the pitches this well! Perhaps a more thorough analysis would then add a fifth Pitcher strategy: the bean ball!
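The backwards induction argument can also be sketched in a few lines of Python (an illustration added here, with the payoffs taken from the Problem 2 matrix):

    # Sketch: backwards induction on the sequential Batter/Pitcher game.
    # Payoffs are (Batter, Pitcher), taken from the Problem 2 matrix.
    tree = {
        "Hi Fastball":   {"High": (100, -10), "Mid": (20, 45), "Low": (0, 80)},
        "Low Fastball":  {"High": (0, 80),    "Mid": (30, 40), "Low": (100, -10)},
        "Hi Change-up":  {"High": (80, 10),   "Mid": (10, 50), "Low": (0, 70)},
        "Low Change-up": {"High": (0, 70),    "Mid": (20, 40), "Low": (80, 10)},
    }
    # Step 1: for each pitch, the Batter picks the swing that maximizes his own (first) payoff.
    batter_reply = {pitch: max(tree[pitch], key=lambda swing: tree[pitch][swing][0]) for pitch in tree}
    # Step 2: the Pitcher anticipates those replies and picks the pitch with the best (second) payoff.
    best_pitch = max(tree, key=lambda pitch: tree[pitch][batter_reply[pitch]][1])
    print(batter_reply)                                            # fastballs and change-ups all get smacked
    print(best_pitch, tree[best_pitch][batter_reply[best_pitch]])  # a change-up, with outcome (80, 10)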
4)
If the Chinese goalie's expected payoff from diving right is q*0 + (1-q)*(-10) = 10q - 10, and her expected payoff from diving left is q*(-10) + (1-q)*0 = -10q, then she will dive right as long as the expected payoff from doing so is higher. In other words, she'll dive right if 10q - 10 > -10q, or q > ½.
Similarly, the U.S. kicker will kick to the goalie's right when p < ½. So, as long as p ≠ ½ the U.S. kicker has a unique best response, and as long as q ≠ ½ the Chinese goalie has a unique best response. We can depict this situation in the simple graph below.
[Graph: Penalty Kick Game. The goalie's reaction function and the kicker's reaction function are plotted with p (Chinese Goalie) on the vertical axis and q (U.S. Kicker) on the horizontal axis, each running from 0 to 1; the two reaction functions cross only at p = q = ½.]
You can see that the only equilibrium is a mixed strategy equilibrium in which each action is taken with probability ½ (by both players).
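A small Python sketch (added for illustration, using only the goalie payoffs given above) shows the indifference point at q = ½:

    # Sketch: the Chinese goalie's expected payoffs as a function of q,
    # where q is the probability the U.S. kicker kicks to the goalie's right.
    def ev_dive_right(q):
        return q * 0 + (1 - q) * (-10)   # equals 10q - 10

    def ev_dive_left(q):
        return q * (-10) + (1 - q) * 0   # equals -10q

    for q in [0.25, 0.5, 0.75]:
        print(q, ev_dive_right(q), ev_dive_left(q))
    # Only at q = 0.5 are the two expected payoffs equal (-5 each), so only q = 1/2 keeps
    # the goalie indifferent; the symmetric argument for the kicker gives p = 1/2, which
    # is the mixed-strategy equilibrium.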
5)
In this Battle-of-the-Sexes game, there are two pure-strategy Nash equilibria, marked below with asterisks. The equilibria are that they either both go to the Opera or both go to the Wrestling match. This game is an example of a "coordination" game, in which payoffs are highest when players can coordinate their actions.
                                        Wife
                          Go see Wrestling    Go to the Opera
Husband  Go see Wrestling      2 , 1 *             0 , 0
         Go to the Opera       0 , 0               1 , 2 *
(This game also has a mixed-strategy equilibrium in which each player chooses his/her preferred activity with probability 2/3. Use the same process as in the previous question (4) to find this mixed strategy equilibrium; a quick check of the answer follows below.)
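For concreteness, here is a small Python check (an added illustration, not part of the original answer) of the Wife's indifference condition when the Husband goes to Wrestling with probability 2/3:

    # Sketch: verify the 2/3 mixing claim for the Battle-of-the-Sexes matrix above.
    # Let h = probability the Husband goes to Wrestling (his preferred activity).
    h = 2 / 3
    wife_wrestling = h * 1 + (1 - h) * 0   # Wife's expected payoff from Wrestling
    wife_opera = h * 0 + (1 - h) * 2       # Wife's expected payoff from Opera
    print(wife_wrestling, wife_opera)      # both 2/3, so h = 2/3 keeps the Wife indifferent
    # The same algebra applied to the Wife (Opera with probability 2/3) keeps the Husband indifferent.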
It might be that the Opera is closer to their house than the Wrestling match, for example, but we do need to add some assumption like this, or else it is hard to pick among the different equilibria. If the husband and wife have some reason to believe that one of the equilibria is more "natural" than the other, for whatever reason, then we could say that the more natural equilibrium is a "focal point" outcome.
UNCERTAINTY
1)
The expected value of this lottery is just
.4*$100 + .3*$150 + .2*$250 + .1*$400 = $175.
The variance of this lottery is found by taking the sum of the squared deviations from the
expected payoff ($175), weighted by their appropriate probability of occurrence.
Variance = (100-175)^2*.4 + (150-175)^2*.3 + (250-175)^2*.2 + (400-175)^2*.1
= 2250 + 187.5 + 1125 + 5062.5 = 8625 (the variance)
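The same arithmetic in a short Python sketch (added for convenience):

    # Sketch: expected value and variance of the lottery above.
    probs = [0.4, 0.3, 0.2, 0.1]
    payoffs = [100, 150, 250, 400]
    ev = sum(p * x for p, x in zip(probs, payoffs))
    var = sum(p * (x - ev) ** 2 for p, x in zip(probs, payoffs))
    print(ev, var)   # 175.0 and 8625.0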
2)
The probability distribution could be sketched as the following histogram, with
the height of each payoff bar representing the probability of that payoff’s occurrence.
[Histogram: payoffs of $100, $150, $250, and $400 on the horizontal axis, with bar heights (probabilities of occurrence) of .4, .3, .2, and .1, respectively.]
To calculate the “certainty equivalents”, first take the expected utility of the lottery by
considering the weighted utility of each contingency, all added up.
For U(x) = √x, we have
EU(x) = .4*√100 + .3*√150 + .2*√250 + .1*√400 ≈ 12.84
The certainty equivalent is the payoff amount that, if provided with certainty, would yield this same level of utility. So, just solve 12.84 = √x, or x = 12.84^2 ≈ $165. In other words, the risk-averse individual with U(x) = √x would accept $165 with certainty to be just as well off as playing a lottery with expected payoff of $175.
For U(x) = x^2, the expected utility of the lottery is given by
EU(x) = .4*100^2 + .3*150^2 + .2*250^2 + .1*400^2 = 39,250. The certain amount that would provide the same utility level is found by solving
39,250 = x^2, or x = √39,250 ≈ $198. In other words, you would have to pay this risk-loving individual a premium over the lottery's expected payoff of $175 before he/she would be indifferent between the certain amount and the lottery.
By similar methods, you should confirm that the certainty equivalent for the risk-neutral individual is just $175 (hopefully, this is not surprising).
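Here is a compact Python sketch (an added illustration) that reproduces all three certainty equivalents:

    # Sketch: certainty equivalents of the lottery under three utility functions.
    probs = [0.4, 0.3, 0.2, 0.1]
    payoffs = [100, 150, 250, 400]

    def certainty_equivalent(u, u_inverse):
        expected_utility = sum(p * u(x) for p, x in zip(probs, payoffs))
        return u_inverse(expected_utility)      # map expected utility back into dollars

    print(certainty_equivalent(lambda x: x ** 0.5, lambda v: v ** 2))    # about 165 (risk-averse)
    print(certainty_equivalent(lambda x: x ** 2, lambda v: v ** 0.5))    # about 198 (risk-loving)
    print(certainty_equivalent(lambda x: x, lambda v: v))                # exactly 175 (risk-neutral)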
3)
For a risk-averse individual, the more concave the utility function, the more risk-averse the individual. For example, an individual with utility function U(x) = x^(1/4) is even more risk-averse than someone with utility function U(x) = √x = x^(1/2). This difference is reflected in the graph below.
[Graph: utility of wealth w, showing U(w) = w^(1/2) and the more sharply concave U(w) = w^(1/4).]
The same goes for risk-loving individuals. The more convex the utility function (i.e., the more sharply and quickly it rises), the more risk-loving is the individual. Notice in all cases, however, that the utility functions never slope downward. This is an important detail, because a downward-sloping segment would imply that, beyond a certain point, utility decreases in wealth.
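A quick numerical illustration of this point (added here, reusing the lottery from Problem 1): the certainty equivalent falls as the exponent in U(x) = x^a shrinks, i.e., as the utility function becomes more concave.

    # Sketch: certainty equivalent of the Problem 1 lottery under U(x) = x**a.
    probs = [0.4, 0.3, 0.2, 0.1]
    payoffs = [100, 150, 250, 400]

    def ce(a):
        expected_utility = sum(p * x ** a for p, x in zip(probs, payoffs))
        return expected_utility ** (1 / a)      # invert U(x) = x**a

    print(ce(0.5))    # about 165 for U(x) = x^(1/2)
    print(ce(0.25))   # about 160 for the more concave (more risk-averse) U(x) = x^(1/4)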
4)
The expected value of this new lottery is
.25*$25 + .25*$150 + .25*$200 + .25*$325 = $175 (same as before).
However, the variance of this lottery is now
Variance = (25-175)^2*.25 + (150-175)^2*.25 + (200-175)^2*.25 + (325-175)^2*.25
= 5625 + 156.25 + 156.25 + 5625 = 11562.5
Because the variance is now higher, intuitively Betty (with U(x) = √x) would accept a smaller amount with certainty to avoid this now riskier lottery. We can see this by finding the expected utility of this new lottery:
EU(x) = .25*√25 + .25*√150 + .25*√200 + .25*√325 ≈ 12.35
This may seem like only a small difference from the previous expected utility, but now calculate the certainty equivalent by solving 12.35 = √x, or x = 12.35^2 ≈ $153. So, indeed, Betty has a lower certainty equivalent for this riskier lottery.
5)
In this situation, we can calculate what you would be willing to pay for “fair”
insurance as follows:
Your wealth if you buy insurance and suffer no loss will be $100,000 - $1,400 = $98,600. You will have this same wealth if you buy insurance and suffer a loss, because the "full" insurance policy will repay you the entire $10,000 loss. So, you will have wealth of $98,600 if you buy insurance, no matter what happens.
If you do not buy insurance, you face the lottery of having $100,000 wealth with probability 80% versus $90,000 wealth with probability 20%. So, the expected utility of this lottery is just EU(w) = .20*√90,000 + .80*√100,000 ≈ 313. The utility provided by purchasing the insurance policy would be U(98,600) = √98,600 ≈ 314. So, you have higher expected utility from buying the policy, and you'll buy it.
An alternative way of figuring this out is to find out the maximum amount you
would be willing to pay for this insurance policy. In other words, what is your certainty
equivalent in this situation?
The level of (certain) wealth that would provide the same utility as the expected utility of the loss lottery solves 313 = √w, so w = 313^2 = $97,969. You would reach this wealth by paying out $100,000 - $97,969 = $2,031 for the policy. This is the maximum amount you would pay for the policy, but since the policy costs only $1,400, you'll buy it.
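The whole comparison can be reproduced in a short Python sketch (added for illustration):

    # Sketch: the insurance decision above, for U(w) = sqrt(w).
    from math import sqrt

    wealth, loss, premium, p_loss = 100_000, 10_000, 1_400, 0.20

    # Expected utility without insurance (face the loss lottery).
    eu_no_insurance = p_loss * sqrt(wealth - loss) + (1 - p_loss) * sqrt(wealth)
    # Utility with full insurance: wealth minus the premium, with certainty.
    u_insurance = sqrt(wealth - premium)
    print(eu_no_insurance, u_insurance)   # about 312.98 vs. about 314.01, so buy the policy

    # Maximum willingness to pay: the certainty equivalent of the loss lottery.
    ce = eu_no_insurance ** 2             # invert U(w) = sqrt(w)
    print(wealth - ce)                    # about 2,042 at full precision (rounding EU to 313
                                          # first gives the $2,031 reported above)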
6)
The expected value of the lottery (i.e., arbitration) is .5*1,000,000+.5*2,000,000,
or just $1,500,000. Because Nichole is risk neutral, she would not accept any
management wage offer that is less than $1,500,000.
The expected value of the lottery to Guido is .25*1,000,000 + .75*2,000,000, or $1,750,000. But because Guido is risk-averse, his expected utility from the lottery is EU(w) = .25*√1,000,000 + .75*√2,000,000 ≈ 1311. The negotiated settlement that would provide Guido the same level of utility solves 1311 = √x, so x = 1311^2 ≈ $1,718,721. So, though Guido will accept less than the expected value of the lottery in a negotiated settlement, his threat point (or reservation point, or BATNA, a term from management) is still higher than Nichole's threat point.
The lesson to be learned is not that bribing arbitrators or judges will give unions better wage contracts. The main point is that risk-aversion and optimism work in opposite directions. In this case, Guido's optimism outweighs his risk-aversion relative to Nichole (who thinks each outcome is equally likely and is risk-neutral). However, try the same problem with Guido being even more risk-averse, say U(w) = ∛w = w^(1/3), and you'll see Guido's threat point fall further (to roughly $1,706,000).
None of this, however, answers the question of who the union should choose as its
negotiator. A lower threat point makes a voluntary settlement more likely, but the
outcomes from the settlement are not as good. A higher threat point makes voluntary
settlement less likely, but if settlement does occur, it is a better settlement.
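A short Python sketch (added for illustration) reproduces both threat points and the cube-root variation; small differences from the dollar figures above come only from when the expected utility gets rounded.

    # Sketch: threat points in the arbitration problem above.
    nichole_ev = 0.5 * 1_000_000 + 0.5 * 2_000_000          # risk-neutral: $1,500,000

    def guido_threat_point(exponent):
        # Guido's beliefs: 25% chance of $1,000,000 and 75% chance of $2,000,000 at arbitration.
        eu = 0.25 * 1_000_000 ** exponent + 0.75 * 2_000_000 ** exponent
        return eu ** (1 / exponent)                          # certain wage giving the same utility

    print(nichole_ev)                   # 1,500,000
    print(guido_threat_point(1 / 2))    # about 1,717,800 (the $1,718,721 above rounds EU to 1311 first)
    print(guido_threat_point(1 / 3))    # about 1,706,200 for the more risk-averse U(w) = w^(1/3)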
[Graph: Guido's (the union's) utility U(w) = w^(1/2) over the wage package w. The expected utility .75*U(2,000,000) + .25*U(1,000,000) lies between U(1,000,000) and U(2,000,000), and the certain wage yielding that same utility is $1,718,721.]
GENERAL EQUILIBRIUM
1)
The two Edgeworth Box diagrams are below.
[Two Edgeworth Box diagrams, each with Person A's origin at the lower left and Person B's origin at the upper right. In the first, there are 10 units of x1 and 5 units of x2 available, so the box is 10 units wide (x1) and 5 units tall (x2). In the second, there are 5 units of x1 and 10 units of x2 available, so the box is 5 units wide and 10 units tall.]
2)
In this case, if Bob’s endowment is x1=6, x2=8, then Jamie’s endowment must be
the rest of the goods or x1=14, x2=2. This point is highlighted as E.
The indifference curves are also highlighted in the diagram below. Bob’s
preferences are increasing as we move northeast in the Edgeworth Box, but Jamie’s
preferences are increasing as we move southwest (turn your page upside-down to get the
usual view of Jamie’s indifference curves).
[Edgeworth Box, 20 units of x1 by 10 units of x2, with Bob's (Person A's) origin at the lower left and Jamie's (Person B's) origin at the upper right. The endowment point E lies at x1=6, x2=8 from Bob's origin (equivalently x1=14, x2=2 from Jamie's). Two of Bob's indifference curves (I1Bob, I2Bob) and two of Jamie's (I1Jamie, I2Jamie) are drawn, with Bob's utility increasing to the northeast and Jamie's to the southwest.]
3)
Bob's indifference curves will be linear (perfect substitute preferences) with slope equal to -1/2. Jamie's will be L-shaped (perfect complement preferences), with the kinks all occurring on the x2=(1/2)x1 line (from Jamie's perspective). The Pareto Improving Region, shaded in grey, includes all points in the box giving both Bob and Jamie higher utility than at E, but only the points on the contract curve are Pareto efficient. (We will not draw the contract curve on this graph; see the next problem. It is only a subset of the Pareto Improving Region, so not every Pareto improvement implies a Pareto efficient outcome.)
[Edgeworth Box, 20 units of x1 by 10 units of x2, with the endowment point E at x1=6, x2=8 from Bob's origin. Bob's indifference curve through E (IBob) is a straight line with slope -1/2; Jamie's (IJamie) is L-shaped, with its kink on the x2=(1/2)x1 line measured from Jamie's origin. The Pareto Improving Region is the shaded lens between the two curves.]
4)
Keep in mind the direction in which preferences increase for each individual, and the fact that Person A has indifference curves that kink on the x2=2x1 line (from A's perspective), while Person B in this case has linear indifference curves with slope equal to minus one. You can see that we can continue to exploit mutually beneficial trades from any point in the Edgeworth Box until we reach the points where Person B's linear indifference curves hit the kinks of Person A's L-shaped indifference curves (if they both had nice convex indifference curves, these would be the points where their indifference curves were tangent). The resulting contract curve is highlighted as the bold line. Note that when the contract curve hits the edge of the box, it continues along the edge to the Person B origin (consider that weakly part of the Pareto set, because along that segment it is impossible to make both people simultaneously better off).
[Edgeworth Box, 10 units of x1 by 10 units of x2, with Person A's origin at the lower left and Person B's at the upper right. Person A's L-shaped indifference curves (I1A, I2A, I3A) kink along the x2=2x1 line from A's origin; Person B's indifference curves (I1B, I2B, I3B) are straight lines with slope -1. The Contract Curve is the bold line along A's x2=2x1 kink line until it hits the edge of the box, then along the edge toward Person B's origin.]
5)
For Person A, income is found by calculating the value of the endowment, which for Person A is 2*$2 + 8*$4 = $36. For Person B, income based on the endowment is 8*$2 + 2*$4 = $24. Now, just proceed as normal to find, for Person A, x1A=9, x2A=4.5. For Person B, we can find x1B=6, x2B=3. So, at the current prices for x1 and x2 there is excess demand for x1 (total demand of 15 units versus the 10 units available). The price of x1 must therefore rise.
So, let's raise the price of x1 to p1=$4 (you could also try p1=$3, and you would find that there is still excess demand for x1), while keeping p2=$4; we have simply raised the relative price of x1. Now, the income for Person A is $40, and the income for Person B is $40. The optimal demands for Person A are x1A=5, x2A=5. For Person B, the optimal demands are x1B=5 and x2B=5. We are therefore in general equilibrium, as there is neither excess demand nor excess supply for either good. The equilibrium is shown on the graph below (along with the indifference curves that go through the original endowment point). If you want, you can draw a separate Edgeworth Box for each set of relative prices, so that multiple budget lines are not cluttering up the same Box.
[Edgeworth Box, 10 units of x1 by 10 units of x2, with Person A's origin at the lower left and Person B's at the upper right. The endowment point E is at x1=2, x2=8 from Person A's origin. The original budget line through E reflects the initial price ratio of 2/4; the steeper equilibrium budget line through E reflects the equilibrium price ratio of 4/4 = 1. The Contract Curve passes through the equilibrium allocation, where each person consumes x1=5 and x2=5.]
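If you want to verify the market-clearing arithmetic, here is a rough Python sketch. It assumes symmetric Cobb-Douglas preferences u(x1, x2) = x1*x2 for both people (an assumption that matches the demands reported above but is not restated in this answer key), so each person spends half of his or her endowment income on each good.

    # Sketch: excess demands at the two price vectors discussed above,
    # assuming Cobb-Douglas preferences u(x1, x2) = x1 * x2 for both people.
    endowments = {"A": (2, 8), "B": (8, 2)}   # (units of x1, units of x2)

    def excess_demand(p1, p2):
        total_x1, total_x2 = 0.0, 0.0
        for x1_e, x2_e in endowments.values():
            income = p1 * x1_e + p2 * x2_e
            total_x1 += 0.5 * income / p1     # Cobb-Douglas demand for x1
            total_x2 += 0.5 * income / p2     # Cobb-Douglas demand for x2
        return total_x1 - 10, total_x2 - 10   # 10 units of each good exist in total

    print(excess_demand(2, 4))   # (5.0, -2.5): excess demand for x1, excess supply of x2
    print(excess_demand(4, 4))   # (0.0, 0.0): both markets clear, so p1 = p2 = $4 is an equilibrium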
6) In this Production Edgeworth Box, the Technical Rate of Substitution of input x1 for x2 is greater for Firm A than for Firm B (i.e., Firm A's isoquant is steeper at point Z).
[Production Edgeworth Box with Firm A's origin at the lower left and Firm B's at the upper right. At the allocation Z, Firm A's isoquant QA is steeper than Firm B's isoquant QB.]
So, this means that the marginal product of input x1 relative to that of x2 (MP1/MP2) is greater for Firm A than for Firm B. Firm A is therefore getting more "production bang for the buck" out of input x1, so the economy can reallocate resources so that Firm A uses more x1 and less x2 (and Firm B does the opposite). This would move the firms into the Pareto improving region of the Edgeworth Box (relative to point Z).
7) The following is the graph of the economy. At point Z, the economy's MRT > MRS. So, the current opportunity cost of another unit of butter, in terms of cars not produced, is higher than the number of cars consumers would be willing to give up for another unit of butter. So, the economy can increase efficiency by producing less butter, ultimately moving to point M, where MRT = MRS. At point M, production trades off cars and butter at the same rate at which consumers are willing to trade them in their preferences.
[Graph: the production possibility frontier (PPF) for cars and butter, together with consumers' indifference curves. Point Z lies on the PPF where MRT > MRS; point M, with less butter and more cars, is where an indifference curve is tangent to the PPF (MRT = MRS).]
BEHAVIORAL AND EXPERIMENTAL ECONOMICS
1) In short, experimental economics uses controlled experiments to test economic theories (of behavior, of markets, etc.). Behavioral economics is a sub-field that incorporates insights from psychology into evaluating theories and into how experiments are designed. Behavioral economics is often identified with the belief that individuals are not perfectly rational and care about more than money (which is at odds with the prototypical "economic man," homo economicus).
2) In fact, individuals often propose significantly more than zero in this game, with average proposals being roughly 40% of the pie. Such offers are typically accepted, but offers of less than 30% of the pie start getting rejected quite frequently. This only makes sense if one considers that something other than money matters, at least to the responders. Proposers may offer more than the minimal amount if they are altruistic, but their positive offers may also stem from fear that the offer will be rejected, or from fear that the experimenter will think he/she is a jerk. A modification of the game called the "dictator game," in which the responder cannot reject the offer, leads to lower offers; and when it is administered in a double-blind fashion, so that not even the experimenter knows how much was offered, zero becomes a modal offer (though some still offer positive amounts).
3) Infinite rationality implies the ability to reason through an infinite number of "steps" of the Guessing Game process. The equilibrium guess is zero for all parameters p < 1 (e.g.,
the game described in this question is one where p=.5, since the target guess is ½ the
average guess). For example, suppose I assume others guess randomly, such that the
average guess would be 50. Then I should guess ½*50 = 25. But, if I think others are
thinking that same way (i.e., I think another step deeper in my reasoning), then the
average guess will likely be 25, so I should guess ½*25 = 12.5. But, if I think others
think two steps deep like that, then I should guess 6.25, etc…..all the way to zero. If,
however, individuals have a finite limit to how deep they can reason, then we expect to
see responses greater than zero (with clusters of responses at 25, 12.5, 6.25). A key
consideration is whether I submit a guess > 0 because I’m limited in rationality, or
perhaps I’m not but I think others are (how might you design an experiment to determine
this?)
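The iterated reasoning can be written out in a couple of lines of Python (added for illustration):

    # Sketch: iterated reasoning in the p = 1/2 Guessing Game.
    guess = 50.0                 # level-0 benchmark: players guess randomly, averaging 50
    for step in range(1, 9):
        guess *= 0.5             # each extra step of reasoning halves the target guess
        print(step, guess)       # 25.0, 12.5, 6.25, ... heading toward the equilibrium of 0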
4) These biases are optimism (or overconfidence), hyperbolic discounting, falling prey to
framing effects, failure to ignore sunk costs, and other-regarding preferences. [you
should be able to describe exactly what each of these biases is and give an example
of how these matter in the real world].
Retail outlets are "framing" the price of their goods so that you think you are getting a bargain (you can think of this as "anchoring" you to a high reference price, so that relative to that reference point you view the sale price as a gain).
5) Laboratory experiments give one more control over variables that may confound the interpretation of the key result. However, lab experiments are less "real world." Field experiments study individuals in naturally occurring decision settings and so are more real world, but the experimenter has less control over other variables that may potentially influence the key variable of interest. Ultimately, lab and field experiments should be viewed as complements to one another.