Web Supplement to Chapter 6
SEARCH THEORY AND THE WINNER'S CURSE
6S.1 The Search for High Wages and Low Prices
If all jobs paid the same and were in all other respects equally desirable, there would be
no reason to continue searching once you received your first job offer. But jobs are, of
course, not all the same. In particular, some make much fuller use of your particular mix
of talents, training, and skills than others. The better the requirements of a given job
match your inventory of personal characteristics, the more productive you will be and
the more your employer will be able to pay you. If you are a slick-fielding shortstop
who can hit 30 home runs and steal 30 bases each year, for example, you are worth a lot
more to the Toronto Blue Jays than to a local Tim Hortons.
Whatever your mix of skills, your problem is to find the right job for you. The first
thing to note is that your search cannot—indeed, should not—be exhaustive. After all,
there are more job openings in Canada at any moment than a single person could possibly hope to investigate. Even if it were somehow possible to investigate them all, such
a strategy would be so costly—in terms of both money and time—that it would surely
not be sensible.
For simplicity, suppose we abstract from all dimensions of job variation other than
wage earnings. That is, let us assume that there is a distribution of possible job vacancies to examine, each of which carries with it a different wage. Also for
simplicity, suppose that you are risk-neutral, that you plan to work for one period of
time, and that the per-period wage payments for the population of job vacancies are
uniformly distributed between $100 and $200, as shown in Figure 6S-1. This means
that if you examine a job vacancy at random, its wage payment is equally likely to
take any of the values between $100 and $200. Suppose, finally, that it costs $5 to
examine a job vacancy.
You have begun your search and the first job you examine pays $150. Should you
accept it, or pay $5 and look at another? (If you do examine another job and it turns out
to be worse than your current offer, you can still accept the current offer.) To decide
intelligently here, you must compare the cost of examining another offer with the
expected benefits. If there are to be any benefits at all, the new offer must be greater
than $150. The probability of that happening is 0.5 (in Figure 6S-2, the ratio of the area
of the shaded rectangle to the area of the total rectangle).

FIGURE 6S-1  A Hypothetical Uniform Wage Distribution. The wage paid by a randomly
selected new job offer is equally likely to take any of the values between $100 and $200
per time period. On the average, a new job offer will pay $150. (Horizontal axis: Wage,
$/period, from 100 to 200.)
Suppose the new offer does, in fact, exceed $150. What is its expected value? In
Figure 6S-2, note that since an offer greater than $150 is equally likely to fall anywhere
in the interval from $150 to $200, its average value will be $175, which is a gain of $25
over your current offer. So the expected gain from sampling another offer when you
have a $150 offer in hand—call it EG(150)—is the product of these two factors: (1) the
probability that the new offer exceeds the old one, and (2) the expected gain if it does.
So we have
EG(150) = (1/2)($25) = $12.50        (6S.1)
Since the expected gain of sampling another offer exceeds the $5 cost, you should continue searching.
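The same comparison can be written out in a few lines of code. The sketch below is our own illustration (not part of the original text); it computes the expected gain from examining one more offer for any wage in hand, assuming the Uniform($100, $200) wage distribution and $5 search cost used above, and the function name is ours.

```python
# Expected gain from examining one more job offer, given the offer now in hand,
# when wages are uniform on (low, high). Illustrative helper, not from the text.

def expected_gain(current_offer, low=100.0, high=200.0):
    """Probability of beating the current offer times the average improvement."""
    if current_offer >= high:
        return 0.0
    p_better = (high - current_offer) / (high - low)   # e.g., 0.5 for a $150 offer
    avg_gain_if_better = (high - current_offer) / 2    # e.g., $25 for a $150 offer
    return p_better * avg_gain_if_better

search_cost = 5.0
print(expected_gain(150))                  # 12.5, as in Equation 6S.1
print(expected_gain(150) > search_cost)    # True: keep searching
```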
How large should an offer be before you should accept it? The answer to this question
is called the acceptance wage, denoted w*. If you are risk-neutral, it is the wage for which
the expected monetary benefits of sampling another offer are exactly equal to the costs.
More generally, it would be the wage for which the expected utility gain from sampling
another offer is exactly offset by the loss in utility from the cost of search. The risk-neutral
case is much simpler to analyze, and still illuminates the most important issues.
If the current offer is w*, note in Figure 6S-3 that the probability of getting a better
one is (200 − w*)/100 (which is again the ratio of the area of the shaded rectangle to the
area of the total rectangle). Assuming the new offer does land between w* and 200, its
expected value will be halfway between the two, or (200 + w*)/2. This expected value
is (200 − w*)/2 units bigger than w*.
FIGURE 6S-2  The Expected Value of an Offer That Is Greater Than $150. The probability
that the next job offer will exceed $150 is 0.5. An offer that is known to exceed $150 is
equally likely to lie anywhere between $150 and $200. Its expected value will be $175.
(Horizontal axis: Wage, $/period, with 100, 150, 175, and 200 marked.)
FIGURE 6S-3  The Acceptance Wage. The acceptance wage, w*, is the wage level for which
the cost of an additional search exactly equals its expected benefit. The expected benefit
is the product of the probability that a new offer will pay more than w* [(200 − w*)/100]
and the average gain when it does so [(200 − w*)/2]. To find w*, set this product equal to
the cost of search (here $5) and solve for w*. (Horizontal axis: Wage, $/period, running
from 100 to 200; an offer above w* has an expected value of (200 + w*)/2.)
The expected gain of sampling another offer is again the probability that the new
offer exceeds w* times the expected wage increase if it does. So we have
EG(w*) = [(200 − w*)/100] × [(200 − w*)/2] = (200 − w*)²/200        (6S.2)
By the definition of the acceptance wage for a risk-neutral searcher, this expected gain is
equal to the cost of sampling another offer:
(200 − w*)²/200 = 5        (6S.3)

which reduces to

w* = 200 − √1000 ≈ 168.38        (6S.4)
In this example, then, the optimal decision rule will be to continue searching until
you find an offer at least as high as $168.38. When wages are uniformly distributed, as
here, you should end up, on the average, with a wage that is midway between $200 and
w*. Note, however, that following this rule does not necessarily mean you will always
do better than if you accept a current offer that is less than w*. If you are extremely
unlucky, for example, you might start off with an offer of $160 and then search 20 times
before finding an offer greater than w*. Your total earnings net of search costs could then
be at most $100, which is obviously worse than $160.
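The decision rule can also be checked by simulation. The sketch below is our own addition, written under the assumptions of this example (a Uniform($100, $200) wage distribution and a $5 cost per search); the function names are ours. It computes w* from Equations 6S.3 and 6S.4 and then follows the rule many times to report the average accepted wage and the average search cost.

```python
import math
import random

# Acceptance wage for a uniform wage distribution on (low, high) with search cost c:
# solve (high - w*)^2 / (2 * (high - low)) = c, as in Equations 6S.3 and 6S.4.
def acceptance_wage(low=100.0, high=200.0, c=5.0):
    return high - math.sqrt(2.0 * c * (high - low))

def search_once(w_star, low=100.0, high=200.0, c=5.0):
    """Sample offers until one is at least w_star; return (accepted wage, total search cost)."""
    cost = 0.0
    while True:
        cost += c                          # each vacancy examined costs c
        offer = random.uniform(low, high)
        if offer >= w_star:
            return offer, cost

w_star = acceptance_wage()                 # about 168.38
trials = [search_once(w_star) for _ in range(100_000)]
avg_wage = sum(w for w, _ in trials) / len(trials)
avg_cost = sum(k for _, k in trials) / len(trials)
print(round(w_star, 2), round(avg_wage, 2), round(avg_cost, 2))
# avg_wage should be close to (200 + w*)/2, roughly $184.19, as stated in the text
```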
EXERCISE 6S-1
If you are risk-neutral, the cost per search is $1, and wage offers are uniformly distributed
between 10 and 60, what is the smallest wage you should accept?
Analogous reasoning leads to a similar optimal decision rule for someone who is
searching for a low-priced product. As we see in Chapter 13, most firms employ a variety
of discount pricing methods, with the result that there generally is a relatively broad
distribution of prices in the markets for most products. Again for simplicity, assume a
price distribution that is uniform on the interval (0, P), as shown in Figure 6S-4.
FIGURE 6S-4  A Hypothetical Price Distribution. The price of a product selected randomly
from this distribution is equally likely to be any number between 0 and P. The probability
of finding a price below P* is P*/P, which is simply the fraction of all prices that lie
below P*. A price that is known to be below P* has an average value of P*/2. (Horizontal
axis: Price, $, with 0, P*/2, P*, and P marked.)
The acceptance price, P*, is determined in much the same way as the acceptance
wage from the job search case. If the price of the product you have just sampled is P*,
the probability of getting a lower price on your next try is P*/P (as before, the ratio of
the area of the shaded rectangle to the area of the total rectangle). If you do find a lower
price, your savings, on the average, will be P*/2. Your expected gain from another
search at P* is therefore
EG(P*) = (P*/P) × (P*/2) = (P*)²/(2P)        (6S.5)
If the cost of another search is C, the expression for the acceptance price will be
P* = √(2PC)        (6S.6)
In both the wage and price-search cases, the acceptance levels depend on the cost of
search: when the cost of examining an additional option rises, your acceptance wage
falls, and your acceptance price rises. In the price-search case, the relationship between
P* and C (as given in Equation 6S.6) is shown in Figure 6S-5.
FIGURE 6S-5  The Acceptance Price as a Function of the Cost of Search. When price is
uniformly distributed over the range (0, P), as the cost per search (C) rises, it pays to
settle for higher-priced products. As the diagram shows, when C reaches P/2, it doesn't
pay to search at all: just buy the first of the items you see. (The curve plots
P* = √(2CP), with C on the horizontal axis from 0 to P/2 and P* on the vertical axis
rising to P.)
EXAMPLE 6S-1
Suppose you are searching for a low price on a price distribution that is uniform on the
interval ($1, $2). How will your acceptance price change if the cost of search rises
from $0.05 to $0.10 per search?
The expression for the acceptance price given in Equation 6S.6 is for a price distribution
that is uniform on the interval (0, P). Note that the minimum value of the price distribution
here is not 0 but $1/unit. The acceptance price for this price distribution will be exactly
$1 higher than the acceptance price for a uniform price distribution on the interval (0, 1).
With a cost of $0.05/search we see from Equation 6S.6 that the latter acceptance price will
be √0.10 ≈ 0.316, which means that the acceptance price for the uniform price distribution
on (1, 2) will be about $1.32. With a cost of $0.10/search, the acceptance price for the
uniform distribution on (0, 1) rises to √0.20 ≈ 0.447, so the new acceptance price for
the uniform distribution on (1, 2) will be about $1.45.
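The same calculation can be written compactly. The short sketch below is ours (not from the text); it applies Equation 6S.6 to a uniform price distribution whose minimum may differ from zero, which is how Example 6S-1 handles the ($1, $2) interval.

```python
import math

# Acceptance price for prices uniform on (p_min, p_min + width) with search cost c.
# Equation 6S.6 gives sqrt(2 * width * c) for the (0, width) case; shifting the whole
# distribution up by p_min shifts the acceptance price by the same amount.
def acceptance_price(c, p_min=0.0, width=1.0):
    return p_min + math.sqrt(2.0 * width * c)

print(round(acceptance_price(0.05, p_min=1.0), 2))   # about 1.32
print(round(acceptance_price(0.10, p_min=1.0), 2))   # about 1.45
```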
Note the essential similarity between the wage and price-search problems and the
toad’s decision considered at the beginning of Chapter 6. Like the rational wage or
price-searcher, a rational toad would weigh the costs of additional search against the
expected benefits. He would arrive at an “acceptance pitch,” such that if his rival’s
croak was pitched any higher, he would stay and fight.
6S.2 The Winner’s Curse
Some years back, Max Bazerman and William Samuelson performed the following
experiment in their microeconomics classes.1 First they placed $8 worth of coins in a
clear glass jar. After giving their students a chance to examine the jar carefully, they then
auctioned it off—coins and all—to the highest bidder. They also asked each student to
submit a written estimate of the value of the coins in the jar.
On the average, students behaved conservatively, both with respect to their bids and
to the estimates they submitted. Indeed, the average estimate was only $5.13, about one-third less than the actual value of the coins. Similarly, most students dropped out of the
bidding well before the auction price reached $8.
Yet the size of the winning bid in any auction depends not on the behaviour of the
average bidder, but on the behaviour of the highest bidder. In 48 repetitions of this experiment, the top bid averaged $10.01, more than 20 percent more than the coins were
worth. So Bazerman and Samuelson made almost $100 profit at the collective expense
of their winning bidders. At that price, the winners may consider it an important lesson
learned very cheaply.
Governments in Canada and elsewhere have used auctions in the allocation of rights
to utilize public natural resources, such as oil leases and logging rights, and interest in
such auctions has grown in the last few decades. Auctions have been used or considered
for the allocation not only of traditional natural resource-based rights, but also of municipal taxi licences, the right to erect billboards, and (in Europe) even pollution-emission
rights. Some of the most dramatic auctions were the multibillion dollar auctions held in
Europe in 2000 of cellphone bandwidth frequencies. Rights to use chunks of the electromagnetic spectrum itself (a scarce public natural resource) were auctioned off, with
principles from game theory guiding the design of the sequential-auction method!
1. For a more detailed account of this experiment, see David Warsh, "The Winner's Curse," The Boston Globe, April 17, 1988.
Auctions have been used to allocate such rights because under conditions of competition and perfect information, they provide an efficient mechanism for appropriating
the economic rents or quasi-rents for the public resource owners or rights holders.
Under conditions of uncertainty and imperfect information, however, it is possible (just
as in the coins auction example) that the winning bidders may pay more than the rights
are worth.
The general principle that the winning bid for an item often exceeds its true value is
known as the winner’s curse. The startling thing about the winner’s curse is that it does
not require anyone to use a biased estimate of the value of the prize. The problem is that
all estimates involve at least some element of randomness. An estimate is said to be
unbiased if, on the average, it is equal to the true value. Temperature forecasts, for
instance, are unbiased; they are too high some days, too low others, but their long-run
average values track actual temperatures almost perfectly. By the same token, even if
each bidder’s estimate is unbiased, it will on some occasions be too high, on others too
low. And the winner of the auction will of course be the bidder whose estimate happens
to be too high by the greatest margin.
A fully rational bidder will take into account the fact that the winning bid tends to be
too high. Referring again to the coin auction, suppose that someone’s best estimate of
the value of the coins in the jar is $9. Knowing that the winning bid will tend to be too
high, he can then protect himself by adjusting the amount he bids downward. If other
bidders are fully rational, they too will perform downward revisions of their bids, and
the identity of the winning bidder should be the same as before.
How big should the downward adjustment be? A moment’s reflection makes it clear
that the more bidders there are, the larger the adjustment should be. Suppose the true
value of the good being auctioned is $1000. To illustrate what is meant by an unbiased
estimate, imagine that each potential bidder draws a ball from an urn containing 201
balls consecutively numbered from 900 to 1100. (Once its number is inspected, each ball
is returned to the urn.) The expected value of the number on any given ball is 1000, no
matter how many bidders there are. But the expected value of the highest number drawn
will increase with the number of bidders. (If a million people drew balls from such urns,
it is almost certain that someone would get 1100, the maximum possible value; but if
only five people drew, the highest number would be much smaller, on the average.)
Accordingly, the more bidders there are, the more you should adjust your estimate
downward.
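A quick simulation makes the point about the number of bidders concrete. The sketch below is an illustration we have added, using the 900-to-1100 urn described above; it draws N balls with replacement and reports the average highest draw for several values of N.

```python
import random

# Each bidder's estimate is an unbiased draw from the integers 900..1100
# (true value 1000). The highest of N such draws is biased upward, and the
# bias grows with N.
def average_highest(n_bidders, trials=20_000):
    total = 0
    for _ in range(trials):
        total += max(random.randint(900, 1100) for _ in range(n_bidders))
    return total / trials

for n in (1, 5, 20, 100):
    print(n, round(average_highest(n), 1))
# Output is roughly 1000, 1067, 1091, 1098: the more bidders, the higher the
# expected winning estimate, even though each individual estimate is unbiased.
```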
To illustrate the mechanics of the adjustment process, consider an auction in which
the true value of the item for sale is 0.5, and in which each bidder uses an estimate that
is uniformly distributed between 0 and 1. This estimate is equally likely to take any
value between 0 and 1, so it has an expected value of 0.5, which means it is unbiased
(see Figure 6S-6).
If there were only one bidder at this auction, the expected value of the highest estimate
would be 0.5, and so there would be no need for him to make an adjustment. If there were
N bidders, however, what would be the expected value of the highest estimate? Suppose we
put the N estimates in ascending order and call them X1, X2, . . . , XN, so that XN denotes
the highest of the N estimates. Since the estimates all come from a uniform distribution,
the expected values of X1, X2, . . . , XN will be evenly spaced along the interval. As noted,
when N = 1 we have only X1, and its expected value, 0.5, lies right in the middle of the
interval.
What about when N = 2? Now our two estimates are X1 and X2, and their expected
values are as shown in the second panel of Figure 6S-7. This time the highest estimate
has an expected value of 2/3. In the third panel, note that the highest of three estimates
has an expected value of 3/4. In the bottom panel, finally, the highest of four estimates
has an expected value of 4/5. In general, the highest of N estimates will have an
expected value of N/(N + 1) (see footnote 2).

FIGURE 6S-6  An Unbiased Estimate with a Uniform Distribution. Each potential bidder has
an estimate of the value of the resource that is equally likely to take any value between
0 and 1. The true value of the resource is the average value of these estimates, which is
0.5. (Horizontal axis: 0 to 1, with 0.5 marked.)

FIGURE 6S-7  The Expected Value of the Highest Estimate, N = 1, 2, 3, and 4. When more
people make estimates of the value of a resource, the expected value of the highest
estimate increases. When the estimates are uniformly distributed on the interval (0, 1),
the largest of N estimates (XN) has an average value of N/(N + 1). (Four panels, for N = 1
through N = 4, each showing the expected positions of the ordered estimates X1, . . . , XN
on the interval from 0 to 1.)
If the estimates were all drawn from a uniform distribution on (0, C), not on (0, 1), the
expected value of the highest estimate would be CN/(N + 1).
2. The random variables X1, . . . , XN are known as the order statistics of a sample of size N. To find the
expected value of XN, note first that XN is less than any value z if and only if each of the N sampled values
is less than z. For the uniform distribution on (0, 1), the probability of that event is simply z^N. This is the
cumulative distribution function of XN. The probability density function of XN is therefore d(z^N)/dz, or
Nz^(N−1). So the expected value of XN is given by

∫₀¹ z · Nz^(N−1) dz = N/(N + 1)
EXAMPLE 6S-2
Suppose 50 people are bidding for an antique clock and each has an unbiased estimate of
the true value of the clock that is drawn from a uniform distribution on the interval (0, C ),
where C is unknown. Your own estimate of the value of the clock is $400. How much
should you bid?
Your problem is to adjust your estimate so that if it happens to be the highest of the 50, you
will not bid more, on the average, than the true value of the clock, which is C/2. The expected
value of the highest of 50 estimates is (50/51)C, so if your estimate of 400 happens to be the
highest, we have, on the average,
(50/51)C = 400        (6S.7)
which solves for C = 408. This is your adjusted estimate of the value of C on the assumption
that yours was the highest of the 50 unadjusted estimates. Since the true value of the clock is
C/2, you should therefore bid only $204.
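The adjustment in Example 6S-2 generalizes directly: with N bidders and your own estimate x, assume x is the highest of the N, infer C ≈ x(N + 1)/N, and bid half of that. The short sketch below is ours, written under the example's assumptions (unbiased estimates uniform on (0, C), C unknown).

```python
# Winner's-curse adjustment when estimates are uniform on (0, C) with C unknown.
# If your estimate x were the highest of n estimates, then on average
# x = n * C / (n + 1), so the implied C is x * (n + 1) / n and the bid is C / 2.
def adjusted_bid(x, n_bidders):
    implied_c = x * (n_bidders + 1) / n_bidders
    return implied_c / 2

print(adjusted_bid(400, 50))   # 204.0, as in Example 6S-2
print(adjusted_bid(400, 4))    # the case posed in Exercise 6S-2
```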
EXERCISE 6S-2
If there had been only four bidders in Example 6S-2, how much should you have bid?
As a practical matter, do bidders really adjust their behaviour to eliminate the winner’s curse? An evolutionary argument can be made to the effect that there will be an
automatic tendency for this to happen. The idea is that bidders who fail to adjust their
estimates downward will end up losing money each time they win an auction, and will
eventually go bankrupt. Those who survive may not understand the winner’s curse at
all. They may simply be people who happen for other reasons to bid cautiously.
When extremely large sums of money are at stake and bankruptcy is a real possibility, the evolutionary argument has some force. Yet in its simplest form, the argument is
too simple. If the losses incurred by “winning” a single auction are not devastatingly
high relative to the wealth of the winner, then it will require a series of cursed wins to
drive the winner into bankruptcy. For this to occur, however, it is not sufficient that our
winner be overly optimistic in every auction: he must be consistently and systematically
the most overoptimistic of all bidders. Otherwise, he won’t win the other auctions, and
instead, other “successful” bidders will incur the winner’s curse. If the “wins” are
spread around, then the relative positions of the competitors will not be altered significantly, which is the critical factor in an evolutionary context.
Given that he has won the auction and paid more than the true value (or equivalently, that he has submitted an excessively low bid on a tender to provide, say, municipal
garbage collection services), let’s take the analysis a step further. He may not actually
lose money, provided that he works longer and harder than he had planned, to compensate for his excessive optimism. His utility will be lower, but his economic survival will
not be at stake. Moreover, in the process he should increase his stock of knowledge in
two ways. He will now have more accurate information on actual costs of garbage collection than his competitors. In addition, “learning by doing” and the need to innovate
in response to cost pressures can result in his developing more cost-efficient methods of
garbage collection, which enable him to underbid his competitors and make a profit
when the collection contract is next let out to tender.
In this extended evolutionary model, it is not just that if he fails to adjust his estimates
of the costs of garbage removal upward he will “become extinct” (go bankrupt). Rather,
the process of fulfilling the contract itself gives him the knowledge to make a more
realistic estimate next time, as well as the opportunity to develop cost-efficient systems
that will make his bid more competitive. (The fact that in this auction he was a bad estimator doesn’t imply that he’s a poor manager in other respects.) In this version, which
allows for the possibility of learning, “Nothing succeeds like failure.” In the long run (if
our winner survives into the long run), the winner’s curse can be a blessing in disguise.
There are two further problems with the simple evolutionary argument. First, it
doesn’t address the question of how those who lost the auction (because they bid more
realistically) survive. If they have other sources of income, then our winner is surely
entitled to have other sources of income as well, and the profits from his profitable
activities can offset the losses resulting from the winner’s curse. This, incidentally, is
one further argument for diversification and spreading your risks.
Second, aided by the coins auction example, we slipped in several assumptions that
you may have already noticed: first, that there is such a thing as the “true” value of an
item at auction. For something like coins (assuming that there are no rare ones in the
jar), the “true” value is unambiguous: money is money. But consider Harry, whose taste
is all in his mouth, and who buys an absolutely hideous statue at an auction. He was
willing to pay $50 for it. He got it for only $7.50. His wife Madge says, “But, Harry, it’s
ugly!” “Maybe, but what a bargain!” For Harry, part of the utility he derives from auctions comes from the pleasure of getting a “bargain.” If Madge has her way, the statue
may turn up at next year’s yard sale, but in the meantime Harry is content: he “got his
money’s worth.”
Now consider Louise, who wants a picture with lots of sky blue in it for her study,
and finds one at a country auction (a bit dusty, and in an old frame, but with just the
shade of blue she’s after), which she gets for $40. The “true” value for her was what she
paid for it, because it complements her room. Actually, that’s not the “true” value, either,
because in dusting it off, she notices the signature “A. Y. Jackson” in the corner. She now
has a lost original Group of Seven painting, the market value of which turns out to be
$80,000 when she resells it at an auction a few months later, because she has decided to
repaint her study.
These examples suggest four final points about auctions and the winner’s curse.
First, for a unique item there is no issue of whether it could be purchased more cheaply
elsewhere: it can’t. Therefore, the only test my bid has to pass is whether the item is
worth that much to me. Second, asymmetries in information among the bidders can
alter the outcome of the auction. If a knowledgeable art dealer had happened to be at
Louise’s auction, Louise would never have won the auction. The art dealer would have
got the painting, for whatever bid it took to induce Louise to drop out of the bidding.
Third, in an open auction with possible informational asymmetries, the bidding process itself can generate information about the value of the item for less well-informed
bidders. So an auction has a game-theoretic dimension, with some bidders using early
bids to set up a low value for the final successful bid, known art collectors and dealers
using agents to bid on their behalf, and unscrupulous auctioneers using confederates or
“stooges” to bid up the price to artificially high levels. Fourth, while the specific formulas we have developed depend on the assumptions made (such as the uniform
distribution and risk neutrality assumptions), the principle of the winner’s curse is applicable to any situation in which there is a distribution of estimates around a true value.
EXAMPLE 6S-3
Suppose you are the economic adviser for a firm that is trying to decide whether to
acquire the Bumbler Oil Company, whose only asset is an oil field that has a net value
X under its current management. The owners of Bumbler know the exact value of X,
but your company knows only that X is a random number that is uniformly distributed
between 0 and 100. Because of your company’s superior management, Bumbler’s oil
field would be worth 1.5X in its hands. What is the highest value of P that you can bid
and still expect not to take a loss if your company acquires Bumbler Oil?
If your company bids P and X is greater than P, then Bumbler will refuse the offer and the
deal is off. Your company earns zero profit in that case. If Bumbler accepts your company’s
offer, then we know that X ≤ P. Since X is uniformly distributed between 0 and 100, then for
any value of P you pick, the expected value of X for X ≤ P is P/2. This means that the expected
value of Bumbler's oil field to your company is 1.5 times that, or 0.75P. If your company bids
P and Bumbler accepts, then the expected profit of your company will be 0.75P − P =
−0.25P. So for any positive P, your company expects to lose 0.25P if Bumbler accepts its
offer. Your company’s best strategy is therefore not to bid at all.
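A simulation confirms the logic of Example 6S-3. The sketch below is our own addition, written under the example's assumptions (X uniform on (0, 100), the field worth 1.5X to the acquirer, and Bumbler accepting only bids that cover X); the function name is ours.

```python
import random

# Acquirer's-curse simulation for Example 6S-3: X is uniform on (0, 100),
# the field is worth 1.5 * X to the acquirer, and Bumbler accepts a bid P
# only when P covers X.
def average_profit(bid, trials=200_000):
    total = 0.0
    for _ in range(trials):
        x = random.uniform(0, 100)
        if bid >= x:                      # offer accepted
            total += 1.5 * x - bid        # value to acquirer minus price paid
        # rejected offers contribute zero profit
    return total / trials

for bid in (20, 50, 80):
    print(bid, round(average_profit(bid), 2))
# Each average is negative: roughly -0.25 * bid * bid / 100 per auction, which is
# -0.25 * bid conditional on acceptance, so the best strategy is not to bid at all.
```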
6S.3 Some Pitfalls for the Expected Utility Maximizer
The expected utility model offers guidance about how to choose rationally in the face of
uncertainty. Yet some experiments on decision making in the face of risk have generated
apparently paradoxical results when judged by the expected-utility maximization criterion. One well-known example, based on the work of the French economist M. Allais,
suggests that most people behave inconsistently with respect to certain kinds of choices.
To illustrate, first consider the following pair of alternatives:

A: A sure win of $30
vs.
A′: An 80 percent chance to win $45.

Confronted with these alternatives, most people choose A, the sure win.3 If a person is
risk-averse, there is nothing surprising about this choice, even though the expected
value of alternative A′ is $36.

3. See Amos Tversky and Daniel Kahneman, "The Framing of Decisions and the Psychology of Choice,"
Science, 211, 1981: pp. 453–458.
Now consider the following pair of alternatives:
B: A 25 percent chance to win $30
vs.
B′: A 20 percent chance to win $45.

This time most people choose the less certain alternative, namely, B′. Taken in isolation,
this choice is also unsurprising, for the expected value of B ($7.50) is significantly lower
than that of B′ ($9), and both alternatives involve some risk. The problem is that the
most popular pair of choices (A and B′), taken together, contradicts the assumption of
expected utility maximization. To see why, suppose the chooser is a utility maximizer
with a utility function U(M) and an initial wealth level of M0. His choice of A over A′
then implies that
U(M0 + 30) > 0.8U(M0 + 45) + 0.2U(M0)        (6S.8)
In turn, his choice of B′ over B implies that
0.2U(M0 + 45) + 0.8U(M0) > 0.25U(M0 + 30) + 0.75U(M0)        (6S.9)
Rearranging the terms of inequality 6S.9, we have
0.25U(M0 + 30) < 0.2U(M0 + 45) + 0.05U(M0)        (6S.10)
Dividing both sides of inequality 6S.10 by 0.25, finally, we have
U(M0 + 30) < 0.8U(M0 + 45) + 0.2U(M0)        (6S.11)
which is precisely the reverse of inequality 6S.8, the one implied by the choice of A over
A′. Hence the contradiction.
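The contradiction can also be verified numerically. The sketch below is our own check, not part of the text: it draws many random increasing utility values for U(M0), U(M0 + 30), and U(M0 + 45) and confirms that whenever inequality 6S.8 holds, inequality 6S.9 fails, so no expected utility maximizer can choose both A and B′.

```python
import random

# For arbitrary increasing utility values u0 = U(M0), u30 = U(M0 + 30),
# u45 = U(M0 + 45), check that preferring A to A' (inequality 6S.8) is
# incompatible with preferring B' to B (inequality 6S.9).
conflicts = 0
for _ in range(100_000):
    u0, u30, u45 = sorted(random.random() for _ in range(3))
    prefers_a = u30 > 0.8 * u45 + 0.2 * u0                           # 6S.8
    prefers_b_prime = 0.2 * u45 + 0.8 * u0 > 0.25 * u30 + 0.75 * u0  # 6S.9
    if prefers_a and prefers_b_prime:
        conflicts += 1

print(conflicts)   # 0: the two preferences are never simultaneously consistent
```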
Psychologists Daniel Kahneman and Amos Tversky have labelled this kind of inconsistency the “certainty effect.” As a purely descriptive matter, they argue that “a
reduction in the probability of an outcome by a constant factor has a larger impact when
the outcome was initially certain than when it was merely probable.”4 So in the first pair
of alternatives, the movement from A to A′ represented a 20 percent reduction in the
chances of winning (from 100 to 80 percent), the same as the reduction when moving
from B to B′ (25 to 20 percent). But because the first reduction was from an initially
certain outcome, it was much less attractive.

4. Amos Tversky and Daniel Kahneman, "The Framing of Decisions and the Psychology of Choice," 1981, p. 456.
Note that Kahneman and Tversky are not saying that there is anything irrational
about liking a sure thing. Their point is simply that our choices in situations where both
alternatives are risky seem to imply a lesser degree of risk aversion than does our
behaviour in situations where one of the alternatives is risk-free.
Part of the attraction of the sure alternative is the regret many people expect to feel
when they take a gamble and lose. The expected utility maximizer will want to be careful, however, to avoid the “bad-outcome-implies-bad-decision” fallacy. To illustrate this
fallacy, suppose someone offers you the following gamble: You are to draw a single ball
from an urn containing 999 white balls and one red ball. If you draw a white ball, as you
most probably will, you win $1000. If you draw the lone red ball, however, you lose $1.
Suppose you accept the gamble and then draw the red ball. You lose $1. Do you now
say you made a bad decision? If you do, you commit the fallacy. The decision, when you
made it, was obviously a good one. Almost every rational person would have decided
in the same way. The fact that you lost is too bad, but it tells you nothing about the quality of your decision. By the same token, if you choose an 80 percent chance to win $45
rather than a sure win of $30, there is no reason to regret the quality of your decision if
you happen to lose.
As a general rule, humans typically prefer certainty to risk. At the same time, however, risk is an inescapable part of the environment. People generally want the largest
possible gain and the smallest possible risk, but most of the time we are forced to trade
off risk and gain against one another. When choosing between two risky alternatives,
we are forced to recognize this tradeoff explicitly. In such cases, we cannot escape the
cognitive effort required to reach a sensible decision. But when one of the alternatives is
riskless, it is often easier simply to choose it and not waste too much effort on the decision. What this pattern of behaviour fails to recognize, however, is that choosing a sure
win of $30 over an 80 percent chance to win $45 does precious little to reduce any of the
uncertainty that really matters in life.
On the contrary, when only small sums of money are at stake, a compelling case can
be made that the only sensible strategy is to choose the alternative with the highest
expected value. The argument for this strategy, like the argument for buying insurance,
rests on the law of large numbers. Here, the law tells us that if we take a large number
of independent gambles and pool them, we can be very confident of getting almost
exactly the sum of their expected values. As a decision maker, the trick is to remind
yourself that each small risky choice is simply part of a much larger collection. After all,
it takes the sting out of an occasional small loss to know that following any other strategy would have led to a virtually certain large loss.
To illustrate, consider again the choice between the sure gain of $30 and the 80 percent
chance to win $45, and suppose you were confronted with the equivalent of one such
choice each week. Recall that the gamble has an expected value of $36, $6 more than the
sure thing. By always choosing the “risky” alternative, your expected gain—over and
beyond the gain from the sure alternative—will be $312 each year. Students who have
had an introductory course in probability can easily show that the probability you would
have come out better by choosing the sure alternative in any year is less than 1 percent.
The long-run opportunity cost of following a risk-averse strategy for repeated decisions involving
identical small outcomes is an almost sure loss of considerable magnitude. By thinking of your
problem as that of choosing a policy for dealing with a large number of choices of the
same type, a seemingly risky strategy is transformed into an obviously very safe one.
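The "less than 1 percent" claim made above can be checked directly with the binomial distribution. The sketch below is our calculation, assuming 52 independent weekly choices: always taking the gamble beats 52 sure $30 payments unless 34 or fewer of the 52 gambles pay off.

```python
from math import comb

# Each week: sure $30 versus an 80 percent chance of $45. Over 52 weeks the sure
# strategy yields $1560; the gambling strategy yields $45 per win. The sure
# strategy comes out ahead only if 45 * wins < 1560, i.e. wins <= 34.
n, p = 52, 0.8
p_sure_wins = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(35))
print(round(p_sure_wins, 4))   # well under 0.01, as claimed in the text
```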
PROBLEMS
www.mcgrawhill.ca/olc/frank
6S1. You are searching for a high wage from a wage distribution that is uniform on the interval
($5, $8). The cost of each search is $0.06. What is the smallest wage you should accept?
6S2. A class of 100 students is participating in an auction to see who gets a large jar of quarters.
Each student has an unbiased estimate of the total value of the coins. If these estimates are
drawn from the interval (0, C), where C is not known, and your own estimate is $50, how
much should you bid?
**6S3. Suppose you are the economic adviser for a firm that is trying to decide whether to acquire
the Bumbler Oil Company, whose only asset is an oil field that has a net value X under its
current management. The owners of Bumbler know the exact value of X but your company
knows only that X is a random number that is uniformly distributed between 0 and 100.
Because of your company's superior management, Bumbler's oil field would be worth X + 40
in its hands.
a. What is the most your company can bid and not expect to take a loss?
b. Assuming your company is the only bidder, what bid maximizes your company’s
expected profits?
6S4. If the wages you are offered are uniformly distributed between $75 and $150, and if the cost
of looking for another job is $2, what is the minimum wage you should consider?
6S5. The Earthly Bliss dating service charges $100 per date that it arranges. All of its dates will
accept an offer of marriage. In your estimation, the quality of the potential spouses offered
by the dating service can be measured by an index that runs from 0 to 100. The potential
spouses are uniformly distributed over this range. Suppose you value a spouse at $50 per
index point. If your dates were drawn at random from the Earthly Bliss pool, at what value
of the index would you stop searching?
6S6. You are a buyer for a used-car dealer. You attend car auctions and bid on cars that will be sold
at your dealership. The cars are sold “as is” and there is seldom an opportunity to make a
thorough inspection. Under these conditions, the lower bound for the value of a car can be
zero. A 2000 Dodge Neon has been offered at the auction. You are one of 20 bidders. Your
estimate of the value of the car is $200. If all the bidders have unbiased estimates drawn from
a uniform distribution with an unknown upper bound, what should you bid for the car?
**Problems marked with two asterisks (**) require calculus for their solution.
ANSWERS TO IN-SUPPLEMENT EXERCISES www.mcgrawhill.ca/olc/frank
6S-1. Let w* again denote the acceptance wage. The probability of finding a higher wage is
(60 − w*)/50. The average gain, given that you do find a higher wage, is (60 − w*)/2. So the
expected gain is the product of these, (60 − w*)²/100. Equating this to the cost per search, 1,
and solving for w*, we have w* = $50. (On the wage axis running from 10 to 60, the
acceptance wage lies at w* = 50.)
6S-2. With four bidders, the expected value of the largest estimate is (4/5)C. Equating this to 400
and solving, we get C = 500, so you should bid $250.