Japanese Psychological Research
2000, Volume 42, No. 2, 128–133
Short Report
Is the “hot-hands” phenomenon a misperception
of random events?
HIROTO MIYOSHI1
1664 Fujino, Wake-chou, Wake-gwun, Okayama 709-0412, Japan
Abstract: T. Gilovich, R. Vallone, and A. Tversky (1985) asked whether the so-called hot-hands
phenomenon – a temporary elevation of the probability of successful shots – actually exists in
basketball. They concluded that hot-hands are misperceived random events. This paper reexamines the truth of their conclusion. The present study’s main concern was the sensitivity
of the statistical tests used in Gilovich et al.’s research. Simulated records of shots over a
season were used. These represented many different situations and players, but they always
contained at least one hot-hand period. The issue was whether Gilovich et al.’s tests were
sensitive enough to detect the hot-hands embedded in the records. The study found that this
sensitivity depends on the frequency of hot-hand periods, the total number of shots in all hot-hand periods, the number of shots in each hot-hand period, and the size of the increase in
the probability of successful shots in hot-hand periods. However, when the values of those
variables were set realistically, on average the tests could detect only about 12% of the hot-hands phenomena.
Key words: hot-hands phenomenon, simulation, random event.
This paper examines the so-called “hot-hands”
phenomenon – a temporary elevation of the
probability of a particular player making
successful shots in basketball. Many fans,
players, and coaches believe in hot-hands, but
it could be just another example of a misperceived random event (Tversky & Kahneman,
1982).
Gilovich, Vallone, and Tversky (1985) examined whether hot-hands actually exist. Their
research comprised two parts. In the first, they
demonstrated that basketball players as well as
their fans strongly believe that hot-hands exist.
The second part consisted of three studies,
in which they sought to prove the existence of
hot-hands using three different types of empirical data: the seasonal statistics of professional basketball players; the professional
basketball free-throw data; and the data from a
controlled shooting experiment conducted
with varsity players. They used three different
types of statistics to analyse the data: the
proportion of successful shots, conditioned by
the success or failure of the previous shot(s);
the number of runs in the data; and the number
of successful, moderately successful, and less
successful series of consecutive shots, in blocks
of four. These statistics were compared with
the values probabilistically expected from the
player’s seasonal record. In Gilovich, Vallone,
and Tversky’s (1985) three studies, none of
these three statistical tests could reliably detect
hot-hands. Thus, they concluded that the belief
in hot-hands is another example of misperceived random events.
Although this research has been considered
clear and scientific evidence of how human
beings misperceive random sequences, there
1 Correspondence should be sent to Hiroto Miyoshi, 1664 Fujino, Wake-chou, Wake-gwun, Okayama 709-0412, Japan, or [email protected]
© 2000 Japanese Psychological Association. Published by Blackwell Publishers Ltd, 108 Cowley Road,
Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.
are still many people who believe in hot-hands
(Stacy & MacMillan, 1995). How may the
two camps be reconciled? Is it possible that
Gilovich et al.’s (1985) findings are valid but
that hot-hands still exist? To answer the question, one must consider two points: the factor
of human interactions in games; and the power
of the statistical tests used in their analyses.
Regarding the first, one may ask whether
players act differently when they have hot-hands. For example, they may attempt more
difficult shots. This change in behavior may
make it difficult for hot-hands to be statistically
detected. The other possibility is that the statistical power of their analyses may have been
insufficient. This study re-examines Gilovich
et al.’s (1985) conclusions for this second
possibility, using computer simulations.
Simulations
In this study, simulated records of shots were
created. Each shot was a Bernoulli trial and the
probability of successful shots was manipulated
to produce sequences of hot-hands shots.
The study focused on whether the tests used by
Gilovich et al. (1985) could detect the
hot-hands.
Because the effectiveness of the tests may
depend on several factors, such as the number
of hot-hand periods in a season, and the
number of shots in a hot-hand period, 120
different scenarios were considered separately.
Two hundred records were created for each of
120 different scenarios, and the tests were applied to each. The probability of the successful
detection of hot-hands was estimated for each
scenario. If the probability is high enough,
the test may reliably detect the hot-hands. The
criterion of successful detection was the same
as Gilovich et al.’s (1985), that is, statistical
(two-tailed) significance at the 5% level.
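The per-scenario estimation procedure can be sketched as follows. The function and parameter names (`simulate_record`, `p_value`) are illustrative placeholders for a scenario's record generator and for one of Gilovich et al.'s tests, not the author's code:

```python
def estimate_detection_rate(simulate_record, p_value, n_records=200, alpha=0.05):
    """Estimate how often a test detects hot-hands for one scenario.

    A detection counts as successful when the two-tailed test is
    significant at the 5% level, the same criterion as Gilovich
    et al. (1985). 200 records per scenario, as described above.
    """
    detected = sum(p_value(simulate_record()) < alpha for _ in range(n_records))
    return detected / n_records
```

The estimated detection rate is simply the fraction of the 200 simulated records in which the test reached significance.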
The 120 scenarios had different sets of values
for the following four variables: total number
of shots in all hot-hand periods; the number of
shots in a hot-hand period; the probability
of successful shots in hot-hand periods; and
the probability of successful shots outside the
hot-hand periods. The values set for these
variables were intended to reflect realistic
basketball games.
First, the total number of shots in the simulated season was set to 512 (= 2^9) throughout the simulations, because Gilovich et al.
(1985) analyzed the results from nine players
whose total number of shots in a season varied
from 248 (Clint Richardson) to 894 (Julius
Erving), with an average of 422.3 shots per
player.
Second, the author assumed that the total
number of hot-hand shots in a season is at
most 12.5% of all shots because hot-hands
are temporary and infrequent. Therefore, the
number of all hot-hand shots varied from eight
(= 2^3, 1.6% of all shots in a season) to 64 (= 2^6,
12.5% of all shots).
Third, the number of shots in each hot-hand period varied from 2^1 to 2^4. Note, however, that players may have more than 16 hot-hand shots in a season, since they may have many hot-hand periods.
Fourth, the probability of successful shots
outside the hot-hand periods (the base rate)
was set to either .4 or .6. Because the data
presented by Gilovich et al. (1985) had an
average hit rate of .52 (ranging from .46 to .62
in the seasonal statistics and the free-throw
data), this paper focuses on the analysis of
the scenarios where average hit rates (including the hot-hand periods) vary from .40 to .70.
To keep the simulations simple, the hot-hands were presumed to appear periodically.
Consider a simulated player who shot 512
times in a season, and only eight were hot-hand
shots. The following three different types of
scenario were examined for the player. In the
first scenario, all hot-hand shots appeared in a
single hot-hand period at the end of the season.
In the second, the player had four hot-hand
shots at the end of the first half of the season,
and then four more hot-hand shots at the end
of the season. In the third, the player made two
hot-hand shots four times, at the end of the
first, second, third, and fourth quarter of the
season. An example of a record is shown in
the Appendix.
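One season of the kind described above can be generated as a sequence of Bernoulli trials, with the hot-hand shots placed at the end of each equal segment of the season. This is a sketch under the stated assumptions; the function name and defaults are illustrative, not the author's code:

```python
import random

def simulate_season(total_shots=512, base_rate=0.4, hot_rate=0.8,
                    shots_per_hot_period=2, n_hot_periods=4, seed=None):
    """Return one season as a list of 0/1 shot outcomes (1 = hit).

    Each shot is a Bernoulli trial. The hot-hand shots appear at the
    end of each equal division of the season, e.g. at the end of each
    quarter when n_hot_periods == 4, as in the third scenario type.
    """
    rng = random.Random(seed)
    segment = total_shots // n_hot_periods
    shots = []
    for _ in range(n_hot_periods):
        # ordinary shots at the base rate, then a hot-hand period
        for _ in range(segment - shots_per_hot_period):
            shots.append(int(rng.random() < base_rate))
        for _ in range(shots_per_hot_period):
            shots.append(int(rng.random() < hot_rate))
    return shots
```

Varying `shots_per_hot_period` and `n_hot_periods` while holding their product fixed reproduces the single-period, half-season, and quarterly scenarios.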
The probability of successful shots was
increased by .2, .3, .4, .5, and .6 in the hot-hand periods. When a player’s probability of
successful shots increases by .5 from the base
rate of .4 in hot-hand periods, the hot-hand
shot was successful nine out of ten times (.9 =
.4 + .5). The probability of successful shots
increased by the same amount in all hot-hand
periods in a season, to keep the simulations
simple.
Although Gilovich et al. (1985) used three
kinds of statistics to analyse the hot-hands, the
present study examined only two of them:
the number of runs in a seasonal record, and
the probabilities of successful shots in blocks of
four consecutive shots, which they termed the
stationarity test. The test of the probability of
successful shots conditioned by the success or
failure of the previous shot(s) was not included
in this study. This was because the statistic did
not seem to be sensitive enough to measure
“temporary elevations of performance”
(Gilovich et al., 1985, p. 300), which is the
definition of hot-hands.
The total number of runs
The number of runs was examined first.
Table 1 shows the estimated probabilities of
successful detections of hot-hands in the different scenarios. There are two points to note.
First, the run test can detect hot-hands more
often when a player has more hot-hand shots.
Second, the test works more effectively when
the probability of successful shots increases
more in hot-hand periods.
When Table 1 was collapsed for all base rates
and increases in the probability of making successful shots in hot-hand periods, an additional
finding emerged: For a given total number of
hot-hand shots, the test can detect hot-hands
more easily when a player shoots more hot-hand shots in fewer hot-hand periods.
However, the test detected hot-hands in only
12.8% of all cases analysed in Table 1. Nearly
87% of the time, therefore, the test missed the
hot-hands in the data. Thus, the test seems
insufficiently sensitive.
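The run test here is the standard Wald–Wolfowitz runs test, which compares the observed number of runs with its expectation under independence. The following is a generic implementation consistent with that description, not the author's code:

```python
import math

def runs_test_z(shots):
    """Two-tailed runs test on a 0/1 shot sequence.

    Returns the z statistic; |z| > 1.96 corresponds to two-tailed
    significance at the 5% level. A streaky (hot-hand) record has
    fewer runs than expected by chance, giving a negative z.
    """
    n, n1 = len(shots), sum(shots)
    n0 = n - n1
    # a run ends wherever two adjacent outcomes differ
    runs = 1 + sum(shots[i] != shots[i - 1] for i in range(1, n))
    mean = 2 * n1 * n0 / n + 1
    var = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)
```

The expectation and variance are computed from the record's own numbers of hits and misses, matching the paper's use of each player's seasonal record as the baseline.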
Stationarity test
Gilovich et al. (1985) introduced another test,
called the stationarity test, to obtain a “more
sensitive test” (p. 301) than the run test. In this
analysis, the entire sequence of shots was
partitioned into non-overlapping sets of four
consecutive shots. Then, the experimenters
counted how many shots in each set were
successful. If the number of successful shots
in a set was three or four, the set was called a
“high set.” If two shots were successful in a set,
it was called a “moderate set.” If fewer than two shots in a set were successful, it was called a “low set.” They counted the numbers of sets
and compared them with the expected numbers probabilistically derived from the overall
rate of successful shots. The rationale was that
“if a player is occasionally hot, then his record
must include more high-performance sets than
expected by chance” (p. 301).
The present study applied this analysis to the
same simulated records of shots examined in
the run test. The analysis was repeated four
times, starting the partition into consecutive
quadruples at the first, second, third and fourth
shot of the record. If at least one of the analyses detected hot-hands in a record, the test
was considered to be successful.
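The set counting can be sketched as below, together with a chi-square comparison of observed and expected counts. Gilovich et al. describe only the comparison itself, so the chi-square statistic (df = 2, critical value 5.99 at the 5% level) is an assumption here:

```python
import math

def stationarity_counts(shots, offset=0):
    """Count high (3-4 hits), moderate (2 hits) and low (0-1 hits)
    sets of four consecutive shots, starting the partition at `offset`."""
    counts = {"high": 0, "moderate": 0, "low": 0}
    for i in range(offset, len(shots) - 3, 4):
        hits = sum(shots[i:i + 4])
        key = "high" if hits >= 3 else "moderate" if hits == 2 else "low"
        counts[key] += 1
    return counts

def stationarity_chi2(shots, offset=0):
    """Chi-square statistic comparing observed set counts with those
    expected from Binomial(4, p), p being the overall hit rate."""
    obs = stationarity_counts(shots, offset)
    n_sets = sum(obs.values())
    p = sum(shots) / len(shots)
    binom = [math.comb(4, k) * p ** k * (1 - p) ** (4 - k) for k in range(5)]
    exp = {"high": n_sets * (binom[3] + binom[4]),
           "moderate": n_sets * binom[2],
           "low": n_sets * (binom[0] + binom[1])}
    return sum((obs[k] - exp[k]) ** 2 / exp[k] for k in obs)
```

Running the counting four times with `offset` equal to 0, 1, 2, and 3 reproduces the four partitions described above.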
Table 2 shows the estimated probabilities of
successful detections in the different scenarios.
The same two conclusions may be drawn as
were apparent from Table 1. First, the detection
of hot-hands becomes easier when the total
number of all hot-hand shots increases. Second, the more the probability of successful shots increases in hot-hand periods, the easier it becomes to detect hot-hands.
When Table 2 was collapsed for all base rates
and increases of probabilities in hot-hand
periods, the same additional finding emerged:
The test becomes more efficient and effective
when a player has more shots in fewer hot-hand periods, for a given total number of hot-hand shots.
Overall, however, the test could detect only
about 10.2% of the hot-hand cases for the
scenarios in Table 2. Like the run test, the
stationarity test seems insufficiently sensitive.
Table 1. The estimated probabilities of successfully detecting hot-hands in simulated
records with the run test

                              Increase of probability in hot-hand periods
Shots in a         Base
hot-hand period    rate      .2      .3      .4      .5      .6

Eight hot-hand shots in a season
8                  .4       .06     .06     .05     .02     .07
8                  .6       .02     .04     .05      –       –
4                  .4       .05     .07     .05     .08     .07
4                  .6       .08     .05     .03      –       –
2                  .4       .04     .05     .04     .03     .08
2                  .6       .06     .05     .09      –       –

Sixteen hot-hand shots in a season
16                 .4       .05     .04     .06     .07     .14
16                 .6       .04     .07     .06      –       –
8                  .4       .07     .05     .06     .01     .13
8                  .6       .03     .05     .07      –       –
4                  .4       .05     .06     .05     .08     .17
4                  .6       .02     .03     .08      –       –
2                  .4       .04     .05     .02     .08     .04
2                  .6       .04     .05     .06      –       –

Thirty-two hot-hand shots in a season
16                 .4       .07     .07     .08     .17     .34
16                 .6       .04     .04     .14      –       –
8                  .4       .04     .07     .08     .17     .34
8                  .6       .04     .04     .14      –       –
4                  .4       .05     .03     .01     .23     .17
4                  .6       .05     .09     .11      –       –
2                  .4       .08     .06     .18     .13     .39
2                  .6       .06     .12     .15      –       –

Sixty-four hot-hand shots in a season
16                 .4       .09     .09     .37     .58     .95
16                 .6       .06     .14     .35      –       –
8                  .4       .05     .10     .19     .54     .88
8                  .6       .03     .10     .20      –       –
4                  .4       .10     .09     .40     .40     .92
4                  .6       .08     .08     .34      –       –
2                  .4       .06     .03     .04     .18     .07
2                  .6       .10     .21     .21      –       –

Note. Increases of .5 and .6 were combined only with the base rate of .4; with a base
rate of .6 they would raise the hit probability above 1.
Discussion
The present study showed that the sensitivity
of the tests used by Gilovich et al. (1985)
depends on four factors: the frequency of hot-hand periods in a season; the total number of
hot-hand shots in the season; the number of
shots in a hot-hand period; and the size of the
increase in the probability of successful shots in
hot-hand periods.
The main focus was the effectiveness of the
tests in realistic situations defined by three
Table 2. The estimated probabilities of successfully detecting hot-hands in simulated
records with the stationarity test

                              Increase of probability in hot-hand periods
Shots in a         Base
hot-hand period    rate      .2      .3      .4      .5      .6

Eight hot-hand shots in a season
8                  .4       .07     .07     .07     .04     .06
8                  .6       .03     .09     .05      –       –
4                  .4       .04     .05     .06     .04     .08
4                  .6       .08     .08     .03      –       –
2                  .4       .09     .05     .05     .04     .09
2                  .6       .06     .07     .07      –       –

Sixteen hot-hand shots in a season
16                 .4       .06     .05     .04     .05     .09
16                 .6       .07     .09     .07      –       –
8                  .4       .06     .05     .08     .08     .06
8                  .6       .05     .07     .06      –       –
4                  .4       .07     .07     .07     .07     .11
4                  .6       .02     .07     .09      –       –
2                  .4       .04     .04     .09     .07     .07
2                  .6       .05     .07     .05      –       –

Thirty-two hot-hand shots in a season
16                 .4       .07     .13     .12     .13     .15
16                 .6       .09     .08     .08      –       –
8                  .4       .06     .05     .07     .15     .13
8                  .6       .05     .06     .07      –       –
4                  .4       .06     .07     .07     .13     .13
4                  .6       .06     .05     .10      –       –
2                  .4       .05     .13     .05     .22     .08
2                  .6       .06     .08     .08      –       –

Sixty-four hot-hand shots in a season
16                 .4       .08     .18     .17     .49     .49
16                 .6       .08     .14     .19      –       –
8                  .4       .07     .17     .17     .39     .42
8                  .6       .07     .12     .12      –       –
4                  .4       .07     .08     .25     .22     .52
4                  .6       .12     .07     .23      –       –
2                  .4       .08     .06     .12     .14     .26
2                  .6       .08     .11     .14      –       –

Note. Increases of .5 and .6 were combined only with the base rate of .4; with a base
rate of .6 they would raise the hit probability above 1.
conditions. First, the total number of hot-hand
shots was at most 12.5% (64) of all shots in a
season (512). Second, they were distributed
over multiple nonconsecutive hot-hand periods,
each comprising 16 hot-hand shots at most.
Third, the average hit rate of a season outside
the hot-hand periods was set to either .4 or .6.
The tests could detect, on average, only 12% of
all the hot-hands phenomena in the simulated
records. Thus, they were deemed to be relatively
ineffective and inefficient in realistic situations,
and this study concludes that the research of
Gilovich et al. (1985) may not provide enough
evidence to reject the existence of the hot-hands phenomenon in basketball.
Finally, one aspect of the belief in hot-hands
must be treated with more care. That is,
although the formal definition of hot-hands is
the temporary elevation of the probability of
successful shots, fans and players may see hot-hands not only in shooting but in players’ non-shooting performance as well. Many questions
remain unanswered. Do fans, coaches, and
players always see hot-hands in a player when
the player’s probability of successful shots
temporarily exceeds a certain level? How do
fans and players judge who has hot-hands?
Does a player’s game style change when the
player believes she/he has hot-hands? Incorporating these additional considerations with
the shooting data may help boost the power
of future studies to detect the hot-hands
phenomenon.
The ready acceptance of Gilovich et al.’s
(1985) conclusion without more complete consideration of the hot-hands phenomenon may
not be wise, even if it eventually turns out to be
correct. The author suggests that the belief in
hot-hands needs more careful and extensive
study before being dismissed as a misperception of random events.
References
Gilovich, T., Vallone, R., & Tversky, A. (1985). The
hot hand in basketball: on the misperception
of random sequences. Cognitive Psychology, 17,
295–314.
Stacy, W., & MacMillan, J. (1995). Cognitive bias
in software engineering. Communications of the
ACM, 38, 57–63.
Tversky, A., & Kahneman, D. (1982). Judgment
under uncertainty: Heuristics and biases. In
D. Kahneman, P. Slovic, & A. Tversky (Eds.),
Judgment under uncertainty: Heuristics and
biases (pp. 3–20). Cambridge: Cambridge University Press.
(Received Dec. 15, 1997; accepted Jan. 23, 1999)
Appendix
An example of seasonal records of shots: base rate = .4, increase of the probability of successful shots in hot-hand periods = .4, 64 hot-hand shots in a season, 16 shots in a hot-hand period. Hot-hand shots appear at the end of the first, second, third, and fourth quarter of the season (the last 16 shots of each quarter's block below).
0000001000011110000001010101001111010001
0010100000000000110001110111100001000001
1000100010001001010010001100110101100011
01111111
0100001110011010111000110011010110011000
0000010111010101010000000010000100000011
1000101000100110000010011000101111111110
01010010
1000010011000001101000100000100011110000
1011011101001001010010011000000001000100
1010000111000100110101000110101011110111
01011001
0011000001011011100000101111111111001101
1000000100010100001010011101110111000100
0101111011101000011010001001100011111011
11101111