
Public Opinion Quarterly, Vol. 69, No. 5, Special Issue 2005, pp. 682–697
REPORTING “THE POLLS” IN 2004
KATHLEEN A. FRANKOVIC
Abstract Media reports of polls indicate how well public opinion
polls have been integrated into campaign coverage. This article examines how polls were used in 2004. Although there were relatively limited methodological changes in how polls were conducted in 2004, there
were changes in how the polls were treated in the media. Americans in
2004 were subjected to intense debates about polls and to as much
reporting about “the polls” as there was of the polls themselves. The discussion of “the polls” in 2004 included claims of electability during the
Democratic nominating process, increased reporting about methodological issues, and heightened political criticisms of “the polls.” The article
concludes with a discussion of the current state and the future of news
polling.
Methodologically, preelection polls in 2004 seemed much as they had been
four years before, although there were some enhancements to the several
innovative approaches to collecting and organizing data developed in previous elections. For example, even before the near-deadlock of the Electoral
College in 2000, polling organizations recognized the need to conduct polls in
individual states—and battleground state polling did increase in 2004. Internet panels were used for polling well before 2000, and more organizations
adopted Internet panels in 2004 (with varying accuracy). News organizations
began overnight polling following debates and other major campaign events
in 1980, and in 2004 media polls reported nearly instantaneous reactions to
the debates and other campaign events. Even attempts by some organizations
to rely on lists of registered voters, instead of sampling households by random
digit dialing (RDD), were not new to 2004, although interest in that effort
increased.
When it came to reporting and perception, however, polls were different
in 2004. Although polls, as measures of public thinking, have had some
place in American journalism since partisan weeklies reported straw polls in
the election of 1824, Americans in 2004 were subjected to debates about polls as much as they heard poll results.
KATHLEEN A. FRANKOVIC is director of surveys for CBS News. The author thanks Michael
Butterworth, Jennifer De Pinto, and Sarah Dutton of CBS News for help with portions of the article. Address correspondence to the author; e-mail: [email protected].
doi:10.1093/poq/nfi066
© The Author 2005. Published by Oxford University Press on behalf of the American Association for Public Opinion Research.
All rights reserved. For permissions, please e-mail: [email protected].
Those debates were more intense in
2004, although they echoed questions raised in earlier election years. In fact,
there seemed to be as much reporting about “the polls” as there was of the
polls themselves. Moreover, the discussion of “the polls” in 2004 had to do
with their perceived accuracy and their methodology, and not just what “the
polls” showed about voter perceptions of a candidate or a candidate’s chance
of victory.
This article will look at four characteristics of the reporting about the 2004
election polls: first, the frequency of reports about “the polls,” not “a poll”;
second, the use of “polls” in the campaign and their role in establishing John
Kerry as the unassailable front-runner for the Democratic nomination; third,
the focus on polling method, along with an increased level of scrutiny of the
polls, in the 2004 reporting; and fourth, the criticisms of polls, which reached
a new intensity in 2004. Finally, the article will look ahead to how these factors
and the overall accuracy of the 2004 polls may be reflected in the future.
Talking about “the Polls”
The number of public polls grew in the 1990s, even as the difficulties of conducting random samples of the public via telephone increased. By 2004, polls
were an integral part of news stories about campaigns. The Bill Clinton
impeachment marked the enormous increase. The searchable Roper Center for Public Opinion Research data archive at the University of Connecticut
stores questions and answers from nearly all public polls conducted since the
1930s. Between 1973 and 1974 there were 103 questions asked that contained
the words “impeach,” “impeachment,” or “impeaching” and the name Richard
Nixon. In 1998 and 1999, substituting Bill Clinton for Richard Nixon in the
search, there were 1,104 citations. Between 1972 and 2000, 556 question
items used the term “Watergate.” During a similar period of time, there were
602 questions that referred to “Whitewater,” which did not become a political
issue until the mid-1990s. In 1998 and 1999 there were 1,503 questions that
included the name Monica Lewinsky, nearly tripling the number of poll questions
that had included the word Watergate.
There are also more topics in polling than ever. During World War II, Gallup
asked only five questions about what would be appropriate U.S. actions during and after the war. During the Korean and Vietnam conflicts, Americans
were asked whether engagements in Korea or Vietnam were the
“right thing” or whether that involvement was a “mistake.” During the Persian
Gulf War in 1991, Americans were asked directly whether the use of force
was appropriate, what an acceptable number of casualties might be, and even
whether the United States should use nuclear weapons in Iraq. Language has
changed, too. Polls now ask Americans what the “United States” should be
doing in Iraq, not what actions “we” should take.
Conducting polls has become easier. Instead of having to recruit and train
hundreds of field interviewers, as Gallup did beginning in the 1930s, organizations by 1970 needed only dozens of interviewers in one central location
making telephone calls. By the early 1990s, instead of needing a full-size computer
and a staff of programmers and data processors, interviewers could enter
respondents’ answers directly into a personal computer, and that was all that
was needed for data analysis. Near the end of that decade, automated voice-recorded interviewing systems and Internet polling also increased the number
of polls, and those polls became more available to the news media (Frankovic
1992). The Roper Center archive includes 167 separate polls that asked voters
between September and Election Day 2004 if they would vote for George
W. Bush or John Kerry or Ralph Nader “if the election were held today.” And
the archive may not include all national polls that were conducted.1
Over time, almost without notice, the numbers and percentages that the
media use to represent what Americans think have morphed into something
less tangible, the more general phrase “polls say.” “Polls say” something
about the American psyche and American opinion, though it is frequently
unclear precisely how the reporter knows “polls say” it. In 2004, polls,
because of their number and because of their variability at key moments in
the campaign, were often treated as a group by the media. “The polls” were
discussed, critiqued, and used when necessary by politicians and journalists
alike.
“Polls say” and “polls show” are common phrases in the American news
media, phrases that have increased in popularity in this decade. Searching the
Lexis-Nexis major papers library (U.S. papers, magazines, and the broadcast
and cable news networks) indicates a massive jump in the number of such
vague poll references. In the entire 1992 election year, there were fewer than
4,500 such references; these references jumped to just under 8,000 in 1996
and reached over 11,000 in 2000 and 2004 (table 1).
Some of this increase, but by no means all of it, can be traced directly to the
growth of the 24-hour media. MSNBC and Fox News began in 1996, although
neither was on the air for that entire election year. But the rise in references
between 1992 and 1996 comes solely from other media, especially the doubling of references in newspapers (there were certainly no more newspapers in
1996 than there were in 1992). In 2000 and 2004 references in all media
exceeded 11,000 in each year. Looking only at CNN’s use of the phrases
“polls say” or “polls show” through Election Day in each presidential election
year demonstrates the phenomenon (as well as the aftermath of the 1998 Clinton
impeachment). In 1992 and 1996 CNN used these phrases about 500 times in the course of the campaign. In 2000 and 2004 there were twice as many such statements on CNN.
1. A similar examination in 2000 shows more national polls, but that figure is deceptive: the difference is solely due to the more frequent reports in the archive of the 2000 election Gallup tracking
polls. Reducing their number to reflect their frequency in 2004 brings the 2000 total well below
the 2004 figure.
Table 1. Number of Stories that Used the Phrase “Polls Say” or “Polls
Show” in Election Years (Calendar Year)
                                                            2004      2000      1996      1992
U.S. Newspapers                                            8,726     8,921     6,708     3,227
Magazines                                                    681       661       467       381
Television Transcripts (ABC, CBS, NBC, CNN, Fox, MSNBC)    1,920     2,308       809       881
CNN only through Election Day                              1,001     1,250       545       526
Total                                                     11,327    11,890     7,984     4,489
SOURCE.—Lexis-Nexis.
The 2004 statements sometimes reflected conventional wisdom as much as
specific poll findings. While “polls say” or “polls show” as used in election
reporting can be accompanied by specific examples, it is easy to find cases
where that is not the case. Here are some election-related examples:
In accepting the Democratic Party’s presidential nomination, the U.S. Senator
from Massachusetts—whom polls show locked in a statistical dead heat with
President George W. Bush—devoted much of his speech to national security, the
one area in which polls show Bush retains a substantial advantage over him
(Detroit Free Press, July 30, 2004).
It’s no coincidence that as her husband’s poll numbers have sagged, Laura
Bush’s campaign schedule has ramped up considerably in the past few
months . . . . She’s far more in demand as a speaker than Vice President Cheney,
and polls say she’s more popular than her husband (Thomas M. DeFrank, New
York Daily News, August 22, 2004).
The polls say Americans think President Bush would do a better job protecting
Americans from terrorism, so Mr. Kerry’s task was to show he’s as tough as the
guy in the White House (Wayne Slater, Dallas Morning News, July 30, 2004).
Editorials are not immune from the use of generic poll references to justify an
argument:
Polls say voters are not excited about deficits. That’s good, because what they
can read about them is not usually very relevant (Boston Herald, September 10,
2004).
Invoking “the polls” quantifies information and gives apparent precision to
news coverage and the appearance of expertise to reporters. The poll numbers
substitute for political experts and let the reporter, who invokes the authority
of “the polls,” assume that role.
Using Polls in the Primaries
“The polls” have become so important in campaigns for presidential nominations that strategists can be as concerned about the image of their candidates in
the polls as they are about primary election results. There is a literature about
strategic voting and how polls can help voters make decisions in multicandidate elections (Abramson et al. 1992; Cain 1978; Forsythe et al. 1993; Lau
and Redlawsk 2001). There is less of a literature about the meanings that
candidates, reporters, and voters attach to other kinds of poll results in the
primary campaign.
In 1992 strategists for Bill Clinton worried about poor results from a primary exit poll question that asked whether the candidate “had enough honesty
and integrity” to be a good president. The campaign addressed that problem by influencing a single poll in a single state: when a majority of voters in the 1992 Pennsylvania Democratic primary answered “yes,” Clinton appeared to have put the “honesty” question to rest, at least for a while (Frankovic 2000).
In 2004 “the polls” were used to help project a candidate’s electability,
perceived as the most important issue in the Democratic primaries, and thus
played a role in the debate about who should win the Democratic nomination. A Lexis-Nexis search of major news organizations for “Kerry” and
“electability” produced 1,500 stories between January 1 and March 31,
2004. A search for “Kerry” and “can win” or “beat Bush” produced even
more stories. The National Election Pool (NEP) exit poll questionnaire
reflected this editorial interest. Voters were asked whether they cared more
about having a nominee they agreed with on the issues or whether they
preferred a nominee “who can beat Bush.” In that forced choice question (table 2),
nearly four in ten voters in the Democratic primaries said that having a nominee who could beat Bush was more important than nominating someone
they agreed with on the issues.
Given a longer list of qualities that may have motivated their choice, the
voters’ most popular response was that the candidate they voted for could win.
That was a major change from 2000, when voters said other candidate qualities mattered more than electability (tables 3 and 4). Just 7 percent of voters interviewed in the 2000 Voter News Service (VNS) New Hampshire Democratic
primary exit poll chose “best chance to win in November” from seven options
as the most important reason for their vote (table 3). Four years later, 20 percent
of New Hampshire primary voters said “can beat George W. Bush” was the
most important quality. As the 2004 primary season continued, electability
became even more important. In the 2004 Super Tuesday states 35 percent of
voters chose “can beat George W. Bush” as the most important quality for
their candidate, twice as many as the percentage who selected any other
option (table 4). John Kerry’s lead over first Howard Dean and then John
Edwards came entirely from voters who said they wanted a candidate who could win; Kerry and other candidates split the votes of those who said agreement on the issues was more important.2
Table 2. 2004 Democratic Primary Voters’ Assessments of the Importance
of Electability
                                                Those Who Voted For
It Is More Important to Have a Nominee:      All       Kerry      Another Candidate
You agree with on issues                     53%       42%        71%
Who can beat Bush                            38%       58%        29%
SOURCE.—Aggregated NEP Democratic primary exit polls for Arizona, California, Connecticut,
Delaware, Georgia, Massachusetts, Maryland, Missouri, New Hampshire, New York, Ohio,
Oklahoma, Rhode Island, South Carolina, Tennessee, Virginia, Vermont, and Wisconsin, 2004,
N = 26,887.
NOTE.—Question wording: Did you vote for your candidate today more because you think: (1)
He can defeat George W. Bush in November; or (2) He agrees with you on the major issues.
Table 3. 2000 New Hampshire Democratic Primary Voters’ Assessments
of Most Important Candidate Quality
Most Important Candidate Quality                   Percentage of Voters
He is a strong and decisive leader                        17%
He has new ideas                                           7%
He is not a typical politician                            11%
He has the right experience                               21%
He has the best chance to win in November                  7%
He stands up for what he believes                         29%
He is a loyal Democrat                                     2%
SOURCE.—Voter News Service New Hampshire exit poll, February 1, 2000, N = 899.
NOTE.—Question wording: Which ONE (candidate) quality mattered most in deciding how you
voted today?
Was Kerry truly more electable than Howard Dean or John Edwards? No
one can know for sure, but whatever the polls said, the perception of electability—
one that could only be fueled by “the polls”—very likely helped John Kerry
win the Democratic nomination.
2. One reviewer of this article suggested that this concern about electability among Democrats
was appropriate because of the presence of an incumbent Republican president in 2004, something missing in 2000. However, in recent elections against an incumbent, “electability” ranked
lower with voters. In 1996’s VNS Super Tuesday exit polls, for example, just 16 percent of
Republican primary voters cited “can beat Clinton” as the most important quality for a candidate,
placing it third on the list.
Table 4. 2004 New Hampshire and Super Tuesday Democratic Primary
Voters’ Assessments of Most Important Candidate Quality
Most Important Quality                     New Hampshire 2004    Super Tuesday 2004
He cares about people like me                     12%                   13%
He stands up for what he believes                 29%                   18%
He has the right experience                       12%                   10%
He can defeat George W. Bush                      20%                   35%
He has a positive message                         13%                   13%
He would shake things up in Washington             7%                   N.A.
He is a loyal Democrat                             2%                   N.A.
He is honest and trustworthy                      N.A.                  11%
SOURCES.—NEP New Hampshire exit poll, January 27, 2004, N = 896; NEP Super Tuesday
exit polls (California, Connecticut, Georgia, Massachusetts, Maryland, New York, Ohio, Rhode
Island, and Vermont), March 2, 2004, N = 12,745.
NOTE.—Question wording: Which ONE (candidate) quality mattered most in deciding how you
voted today?
Polling Methods
The media’s increasing use of “the polls” as a source helps to provide authority
to reporters. But in 2004, those polls’ methods were subject to special scrutiny. Although new technologies made conducting polls easier in the 1980s,
there were new difficulties that emerged around the turn of the twenty-first
century. Beginning with the 1996 election, lower response rates became a
public issue. At about the same time as the Clinton impeachment, Arianna
Huffington, a syndicated columnist and part-time humorist, launched a campaign to uncover pollsters’ “dirty little secret” of nonresponse and a Web campaign that she called the Partnership for a Poll-Free America.3 She wrote,
“Pollsters have replaced leaders. . . . Alas, it seems that as long as pollsters
exist, politicians are going to consult them. So in order to wean our political
leaders off their daily numbers habit, what we need to do is make the numbers
themselves completely unreliable. That’s as easy as hanging up your phone”
(Huffington 1998).
Huffington had turned against polling and first suggested her nonresponse
strategy in 1996: “In that spirit and for the good of the American republic, we
need to make answering pollsters’ questions déclassé. Since we cannot expect
today’s politicians to ever kick the habit on their own, we must focus on stopping the polls at their source. We the people are the source, and if enough of us stop talking to pollsters, we could force our leaders to think for themselves” (Huffington 1996).
3. The reference to response rates as the “dirty little secret” of polling predates Huffington’s 1998 campaign. Karlyn Bowman, of the American Enterprise Institute, used it in 1996 (The Economist 1996, p. 26).
Lower response rates have been caused by answering machines and privacy
managers (Curtin, Presser, and Singer 2005). Recent American Association
for Public Opinion Research (AAPOR) conference programs include numerous
papers on these phenomena and the rise of cell phone–only households. In
2004 news stories also discussed these issues, especially the underrepresentation of cell phone–only households (Sterngold 2004; Toland and O’Toole 2004; Turner 2004). The assumption was that by not interviewing this group
(which was not part of landline-based household samples) pollsters would
undersample young adults. This assessment helped those who wanted to make
the case that the 2004 polls missed hidden Kerry voters.
Cell phone–only access was in fact higher among the young. The best estimate from the 2004 NEP exit poll and its associated absentee voter poll was that
6 percent of voters had no landline and could not be sampled, and fewer than 2 percent had no phone at all. The percentage without landlines was, as expected,
much higher among the youngest voters, but the noncoverage bias this created was minimal for practical purposes. Of voters in cell phone–only households,
54 percent voted for John Kerry, and 45 percent voted for George W. Bush. That was not enough of a difference from the rest of the voting population to affect
preelection poll accuracy, and younger voters with and without landlines voted similarly. The lack of coverage for cell phones has not yet affected preelection
polling accuracy (Salvanto and Butterworth 2004).
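A back-of-the-envelope calculation shows why this noncoverage mattered so little. If a fraction p of voters cannot be reached and their candidate preferences differ from those of covered voters, the bias in the estimate of a candidate's share is roughly p times that difference. Using the 6 percent figure above, the 54 percent Kerry share among cell phone–only voters, and, as an illustrative assumption, a Kerry share of about 48 percent among voters with landlines:

\[
\text{bias} \;\approx\; p\left(\bar{y}_{\text{cell-only}} - \bar{y}_{\text{landline}}\right) \;\approx\; 0.06 \times (0.54 - 0.48) \;\approx\; 0.004,
\]

less than half a percentage point, well within the sampling error of a typical preelection poll.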
The response rates for many public polls (AAPOR response rate 3) are in
the 20s, and sometimes even lower (Krosnick, Holbrook, and Pfent 2003).4
But lower response rates do not necessarily mean greater nonresponse bias. In
fact, serious efforts to increase response rates can increase nonresponse bias.
In 1996 respondents in CBS News preelection polls who first refused to be
interviewed and those who were interviewed after multiple dialing attempts
were more Democratic than other respondents, which contributed to a higher
error in the final preelection estimate (Frankovic 1997).
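The standard decomposition of nonresponse bias makes the point explicit. Writing r for the response rate, \(\bar{y}_R\) for the mean among respondents, and \(\bar{y}_{NR}\) for the mean among nonrespondents, the bias of the respondent-based estimate is

\[
\operatorname{bias}(\bar{y}_R) \;=\; \bar{y}_R - \bar{y} \;=\; (1 - r)\left(\bar{y}_R - \bar{y}_{NR}\right).
\]

A low response rate therefore produces large bias only when respondents and nonrespondents actually differ on the quantity being measured; and because raising r also changes who the respondents are, aggressive refusal conversion does not automatically shrink the bias and, as in the 1996 CBS News experience, can increase the total error.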
In addition, the process of making more callbacks and dialing more telephone
numbers is more costly. Some have experimented with list-based sampling,
using registered voter lists supplied by local governments, with phone numbers
attached when available (Green and Gerber 2003). This improves interviewer
efficiency in finding likely voters and saves money. However, there is significant noncoverage bias, especially among urban residents and minorities, who
may not have listed phone numbers, and there is no reason to expect that the
matching of names to phone numbers is completely accurate (Butterworth
et al. 2004; Deane and Morin 2003). Recent attempts to compensate for nonrepresentation are still dependent on the quality of the lists. In one 2004 test one phone number was attached to over 60 names (Butterworth et al. 2004).
4. AAPOR response rate 3 is the number of completed surveys divided by the total number of
households in the survey plus an estimate of the number of households not reached (AAPOR
2004).
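In the notation of the AAPOR (2004) Standard Definitions, the rate is

\[
RR3 \;=\; \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)},
\]

where I and P are complete and partial interviews; R, NC, and O are refusals, noncontacts, and other eligible non-interviews; UH and UO are cases of unknown eligibility; and e is the estimated proportion of those unknown cases that are actually eligible.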
[Figure 1 appears here: George W. Bush’s percentage-point lead over John Kerry (vertical axis, roughly −6 to +10 points) plotted for polls from mid-April through late October 2004.]
Figure 1. George W. Bush’s lead over John Kerry in published polls, April–November 2004.
HOW THE MEDIA DEALT WITH POLL METHODS
Although the National Council on Public Polls (NCPP) and AAPOR have
long promulgated standards in the reporting of polls, for many years news
media shied away from including this information (Miller and Hurd 1982;
Traugott and Lavrakas 1996). In 2004 many reporters and observers expected
more than final predictive accuracy from polls. Some critics claimed that most
2000 preelection polls underrepresented Al Gore’s vote.5 Many expected a
similar deviation from the 2004 outcome. Consequently, discussions of polling
methods became a major portion of the media’s coverage in 2004.
Plotting all 2004 polls, beginning in April when the nominees were
known, showed expected variations in the spring, as some organizations
reported results for “likely voters” and others reported only registered voters (figure 1).
Voters were also less attentive in the spring than they would be in the fall, so
deviations were expected. Throughout the spring, George W. Bush lost
support, as problems increased in Iraq. He gained some of it back in the
summer.
What is surprising about figure 1 is that those polls conducted during the
early part of the fall campaign, after the Republican convention but before the
presidential debates, were extremely variable, even more variable than they had been in the spring.
5. While nearly all the final 2000 preelection polls were within sampling error of the final outcome, most had Bush ahead, although not by a statistically significant margin (NCPP 2001).
In comparison, the 2000 fall polls were fairly stable;
and while there were “house” effects that might differentiate one polling organization from another, the general pattern in the polls was clear. That was
not the case in 2004. In September, after the end of the Republican National
Convention, poll results released within days of one another ranged from a
double-digit lead for President Bush to a tied race. The differences (particularly a Pew Research Center poll that showed no statistically significant difference between Bush and Kerry and a Gallup Poll that showed a 13-point
Bush lead, which appeared within one day of each other) prompted media discussions of poll methods. News organizations devoted stories to a variety of
methodological explanations, including the order in which the poll questions
were asked, how respondents were selected, whether polls reported results
only among registered voters or among some definition of likely voters (and
what that definition was), whether there was weighting to an expected partisan
identification, and even extreme volatility in the campaign. The academic discussion of these differences can be found in Erikson, Panagopoulos, and
Wlezien (2004), McDermott and Frankovic (2003), and Voss, Gelman, and
King (1995), among others; however, in 2004 media organizations covered
these differences as well (De Pinto, Dutton, and Frankovic 2004; Hulse 2004;
McCormick and Zeleny 2004).
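One of these explanations, weighting to an expected partisan identification, is easy to illustrate. The sketch below uses entirely hypothetical numbers (the within-party vote shares and both weighting targets are assumptions, not figures from any 2004 poll) to show how reweighting identical interviews to two different party-ID targets changes the horse-race margin.

# Illustrative sketch (hypothetical numbers, not from any 2004 poll): how
# weighting the same interviews to different party-identification targets
# shifts a reported horse-race margin.

# Assumed Bush share within each party-ID group; Kerry is taken as the
# complement (a simplification that ignores Nader and undecided respondents).
bush_share = {"Rep": 0.93, "Dem": 0.07, "Ind": 0.49}
kerry_share = {group: 1.0 - share for group, share in bush_share.items()}

# Two hypothetical weighting targets for party identification.
targets = {
    "weighted to 38R/33D/29I": {"Rep": 0.38, "Dem": 0.33, "Ind": 0.29},
    "weighted to 33R/38D/29I": {"Rep": 0.33, "Dem": 0.38, "Ind": 0.29},
}

for label, mix in targets.items():
    bush = sum(mix[g] * bush_share[g] for g in mix)
    kerry = sum(mix[g] * kerry_share[g] for g in mix)
    print(f"{label}: Bush {bush:.1%}, Kerry {kerry:.1%}, margin {bush - kerry:+.1%}")

Under these assumed numbers, a five-point swing in the party-ID target moves the reported margin by more than eight points even though the interviews themselves are identical, which is the same order of magnitude as the early September gaps between published polls.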
By mid- to late September 2004, some of the early September differences had disappeared. All polls of registered voters reported a Bush lead, and only small differences remained.
Poll Critiques
Politicians have always criticized poll results, and they did so with particular
intensity in 2004. Typically, they attack a single poll result, unless, of course,
all polls indicate they are losing. Harry Truman attacked all polls in 1948,
when he called them “sleeping polls,” because, he said, they were like “sleeping
pills designed to lull the voters into sleeping on election day.” Intelligent voters,
according to Truman, were not being “fooled. They know that sleeping polls
are bad for the system. They affect the mind and the body. An overdose can be
fatal” (Truman 1948).
There were only a small number of polling organizations in Truman’s era,
not the massive number of polls there are today. In 2004 political attacks came
against the backdrop of unusually high campaign interest and intensity. By
comparison, the 1948 election had one of the lowest turnouts in history. At the
end of October 2004, 66 percent of registered voters interviewed in a CBS
News poll said they were paying “a lot” of attention to the campaign; eight in
ten voters said that the 2004 election was “more important” than other elections they knew about; and 73 percent said the outcome of the election was
“very important” to their families, up from just 59 percent who felt that way in 2000.6
Table 5. Are There Important Differences between the Parties? Responses
of Registered Voters
         2004      1998      1990      1980
Yes       75%       64%       53%       49%
No        16%       28%       41%       38%
SOURCE.—CBS News/New York Times Polls: July 11–15, 2004 (N = 823); September 8–9,
1998 (N = 633); October 28–31, 1990 (N = 1,137); September 10–14, 1980 (N = 1,417).
NOTE.—Question wording: Do you think there are any important differences in what the Democratic and Republican parties stand for?
Of all recent elections, only 1992, which took place during a recession
and was invigorated by the maverick candidacy of Texas billionaire Ross
Perot, matched 2004 for voter interest.
The 2004 election was also an election with high levels of polarization, as
shown in table 5. By 2004, 75 percent of registered voters believed there were big
differences between the two parties. That number had been steadily increasing
since 1980, when just 49 percent said there were major differences in the parties.
One in four voters in a CBS News/New York Times preelection poll said
they would be scared if Bush were reelected, and a similar number said they
would be scared if Kerry were elected. There was very little overlap between
the two groups; half the potential electorate viewed their candidate’s losing
the election as potentially devastating. Moreover, more than half of voters
believed those who voted differently from them were qualitatively different.
In a postelection CBS News/New York Times poll, more than half of Kerry
and Bush voters said those people who voted for the candidate they opposed
did not share the same nonpolitical goals and values that they did.7
6. The questions asked were (1) How much attention have you been able to pay to the 2004 presidential campaign—a lot, some, not much, or no attention so far? (October 28–30, 2004, N = 824);
(2) Compared to previous presidential elections, do you think this year’s election for president is
more important, less important, or about as important? (October 14–17, 2004, N = 931); and (3)
How important do you think who wins this year’s presidential campaign is to you and your family—
very important, somewhat important, not very important, or not at all important? (October 2–
November 1, 2004, N = 824).
7. For the first question, the wording was, “If George W. Bush is reelected as president (in 2004),
what best describes your feelings about what he will do in office: excited, optimistic but not
excited, concerned but not scared, or scared? If John Kerry is elected as president (in 2004), what
best describes your feelings about what he will do in office: excited, optimistic but not excited,
concerned but not scared, or scared?” (CBS News/New York Times poll, October 28–30, 2004,
N = 643 likely voters). For the second question, the wording was, “Thinking for a moment about the
people who voted for (George W. Bush/John Kerry), which of these comes closer to your view:
(1) People who voted for (George W. Bush/John Kerry) feel differently than I do about politics,
but they probably share many of my other values and goals; or (2) People who voted for (George
W. Bush/John Kerry) feel differently than I do about politics, and they probably do NOT share
many of my other values or goals, either” (CBS News/New York Times poll, November 18–21, 2004,
N = 336 Kerry voters and 344 Bush voters).
The early inconsistency in the fall polls, the methodological questions
about the pollsters’ ability to sample the country, and this intense campaign
environment made attacks on polls by partisans and others more credible. The
presidential campaigns and their supporters felt it necessary to “correct” what
they frequently saw as problems with the public polls. Sometimes they did
this by taking issue with the partisan distributions in the polls. For the Republicans that meant attacking polls with what they described as “too many Democrats.” One example of this approach was the Republican attack on a June
2004 Los Angeles Times poll. Matthew Dowd, the president’s pollster, was
quoted as saying: “A note of caution: Be very careful in reporting the Los
Angeles Times poll. It is a mess.” Of course, Dowd was concerned with other
results besides the party distribution, but it was a way of forcing reporters to
ignore findings critical of the administration (Los Angeles Times, June 22,
2004). Another example of a Republican-led attack on polls concerned the
Minneapolis Star-Tribune’s Minnesota Poll, detailed elsewhere in this issue.
In 2004, however, the most vitriolic attacks came not from the right but
from the left. The Gallup poll was attacked in a full-page New York Times
advertisement taken out by MoveOn.org. The ad read: “GALLUP-ING TO THE
RIGHT: Why Does America’s Top Pollster Keep Getting it Wrong?” MoveOn
accused the Gallup organization of inflating the number of Republicans in its
polls and claimed this was being done because the son of the poll’s founder
was interested in religion. Other Democratic supporters often repeated this
particularly harsh criticism (Memmott 2004).
Many pollsters still reel from the post-1996 election criticism by Everett Carll
Ladd (1996), repeated in the Wall Street Journal, where he said the accuracy of
the preelection polls was the worst since 1948. That article, by a respected academic, may have made it more legitimate for partisans to attack public opinion
polls. In 2004 the criticism was magnified by the democratic nature of the Internet.
Matthew Dowd’s original criticism of the Los Angeles Times poll first gained
wide prominence in “The Note,” ABC News’s daily, well-read, political blog.
Charges of underrepresentation of partisans in preelection polls were routinely
made in places like “Donkey Rising” (www.emergingdemocraticmajority.com),
Kaus Files (www.kausfiles.com), and Daily Kos (www.dailykos.com). The
attacks on the Minnesota Poll were covered heavily in Power Line
(www.powerlineblog.com), which also covered Republican attacks on the
same poll in 2002. Bloggers could critique polls and know that their comments
would be magnified by others, reflecting the Internet’s echo chamber.8 Those
echoes exaggerated the preelection criticism.
8. Not all bloggers in 2004 were partisan critics. Mysterypollster.com was a Web site that
frequently reported on the criticisms of others, sometimes supporting them but more often debunking
them. Other useful sites were the Hotline (www.nationaljournal.com), the Polling Report
(www.pollingreport.com), and Real Clear Politics (www.realclearpolitics.com). In 2005 the
National Council on Public Polls awarded Mark Blumenthal, the “Mystery Pollster,” a special
citation for his work explaining polls to the Internet reader.
What’s Ahead
Despite these difficulties, the final preelection polls, especially those conducted using traditional RDD telephone methods, were surprisingly accurate
at the national level. The polls’ accuracy may have benefited from the various methods that organizations had developed to define likely voters, and from the
review of those methods after the 1996 postelection controversy; but the polls were still
dependent on the robustness and completeness of the original sample selection. Just as in 2000, the majority of preelection polls in 2004 were extremely
accurate predictors of the outcome. In both years the polls reflected the closeness of the final results. However, the polls’ variations and the questioning of
poll methodology in the mainstream press resulted in unsettled and skeptical
media discussion of the polls.
Methodologically, the changes of 2004 indicate adaptation to the realities of
both the difficulties of polling and the needs of covering this specific election.
That adaptation is likely to continue. The year 2004 saw fewer tracking polls
and more state polls. The Roper Center archive lists 83 different CNN/USA
Today/Gallup or Gallup-only national poll results for the horse race, reported
with leaners included, in the fall of 2000, but it contains only 30 in 2004, suggesting that the Gallup tracking polls were reported much less often in 2004.
In 1988 ABC News and the Washington Post conducted a 50-state poll, using
conventional methods—but not in 2004.
It is traditional telephone polling, using some variation of random digit
dialing, that remains for now the methodology of choice for public opinion
polls. What is likely to happen in the future is not that the method will change
but that the adaptations that have already taken place will be expanded, and
there may be more recognition of the increasing costs of polling. Public opinion polls have fewer respondents now than they did before. The final CBS
News/New York Times poll in 2000 included 1,356 registered voters. The final
CBS/New York Times poll in 2004 included only 824. This may be an extreme
example for a preelection poll, but many media polls today routinely have
fewer than 1,000 respondents. And with the reduction in the use of tracking
polls, there are fewer polls in general.
Instead, in 2004 it was “the polls” and not “a poll” or a tracking poll that
made news. The polls, particularly the media polls, took on functions
beyond those they had in the past. They continue to serve as the “weathercocks” that James Bryce cited as one of the functions of the news media
in the nineteenth century (Bryce 1888). They are, for both elites and the public, the voice of the people, providing accurate information and a mirror
to let the public understand itself. They democratize information about
the public.
But in the new world of media, polls also provide attention for the organization doing them. A news organization (or a college or university) can promote
its “brand” by conducting a poll, especially one that will be reported by other
media.9 News partnerships (CBS News/New York Times, ABC News/
Washington Post, NBC News/Wall Street Journal) receive automatic publicity for one partner in the other’s media. Although conducting branded polls
may create risks for an organization (by fostering potential attacks by partisans who dislike a poll’s results), the news media have accepted these risks.
Historically, one of the advantages of news media polling has been its proof
of accuracy and its basis in science. It is that perception that leads to another
function for news polls. Since “the polls” have new and legitimate information to convey, they can be used as a way of challenging authority. Questions
at press conferences often begin with statements like, “Mr. President, the
polls show that the public does not believe that you have been telling the
truth about. . . .” Interviews on television news shows ask candidates to
respond to issues that the press sees emerging from recent polls: “Mr. X, polls
have you in fourth place in New Hampshire. How are you going to win the
nomination while losing New Hampshire?” Such questions leave the candidate no choice but to answer that he never pays attention to polls, that his campaign is going well, and that after all, the only poll that counts is the one on
Election Day, and, of course, he will win. He or she can also attack “the polls.”
The discussions of polling’s weaknesses and the criticisms of 2004 may
have a long-term impact on this perception of accuracy. In addition, the partisan nature of the debate may also have long-term consequences. One of the
strengths of public opinion polling over time has been the belief that polling
was scientific and nonpartisan. In the attacks of 2004, however, the sides were
frequently partisan versus pollster. When political figures position specific
polls (or polls in general) as representing a political enemy, then that strength
of polling is weakened.
It is likely that such scrutiny will continue. In 2004 political journalists
became poll reviewers, commenting more skeptically on poll findings. In
principle, this is a positive development. One can imagine polling reviews becoming a regular part of the coverage of politics (Rich Morin and
Claudia Deane, in the Washington Post, already do this in their “Pollwatchers”
column). This trend was intensified by the postelection focus on exit polls,
with questions about response rates, within-precinct error, question wordings, and whether enough effort was spent in finding absentee and early
voters. Media scrutiny will be both technical—cell phones, phone methods, and
so forth—and political.
From the journalist’s perspective, such criticism of polls could also detract
from the perception of polls as the “expert” on public opinion. If that is the
case, journalists could lose one of their favorite tools. Still, at least for now,
journalists (and politicians) need to believe in the “precision” of polls to
keep doing their jobs.
9. One indication of the promotional value for polls is the fact that there are now at least four academic institutions that conduct polls in New York City: Marist College, Quinnipiac University,
Siena College, and Pace University.
References
Abramson, Paul R., John H. Aldrich, Phil Paolino, and David W. Rohde. 1992. “‘Sophisticated’
Voting in the 1988 Presidential Primaries.” American Political Science Review 86:55–69.
American Association for Public Opinion Research (AAPOR). 2004. Standard Definitions: Final
Dispositions of Case Codes and Outcome Rates for Surveys. Lenexa, KS: AAPOR.
Bryce, James. 1888. The American Commonwealth. Vol. 3. London: Macmillan.
Butterworth, Michael, Kathleen Frankovic, Marla Kaye, Anthony Salvanto, and Doug Rivers.
2004. “Strategies for Surveys Using RBS and RDD Samples.” Paper presented at the annual
meeting of the American Association for Public Opinion Research, Tapatio Cliffs, AZ.
Cain, Bruce E. 1978. “Strategic Voting in Britain.” American Journal of Political Science 22:
639–55.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter Century.” Public Opinion Quarterly 69:87–98.
Deane, Claudia, and Richard Morin. 2003. “Polls in Black and White: Examining the Differences
in the Demographics of RDD and RBS Surveys in Maryland.” Paper presented at the annual
meeting of the American Political Science Association, Philadelphia, PA.
De Pinto, Jennifer, Sarah Dutton, and Kathleen A. Frankovic. 2004. “Polling: Who’s on First?”
September 28. Available online at http://www.cbsnews.com/stories/2004/09/28/opinion/polls/
main646125.shtml (accessed October 31, 2005).
The Economist. 1996. “Campaign Issues: Opinion Polls.” October 12, p. 26.
Erikson, Robert S., Costas Panagopoulos, and Christopher Wlezien. 2004. “Likely (and Unlikely)
Voters and the Assessment of Campaign Dynamics.” Public Opinion Quarterly 68:588–601.
Forsythe, Robert, Roger B. Myerson, Thomas A. Reitz, and Robert J. Weber. 1993. “An Experiment on Coordination in Multi-Candidate Elections: The Importance of Polls and Election Histories.” Social Choice and Welfare 10:223–47.
Frankovic, Kathleen A. 1992. “Technology and the Changing Landscape of Media Polls.” In
Media Polls in American Politics, ed. Thomas E. Mann and Gary R. Orren, pp. 32–54.
Washington, DC: Brookings Institution Press.
———. 1997. “The Polling Institution.” Paper presented at the annual meeting of the American
Association for Public Opinion Research, Norfolk, VA.
———. 2000. “Election Polls: The Perils of Interpretation.” Media Studies Journal 14 (Winter):
104–9.
Green, Donald P., and Alan S. Gerber. 2003. “Using Registration-Based Sampling to Improve
Pre-Election Polling: A Report to the Smith Richardson Foundation.” Available online at http://
www.vcsnet.com/pdf/registrationbasedsampling_Smith-Richardson-Report_11_10_03.pdf
(accessed October 31, 2005).
Huffington, Arianna. 1996. “A Modest Proposal.” October 3. Available online at http://www.
ariannaonline.com/columns/column.php?id=670 (accessed October 31, 2005).
———. 1998. “Hang It Up.” May 21. Available online at http://www.ariannaonline.com/
columns/column.php?id=445 (accessed October 31, 2005).
Hulse, Carl. 2004. “Varying Polls Reflect Volatility, Experts Say.” New York Times, September
18, p. 10.
Krosnick, Jon, Allyson Holbrook, and Alison Pfent. 2003. “Response Rates in Surveys by the
News Media and Government Contractor Survey Research Firms.” Paper presented at the
annual meeting of the American Association for Public Opinion Research, Nashville, TN.
Ladd, Everett Carll. 1996. “The Election Polls: An American Waterloo.” Chronicle of Higher
Education, November 22, A52.
Lau, Richard R., and David Redlawsk. 2001. “Advantages and Disadvantages of Cognitive
Heuristics in Political Decision-Making.” American Journal of Political Science 45:951–71.
McCormick, John, and Jeff Zeleny. 2004. “Conflicting Polls Confuse Voters, Pros.” Chicago
Tribune, September 18, p. 1.
McDermott, Monika, and Kathleen A. Frankovic. 2003. “Horserace Polling and Survey Method
Effects: An Analysis of the 2000 Campaign.” Public Opinion Quarterly 67:244–64.
Memmott, Mark. 2004. “Gallup Defends Results against MoveOn.org Attack.” USA Today,
September 29, p. A1.
Miller, M. Mark, and Robert Hurd. 1982. “Conformity to AAPOR Standards in Newspaper
Reporting of Public Opinion Polls.” Public Opinion Quarterly 46:243–49.
National Council on Public Polls (NCPP). 2001. “Presidential Poll Performance 2000.” January 3.
Available online at http://ncpp.org/poll_perform.htm (accessed October 31, 2005).
Salvanto, Anthony, and Michael Butterworth. 2004. “Picking Up Cell Phone Voters.” December 8.
Available online at http://www.cbsnews.com/stories/2004/12/08/opinion/polls/main659846.shtml
(accessed October 31, 2005).
Sterngold, James. 2004. “World of the Wireless Stymies Political Pollsters: Those Who Use Only
Cell Phones Difficult to Track for Surveys.” San Francisco Chronicle, October 10, p. A4.
Toland, Bill, and James O’Toole. 2004. “New Trouble for Pollsters in Caller ID, Cell Phones.”
Pittsburgh Post-Gazette, October 12, p. A1.
Traugott, Michael W., and Paul J. Lavrakas. 1996. The Voter’s Guide to Election Polls. Chatham,
NJ: Chatham House.
Truman, Harry. 1948. Address in the Cleveland Municipal Auditorium. October 26. Truman Presidential Museum and Library, Independence, MO. Available online at http://www.trumanlibrary.org/
publicpapers/index.php?pid=2009&st=&st1= (accessed October 31, 2005).
Turner, Douglas. 2004. “Polls Grow Increasingly Fuzzy; Cell Phones, Caller ID Pose New Challenges.” Buffalo News, October 26, p. A1.
Voss, D. Stephen, Andrew Gelman, and Gary King. 1995. “A Review: Pre-Election Survey Methodology: Details from Eight Polling Organizations, 1988 and 1992.” Public Opinion Quarterly
59:98–132.