
Public Opinion Quarterly, Vol. 69, No. 5, Special Issue 2005, pp. 698–715
POLITICAL POLLING AND THE NEW MEDIA
CULTURE: A CASE OF MORE BEING LESS
TOM ROSENSTIEL
Abstract Changes in journalism—including newsroom cutbacks,
an emphasis on repackaging secondhand material, and the demands of
24-hour news—have expanded the reliance on polls as news, including
polls of a sort once considered not reliable for publication, and led to a
more superficial understanding of the 2004 presidential race. The proliferation of outlets offering news, which has resulted in greater competition for audience, has also intensified the motivation of using polls in
part for their marketing value rather than purely their probative journalistic value. The more “synthetic” style of contemporary journalism has
increased the tendency to allow polls to create a context for journalists
to explain and organize other news—becoming the lens through which
reporters see and order a more interpretative news environment. A
greater dependence on horse race tracking polls by the media has reinforced these tendencies and further thinned the public’s understanding, shifting it toward who won and away from why. Growing audience skepticism and
political polarization have created an environment of distrust about the
methodology and integrity of polling. All of these factors, in turn, are
frustrating the efforts of academic and commercial pollsters to maintain
standards and deepen understanding among journalists about public
opinion research and how to use it as journalism.
The editors and reporters around the table had always appreciated the candor
of the man who was running the meeting, one of the paper’s assistant managing editors. He did not disappoint them this day.
The gathering was a two-day retreat of the political team of the Los Angeles
Times, including myself, to plan its coverage for the upcoming presidential
election. The year was 1991, which in political time was an epoch ago or
maybe two. Like strategic planning meetings held months before a campaign
TOM ROSENSTIEL is founder and director of the Project for Excellence in Journalism, a research insti-
tute on the news media affiliated with the Columbia University Graduate School of Journalism.
The author would like to acknowledge Scott Keeter of the Pew Research Center and Cliff Zukin of
Rutgers University for their help in the preparation of this article, as well as the staff of the Project
for Excellence in Journalism for their invaluable help in conducting much of the research that has
informed this analysis. Address correspondence to the author; e-mail: [email protected].
doi:10.1093/poq/nfi062
© The Author 2005. Published by Oxford University Press on behalf of the American Association for Public Opinion Research.
All rights reserved. For permissions, please e-mail: [email protected].
actually begins, this one was full of good intentions about how we would
cover the campaign better this time, elaborate analysis of the country, the
parties, the mood of the public, and the shifting intellectual policy strands on
the issues, as well as predicted changes in political tactics and strategy.
This particular editor was there to discuss the Los Angeles Times poll. He
outlined how much money the paper would spend on the poll that year, what it
hoped to accomplish, how many polls it would conduct, and more. The poll
was a newly acquired part of this editor’s managerial portfolio. He then shared
a little knowledge about press polling that he had recently learned himself
upon gaining his new responsibilities.
The paper would be conducting this many polls and spending this much, he
pointed out, as if to cut off any further discussion on the point, because we
needed to recognize something. Polls are a form of marketing for news organizations. Every time a Los Angeles Times poll is referred to in other media,
the paper is getting invaluable positive marketing of its name and its credibility. The same was true for any news organization, CBS News, the New York
Times, the Wall Street Journal, and the rest. So if there was more money
devoted to this than the assembled political journalists thought necessary, that
was the reason, he was implying, and there was no point in arguing the issue.
In the 14 years, four presidential election cycles, and seven House and Senate election cycles since, a good deal about political polling has changed. The
number of polls has increased dramatically (as shown in Michael Traugott’s
article in this volume). The rate of refusal among potential respondents has
swelled, from a quarter to more than 50 percent being common today (Curtin,
Presser, and Singer 2005, p. 90).1 More people, especially the young, now rely
exclusively on cell phones, which present significant logistical and legal challenges to pollsters who wish to call them.2
And some things have remained unchanged. The marketing motive in polling, even at the best news organizations, has not disappeared. In an age when
an expanding number of news outlets has put added pressure on all of them to
produce audience, indeed, marketing has become a bigger part of news generally and has intensified as a motive in political polling in particular.
Just as that editor did 14 years ago, this article will discuss underlying and
often unstated pressures in the use of political polling. It will also discuss how
those pressures in turn are changing our understanding as citizens of the political process. It will begin by describing changes to the media culture generally.
Then it will outline six major trends that are shaping the use of polling today
and further discuss how these are shallowing out our understanding of American
politics. It will draw on research I supervised at the Project for Excellence in
Journalism and as principal author of the project’s annual State of the News
1. Curtin, Presser, and Singer (2005) report that the University of Michigan’s Survey of Consumer Attitudes has seen response rates decline from 72 percent in the late 1970s to 48 percent in
2003. The Pew Research Center for the People and the Press has seen similar declines.
2. Currently, it is illegal to use “autodialers,” which most pollsters employ, to call cell phones.
Media report. It will also draw on my observations as a reporter for more than
20 years reporting on the media and politics, particularly at the Los Angeles
Times, including background interviews with pollsters and media executives
during campaigns.
Key to this analysis are the six major trends that are shaping the press’s use
of polling today:
Changes in journalism—including newsroom cutbacks, an emphasis on
repackaging secondhand material, and the demands of 24-hour news—
have expanded the reliance on polls as news, including polls of a sort once
considered not reliable for publication, reduced the vetting these polls are
getting, and led to a more superficial understanding of the political race.
The proliferation of outlets offering news, which has resulted in greater competition for audience, has intensified the motivation of using polls in part for
their marketing value rather than purely their probative journalistic value.
The more “synthetic” style of contemporary journalism has increased the
tendency to allow polls to create the dominant context by which journalists explain and organize other news—becoming the lens through which
reporters see and order a more interpretative news environment.
These trends are reinforced by a growing reliance in the press on reporting
daily horse race tracking polls as news, which due to their superficial nature,
is further thinning the press’s and the public’s understanding of campaigns.
Growing audience skepticism and political polarization have created an
environment of distrust about the methodology and integrity of polling.
And all of these factors, in turn, are frustrating the efforts of academic and commercial pollsters to maintain standards and deepen understanding among
journalists about public opinion research and how to use it as journalism.
The combined effect of these trends in polling is providing citizens with more
facts about the daily ups and downs in the horse race and tactics of American
politics but a weaker understanding of the deeper structural meaning of elections or the mandate that they give the victors to govern.
Some of this has even spilled into the press coverage of polls themselves. In
late January 2004, on the eve of the New Hampshire primary, the Washington
Post’s respected pollsters Rich Morin and Claudia Deane wrote a story that
noted even that early in the race, “bad methodology, bad timing and simple
bad luck have conspired to produce some of the most memorable miscues in
the error-filled annals of media polling” (2004, p. C1). What they did not outline are these structural trends that created the conditions for that performance.
Changes in Journalism
Any discussion of the issue of the press and polling must begin with how the
changes in the culture of journalism are encouraging an expanded and less
critical use of polls. These changes involve several aspects, but the first one to
understand involves how the proliferation of news outlets spawned by new
technology is changing political journalism. When Ronald Reagan was president, his communications team could reach the American public by simply
walking into the pressroom and talking to seven outlets—the AP, UPI,
Washington Post, New York Times, and the three commercial TV networks.3
Today, two decades later, that list is not seven outlets but perhaps 70, or more,
and includes media—including talk radio hosts, Internet “bloggers,” and others—
that did not exist in 1980. Before we even consider the difference in style and
purpose of these different outlets, one more basic fact must be understood, a
fact that might be called the dirty little secret of the information revolution.
Most of this heralded revolution is about presenting information, not gathering it. The explosion in outlets has not meant more reporters doing original
shoe-leather reporting. Instead, more people are involved in taking material
that is secondhand and repackaging it. This greater reliance on secondhand
material inevitably has two consequences. First, it means that the reporting
news organization is less likely to have independently verified the information. Second, the understanding of the reporting news organization is usually
more superficial. They did not do the work themselves, discovering its
nuances and limitations. Rather than conducting the work, usually the reporter
or editor is paring down, summarizing, or rewriting a news agency account.
There are innumerable examples of the growing dependence on secondhand
material. Research by the Project for Excellence in Journalism (PEJ) has
established that the three major cable news channels, for instance, focus on
roughly four stories a day with their reporters. Most of the rest of the stories
are handled through anchor reads of wire stories (PEJ 2004, Cable TV, Content Analysis; 2005, Cable TV, Content Analysis). The Web sites generating
the largest traffic on the Web similarly depend almost entirely on wire stories
(PEJ 2004, Cable TV, Content Analysis; 2005, Cable TV, Content Analysis).
The trend in local television news is similarly moving toward heavier reliance
on feed material rather than original reporting (PEJ 2004, Cable TV, Content
Analysis). This growing reliance on repackaging secondhand material makes
news organizations more likely to use polling from sources that they cannot
vouch for. In effect, while there are more outlets, the number of reporters
working for each outlet is smaller than it once was, and they are busy presenting or re-presenting the same stories as each other.
The emphasis on repackaging material rather than gathering is reinforced,
too, by the demands of the 24-hour news culture. If people are busy updating
Web sites or standing outside the White House to be ready for their next live
stand up, they will have less time or opportunity to leave their post at the Web
site or their position on the lawn to do any reporting. The continuous news
3. David Gergen, Director of Communications, and Michael Deaver, Assistant to the President
(both Reagan communications strategists), interviews by the author, fall 1992.
cycle adds another dimension to this as well. There is frankly more news time
to fill than there is news to fill it. As such there is more appetite for the latest
poll, the latest anything, further making the press less discriminating. Against
this backdrop, more polls means more stuff to put on the site or on the air. It
helps alleviate that problem.
The culture of online journalism has even elevated the notion of indiscriminate republishing to a value. Web sites are considered more robust if they
incorporate more material, including all polls, rather than trying to discriminate
among that material. In the last election cycle, several Web sites emerged that
republished every poll, offering all of them equal value. While there is some
logic to this concept of more is more when it comes to polling—the notion
that five bad polls can equal one good poll by increasing the sample size and
averaging out the outliers—there is something else happening here journalistically. What is occurring here is that the value of the potential for infinite
depth on the Internet and the desire among users to sort material for themselves have trumped the value of having the journalist act independently as a
gatekeeper over what material is more or less valuable.
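To make the statistical half of that “more is more” logic concrete, a minimal sketch follows, in Python, with invented poll results and sample sizes; it assumes the polls are unbiased simple random samples of the same population, which is precisely what the journalistic objection above calls into question:

    import math

    def margin_of_error(p, n, z=1.96):
        # Approximate 95 percent margin of error, in percentage points,
        # for a proportion p estimated from a simple random sample of n.
        return 100 * z * math.sqrt(p * (1 - p) / n)

    # Five hypothetical small polls: (estimated share for one candidate, sample size).
    small_polls = [(0.47, 400), (0.51, 400), (0.44, 400), (0.49, 400), (0.50, 400)]

    # Pool them, weighting each estimate by its sample size.
    total_n = sum(n for _, n in small_polls)
    pooled_p = sum(p * n for p, n in small_polls) / total_n

    print("One 400-person poll: +/- %.1f points" % margin_of_error(0.48, 400))
    print("Five pooled polls (n=%d): %.1f%%, +/- %.1f points"
          % (total_n, 100 * pooled_p, margin_of_error(pooled_p, total_n)))

Pooling shrinks the sampling error, but it does nothing about flawed screens, loaded questions, or house effects, which is the gatekeeping problem described above.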
The push toward publishing material without vouching for it is reinforced
by another trend in the news culture—newsroom cutbacks. The cutbacks are a
function of simple market forces. The proliferation in outlets due to new technology has meant that most outlets are attracting a smaller audience. Newspaper
circulation has dropped by roughly 1 percent a year since 1990 (PEJ 2005,
Cable TV, Content Analysis). By the end of 2004, the three network evening
newscasts had lost 45 percent of their viewership since 1980, the year CNN
began, and 60 percent since their historic peak in 1969.4 Local TV news viewership has been dropping for the last half decade.5 The result is that most
media sectors have reduced staff. In network evening news, the number of on-air
correspondents has dropped by more than a third since the 1980s.6 The number
4. Nielsen Media Research, unpublished data (cited in PEJ 2004, Network TV, Audience; 2005,
Network TV, Audience).
5. BIA Financial Network (BIAfn), unpublished data, collected by Nielsen Media Research and
then calculated by PEJ (2005, Local TV, Audience). The data of ratings and share for 529 different local TV stations were originally collected by Nielsen Media Research and then collated by
BIAfn by station. PEJ then used that data to calculate national audience averages for both early-evening and late-night local news programs, going back to 1997. Between May 1997 and May
2003, early-evening news programs lost 16 percent of their available audience share—or more
than 3 percent a year. Late news programs lost even more, 18 percent, again more than 3 percent
each year. Ratings dropped at a slightly slower rate.
6. Joe Foote, Arizona State University, Tempe, unpublished data (cited in PEJ 2004, Network
TV, News Investment). The number of correspondents featured on air during the average evening
newscast has been cut by more than a third since the peak in 1985, from 76.7 to 50 in 2002,
according to Foote’s data. That is a drop of 35 percent. That reduction in staff has meant an
increase in reporter workload. In 1985, reporters appearing in evening newscasts did an average
of 31.4 stories a year. By 2002, that number had climbed to 40.9, according to Foote. Figures for
other network staff (producers, cameramen, etc.) were not available, but reductions among them
may be even greater due to technological changes.
of people in newspaper newsrooms is down by 2,200 since 2001, or about
4 percent (American Society of Newspaper Editors 2004).
Not only do the consequences of the cutbacks reinforce the reliance on outsourced material, but the people evaluating that material tend to be less experienced, as the most cost-effective reductions occur among more senior
people. Thus the people posting information about polls on the Web site or
producing the local evening newscast and deciding to lead with the poll are
less likely to have a long background in political polling. The cutbacks also
mean that generally people have less time to burrow into the secondhand
material they are reporting. They will tend, inevitably, to have a shallower
grasp of the wire copy account. They will have less time to pore
through the full survey, if it is available to them; or to download it, if it is not
part of the wire story; or to ask prudent questions that would help them decide
for themselves whether the poll is valid, assuming they knew which questions
to ask to decide that.
Polls as Marketing
If it came as something of a candid admission to veteran journalists at the
L.A. Times in 1991 that the use of polls was driven, even at major news organizations, in part by marketing considerations, in 2005 it is no revelation. The
proliferation of news outlets has meant that many of these outlets are straining
for identity, for some reason for people to turn to them, sometimes any reason.
And no story is more competitive or, in the received wisdom of news-gathering
culture, more a part of how a news outlet brands itself and builds audience
than elections. With more outlets, we see plainly more news organizations
viewing polls as marketing, as “branded news,” which serves their commercial needs. This, indeed, is the principal appeal of daily tracking polls. They
give a news organization something branded to offer every day: its proprietary tracking poll.
Morin and Deane in the Washington Post even said in their own words what
pollsters generally do not admit publicly: “Too many of the most widely
reported pre-election polls cut corners, take big risks and use methods that are
less than gold standard” (2004, p. C1). This is particularly true at those news
outlets that are more oriented to repackaging rather than original news gathering.
Perhaps one of the most striking examples is MSNBC.com, a popular news site
online, usually in the top three in overall web traffic according to the major
online rating services. Yet MSNBC relies almost entirely on wire copy for its
material, and if anything its tendency, given recent budget cuts, is to do even
less adaptation and editing of that wire copy than it originally did (PEJ 2004,
Cable TV, Content Analysis; 2005, Cable TV, Content Analysis). In its political
coverage in 2004, MSNBC.com dealt with this problem by highlighting its more
marketable and prominent feature, polling provided to it by Zogby International,
run by John Zogby. In particular, during the 2004 election, MSNBC led each
day with a tracking poll by Zogby that was so controversial, the Washington
Post took the extraordinary step of writing a story that signaled to the political
community that Zogby’s polls were considered untrustworthy by most professional pollsters. The story noted that some of the most respected news organizations in the country, such as ABC News, will not even note his results. “He is
more a salesman and a self promoter than a pollster,” said Warren Mitofsky,
the legendary former head of polling for CBS and later head of two media consortium polling operations. Further, “he has made lots of mistakes on election
outcomes—five in 2002. . . . I have heard of volatile campaigns, but he has volatile polls” (Morin and Deane 2004, p. C1). Morin and Deane wrote that
“Zogby International does all kinds of controversial things to produce its headline-grabbing tracking polls. John Zogby calls only people with listed telephone
numbers, missing those who are unlisted. About 30 percent of the people in his
samples were called during the day—a good time to reach retirees and housewives but a bad time to reach most working people” (2004, p. C1).
Other pollsters have complained privately that Zogby will change his screen
and his voter models at the last minute for his final preelection polls without acknowledging it, as if his methods were right all along and the electorate
just changed at the last minute.7 Zogby himself acknowledged the private criticisms in his public comments to the Post. “I know I do some things different
than others,” Zogby said: “I know the so-called ‘Poll-ice’ would deny it, but
there’s art as well as science involved in this” (Morin and Deane 2004, p. C1).
Zogby is not the only pollster popular with the media whose methods are
privately questioned by pollsters at larger news organizations. Many of the
new generation of smaller one-name polling operations are often singled out,
despite their public prominence.8 One such pollster is Frank Luntz, who was
formally reprimanded by the American Association for Public Opinion
Research (AAPOR) for his work polling on the GOP’s 1994 “Contract with
America.” After making his “results” public, Luntz failed to produce even the
most basic information about the poll, such as the questions asked and how
many people were polled. “He finally did give us some information, but it
wasn’t enough,” said Diane Colasanto, then president of AAPOR: “It didn’t
really explain what the figures were based on. All we could tell was it seemed
like there might have been some survey done” (Chinni 2000). David W.
Moore, the author of the book The Super Pollsters, said the work of Luntz and
other people who might be dubbed “pollster-pundits” is nothing more than
“propaganda” masked as research. “What bothers me is they are given so
much prominence,” Moore told Salon: “One of the reasons media organizations
7. Pollsters, interviews by the author, 2000 and 2004. I interviewed various political pollsters inside
news organizations and campaigns over the course of the elections in 1996, 2000, and 2004. These
interviews were conducted on background as part of reporting done for Newsweek magazine in 1996
and for different outlets and the Project for Excellence in Journalism in 2000 and 2004.
8. Pollsters, interviews by the author.
started doing their own polling was to make sure they wouldn’t get biased
data.” Now the media are paying these people to poll: “The whole thing is
really a backward step” (Chinni 2000).
Privately, some of these pundit-pollsters even acknowledged to me that
they sometimes offer media companies their political polling work for free
because of the publicity it generates for them, thus attracting corporate commercial clients.9 The polling operations offer this work to news organizations
for the visibility or free publicity that it generates for their corporate work.
They become more prominent and can promote themselves as the pollster for
a high-visibility news operation. The news organizations, looking for interesting material to post online or put on the air, are apparently willing to save the
substantial dollar value such work would otherwise cost. The problem, of
course, is obvious. Media outlets get what they pay for.
Journalistically, such polls would seem dangerous, and, as the Morin and
Deane article makes clear, to some larger news organizations they apparently
remain so. From a marketing standpoint, however, the issue is not so cut-and-dried.
Polls that are outliers, diverging from the results of other polls, are also provocative and draw traffic to a news outlet, particularly to a Web site, where
consumers who hear about a poll on TV might subsequently visit the site that
originally published it. Controversial polls, in other words, can even be construed by some as good salesmanship. Luntz went so far as to claim to Salon
that his fight with AAPOR made him money, because certain clients were
attracted to the idea that the basis behind their data would be kept confidential
(Chinni 2000). In the case of polling, which by definition is a snapshot in time
of public attitudes and can change fairly markedly in a period of days, the
question of accuracy is harder to pin down. While social scientists might be
able to agree in hindsight that tracking polls on a given day were probably
wrong, in practice most journalists and most citizens will not look back. Realistically, in other words, the penalty for publishing controversial polls today is
fairly slight. The gain in traffic to a Web site might even be deemed to outweigh it. What is more, the arguments about which polls are better than others,
in an environment of cost cutting, may to some business-side interests even
seem esoteric. In reality, standards now vary among news organizations, even
those associated with fairly large corporate parents. This reality is evidenced
by the simple fact that the Washington Post has a joint operating agreement
with MSNBC.com, yet its pollster, Morin, is sufficiently alarmed by Zogby’s
work that he took the extraordinary step of criticizing him in a Post article. In
short, the fact that polls have a marketing value in today’s media environment
seems to outweigh concern about the potential damage of relying on survey work that
is controversial or even questionable.
9. Pollsters, interviews by the author. Two pollsters have acknowledged privately to the author
that they do their public political polling for free. They argued that they do this as pro bono work
because it serves the public interest and for the publicity. Rival pollsters have told me they do this
because they could not get hired if they charged.
The “Synthetic” Journalism
The third major trend influencing, distorting, and elevating the role of polls,
even poor polls, is that journalism is becoming more “synthetic.” By synthetic
here I do not mean artificial, though it does imply processed, in the same way
that modern food can be processed. The new synthetic style of journalism
involves reporters and editors trying harder than ever to synthesize the
work of their competitors into their own. In effect, with more outlets reaching
the audience, there is a stronger concept of information that is already “out
there” in the public mind. The notion that one’s audience is one’s alone is
obsolete. People, journalists know, increasingly are getting their information
from multiple sources. Thus the wise journalist increasingly must account for
what is out there, what the public already has heard. The result is that more
and more journalism involves synthesizing that competitive material into
one’s own account and then adding something new or special to it or trying to
fold all that information into one interpretive or analytical frame.
In effect, this emphasis on synthesis is the new pack journalism. The “group
think” among journalists is no longer, as depicted in Timothy Crouse’s Boys
on the Bus, reporters looking over the shoulder of Walter Mears as he writes
his lead, knowing that their own editors would see Mears’s AP story first and,
trusting his judgment, want their own reporters to mimic it.10 The pack journalism of the 21st century occurs through the Hotline, CNN, Fox, and all the
e-mail from campaigns and interest groups that flows into reporters’ and editors’ terminals each day. There is an inevitable tendency, even a conscious strategy, to try to synthesize all of that into a coherent and perhaps safe or
reasonable consensus. The speed and ubiquity of media, in other words, force
journalists to want to take subtle account of all those other media in their own
version of events.
Clearly polls are not the only example of “created” news. Nor are they the
only news that has a marketing component. A major exposé by a news organization is newsroom-generated news and serves the public interest. It also, to
a lesser degree, is supposed to get people in the community to read the paper or
watch the newscast and to get other organizations to talk about the work.
Polls, when done right, also serve the public interest, and while their probative
value is not perfect (pollsters have to frame issues for the public to respond to,
and that inevitably limits their parameters), they are a statistically reliable way
of measuring public attitudes. Polls also can give citizens a sense of the level
of information campaign operatives are operating at.
Yet the increasingly synthetic component of news elevates the power of
polling in a subtle and important way. In the more synthetic style of journalism,
10. Some journalists are skeptical of the specific anecdote in Crouse’s book, arguing that the
journalists who asked Mears what the lead was were making a joke about the need for the AP to
find “news” in every speech. Whatever was in the mind of the reporters in the incident, the larger
pressures Crouse was observing are hard to deny. The first wire service accounts do “frame”
events in the minds of editors.
in which editors, reporters, and producers spend more effort trying to encompass and add value to what is out there in the media culture, polls create a
context for journalists to explain and organize other news. In short, the new
media culture has intensified the degree to which polls become the lens
through which reporters see and order the news in a more interpretative news
environment.
This phenomenon has always existed to a degree. From my first efforts at
reporting on the press and politics, I could see it on television. As early as
1988, I recall seeing a case where a flat tire on Michael Dukakis’s airplane on
the tarmac became an irresistible metaphor for his predicament in the polls.
The tire became the focal point of the Dukakis story that day. Just as Dukakis
can’t get off the ground in the polls, his plane today couldn’t get off the ground
at the now forgotten airport. It is a shot I have seen many times since, be it a
candidate stumble, a mumble, or yet another case of airplane trouble. Flat tires
for George Bush’s campaign in 1988 were not news, because he was ahead.
Another factor in the synthetic news culture increases this tendency to use
polls as a thematic or narrative lens. That is the license given reporters today
to interpret the news. Our content analysis work at PEJ has revealed over and
over that news stories are much more thematic today than they once were. The
straight, event-driven news story, while not gone, has increasingly given way
to analytical pieces as first-day stories, particularly in politics. And polls provide a seemingly objective or at least nonobjectionable basis for reporters to
frame stories. This is particularly the case at a time when reporters are under
assault from the Right on the charge that they are liberals pressing a bias on
political news. The basis of a story that is tactical and strategic in its frame,
based on a candidate’s consensus position in the polls, is harder to attack.
Finally, the phenomenon of using polls as the frame or contextual lens for
one’s interpretative reportage only accelerates as you get more polls. A poll
each day offers a new hook for this contextual approach. And if you have
more tracking polls in particular, the cheapest and most frequent kind of poll
but one that offers only a horse race understanding of the day’s events, that
encourages more stories that define the election race in tactical, strategic, and
horse race terms. Polling data become the dominant explanation for much of
what candidates and their campaigns do. Values, political philosophy, life
experience, authentic belief, and all the other motivations behind political
action are devalued in the coverage because they are harder to report, harder
to identify, harder to measure.
Growth in Tracking Polls Is Further Thinning the Press’s
Grasp of Politics
The pressures of the first three trends—the reliance on secondhand material,
the reliance on polls as marketing, and the tendency to use polls as the context
by which to frame the rest of the campaign—are reinforced by a fourth feature
of modern political journalism—the rise of the tracking poll as news. Tracking
polls are different from more traditional, longer surveys of 20 or 30 questions,
conducted over three to five days, of a representative sample of the nation or
whatever universe is being studied. Tracking polls are quicker snapshots than
that; they usually ask fewer questions of a much smaller sample of people.
Typically, tracking polls involve nightly sampling of 150 to 200 people, and
in politics they usually limit themselves to one or two subjects, including candidate preference—or the so-called horse race. Each day, two or three nights’
numbers are then added together to arrive at a more statistically reliable sample.
When each day’s multinight sample is compared to the next, such tracking
polls are believed to detect changes in the direction of races.
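The arithmetic behind that multinight design can be illustrated with a brief sketch; the nightly sample sizes and percentages below are invented, and the calculation assumes simple random sampling, ignoring the weighting and likely-voter screens actual tracking polls apply:

    import math
    from collections import deque

    def margin_of_error(p, n, z=1.96):
        # Approximate 95 percent margin of error, in points, for a share p from n interviews.
        return 100 * z * math.sqrt(p * (1 - p) / n)

    # Hypothetical nightly samples: (number of respondents, share supporting Candidate A).
    nightly = [(180, 0.49), (175, 0.44), (190, 0.51), (185, 0.47)]

    window = deque(maxlen=2)  # a rolling two-night window, as described above
    for night, (n, share) in enumerate(nightly, start=1):
        window.append((n, share))
        total_n = sum(n_i for n_i, _ in window)
        pooled = sum(n_i * p_i for n_i, p_i in window) / total_n
        print("Night %d: rolling estimate %.1f%% (n=%d, +/- %.1f pts); "
              "that night alone would be +/- %.1f pts"
              % (night, 100 * pooled, total_n,
                 margin_of_error(pooled, total_n), margin_of_error(share, n)))

Even with two nights pooled, the margin of error remains several points, which helps explain the night-to-night swings described later in this section.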
Tracking polls were developed in the 1960s to track the effect of coffee
commercials, and they were brought to politics by the Reagan campaign in
1976. As late as 1992, many newspapers and television news operations considered nightly tracking polls appropriate as background material but not for
publication. Skeptics thought such polls were fine for indicating trends—
whether a candidate was gaining or losing momentum. But the actual numbers
were believed to be misleading. The sample sizes each night were too small to
be considered statistically representative or to pinpoint a candidate’s exact
standing. They might reliably suggest a candidate was dropping, but the margin of error was too large to tell how far. For instance, Kathy Frankovic at
CBS and Mary Klette at NBC saw great risk. The results were too imprecise,
they feared, and the public would not distinguish between the precision of a
conventional poll and the rough trend lines of a tracking poll (Rosenstiel
1993). Neither, they worried frankly, would most journalists.
By 1992, some in journalism were becoming more aggressive. Jeff Alderman
at ABC had come to believe that tracking polls were valuable, as long as they
were used carefully. He had used them in 1984, distinguishing himself at his
network and in the press polling community by employing the only media
poll to catch the late surge of Gary Hart in New Hampshire over the weekend.
If heading into the 1992 race Alderman was the most aggressive pollster in the
media about using tracking polls, that soon began to change. In the primaries
that year, CNN began airing tracking polls on a daily basis. The news channel
not only aired multinight samples, it aired one-night samples as well (Rosenstiel
1993). The practice irked veteran press pollsters, who complained privately.11
CNN at the time was still considered something of an upstart, though one that
had earned respect during the Gulf War of 1991. But with so much time to fill,
it still often filled it with things that more veteran political journalists thought
ill considered.
11. Various pollsters, interviews by the author, 1992. Several of these polling professionals complained to me while I was covering the campaign for the L.A. Times and was working on a book
about the press and the race.
One problem with tracking polls is that the numbers are highly volatile even
when done carefully. On the last Sunday in October 1992, for instance, the
track had Clinton up by 15. The next night’s tracking poll had Bush up by
three points. It could not be right. Alderman decided there was only one thing
he could do. ABC was adding two nights together to arrive at its figure each
day. The two-day track for Monday still would average Clinton up by seven
points. The two-night track for Sunday had him up by 11. It would suggest
Clinton had dropped four points in a day, something that could not be right,
but Alderman figured they could note that the four-point drop was still within
the tracking poll’s margin of error. The investment in these polls by news
organizations often puts the journalists who know the most in an awkward
position. The two men who ran ABC’s signature evening newscast, anchorman Peter Jennings and Executive Producer Paul Friedman, thought the 1992
ABC tracking polling was out of control, but they also felt trapped into running them, since they could not know which numbers were wrong. They dealt
with it by trying to bury the poll at the end of the political report and avoiding
at all costs leading the newscast with a poll. They succeeded through the
course of the campaign in leading only three newscasts with a poll. Other
ABC news shows were not so careful (Rosenstiel 1993).
Twelve years later, the subtleties of the arguments being conducted in private in 1992 seem almost quaint. Rather than tracking polls being done by
Gallup and ABC, there are many more tracking polls being done—and some
of them by some of the most controversial pollsters of all, including Zogby. In
the 2004 race, for instance, there were six different organizations conducting
daily tracking polls in New Hampshire alone: Zogby, American Research
Group, Suffolk University, University of New Hampshire, Gallup, and the
Boston Globe/WBZ-TV. What is more, on television, Web sites, and in print,
the publishing of one-night tracking samples became commonplace.
Various Web sites such as Realclearpolitics.com and “Larry J. Sabato’s
Crystal Ball” daily posted all polls, tracking and otherwise. Some made some
attempt at creating an average of them all. CNN dealt with the problem by following these Web sites, not only running its own polls but also comparing them to all
the others. In effect, the choice implied that maintaining standards by sifting
through the information and functioning as a gatekeeper over what was trusted
and what was not was no longer possible. Run it all.
The point here is not to revisit the arguments over whether tracking polls
are safe to publish as news. Those who publish or air these tracking polls
would doubtless defend their accuracy and reliability. In the general election,
the national tracking polls in the end were not far off from predicting the race.
The ABC/Washington Post poll, with larger sample sizes as it neared Election
Day, forecast a 49 percent to 48 percent result with Bush ahead, within the
margin of error of the actual result. Others will argue that in the new media
age it is best to let a thousand polling flowers bloom: it is the notion, which I
have heard from different pollsters over the years, that six bad polls equal one
good poll. The idea is that with more polls, the sample size in effect becomes
large enough that outliers are averaged out. Taken together, we actually get a
better picture. Resolving this argument is not my point. I will leave the argument to others as to whether the change in standards from being wary of tracking polls to embracing them is an erosion in accuracy or whether more
information in real time offers a closer understanding of the race day by day.
What is more important than the technical debate is something that is less in
dispute. What is the effect of the new polling standards on what we are learning as a society? Even putting aside their inevitable volatility, the emphasis on
tracking polls is offering citizens a shallower understanding of the race. The
fact is that the explosion in and growing reliance by the press on tracking polls
is focusing the attention of journalists and citizens increasingly on the daily
ups and downs of the horse race and away from the myriad other concerns that
make up an election—candidate record, vision, values, policy offerings,
promises, the state of the country economically and in other ways, and the
interplay of those with the electorate psychologically.
Part of this is due to the short nature of tracking poll instruments themselves. The surveys usually focus on the horse race and not much more. More
traditional polls are longer, offering 20 to 30 questions about voter attitudes
toward the race. Probing beneath the horse race, such polls are designed to try
to get at what is causing voter behavior. What do they see in a candidate now?
How does that compare to how they viewed that candidate a month ago? How
have external events—in Iraq, the economy, terrorism abroad or at home, or
elsewhere in voters’ psychology—influenced how they see the race? Longer
polls of this design, in other words, help us get closer to seeing the race in a
broader context, to tell us what an election tells us about ourselves as a nation
in a national race or a state or a community in a local race. Polls and press
coverage of this sort transform elections into conversations among citizens
about their lives, their fears, and their aspirations. Such deeper coverage, in
turn, offers a deeper mandate for governing. They tell us more than who won
or “the what” of the election. They let us know more about the reasons for that,
“the why” of the election, which is where the meaning of a campaign and the
basis of governing are derived.
Some of the better organizations that conduct tracking polls will rotate substantive questions on and off the questionnaires, providing a broader array of
items across time. This adds a dimension that simpler tracking polls do not
have. Yet they still offer a much slimmer sense of the race than a longer
instrument.
Some will doubtless note, quite rightly, that the longer, more substantive
polls are still being conducted. They have not been supplanted. The material
for that deeper understanding is still there. Yet that misses the point. The coverage of the more numerous tracking polls is crowding that other discussion out.
More stories about the daily horse race shift the focus of the race. More horse
race polls, in short, translate into more horse race coverage. Cumulatively, the
reliance on tracking polls in the coverage leads to a different kind of meaning
for and understanding of the race.
This shift in media values from understanding the race to predicting it is
perhaps nowhere more starkly seen than in what has occurred with the
media’s exit poll in recent election cycles. As the use of shorter, more frequent
polls has grown, the media exit poll has eroded. In 2000, errant and premature
calls of Florida led to congressional hearings and internal investigations. In
the 2002 midterm elections, the newly revamped exit poll was so troubled, no
results were released at the time. In the 2004 race, although no network made
any faulty projections using it on Election Night in November, the poll had
more problems, including a 2.5-hour data blackout and samples that at one
point or another included too many women, too few Westerners, not enough
Republicans, and a lead for the eventual loser, John Kerry, that persisted until
the poll was weighted by the actual vote counts (Morin 2004). The reasons for
these problems, sources who worked for the exit poll have told me, largely
rest with money.12 The networks, which pay the lion’s share of the poll’s cost, have
cut back on it to the point that these problems were waiting to happen if a
presidential race were close enough. The significance of this is that the exit
poll at its best is a remarkable tool—not for projecting winners but for understanding the electorate after the election. The exit data offer us a report of why
the people who actually voted made the choices they did. It is a remarkable
asset in understanding what an election really means and where the country
should go—for journalists, scholars, and citizens. The erosion of the exit poll,
in contrast with the rise of horse race tracking polls, puts in relief what is happening to our emphasis in polling and the consequences for our understanding
of our politics.
Audience Skepticism
The next trend facing both pollsters and journalists in the new press culture is
public skepticism, perhaps even cynicism. The most striking new feature of
the last election cycle, in my interviews, was hearing both journalists and pollsters say they
had never encountered such a level of vitriol and distrust from audiences about
their work.
Andrew Kohut of the Pew Research Center for the People and the Press
said it was like nothing he had ever heard before. Kohut’s polling operation is the
closest one can come to pure research today. Funded by the Pew Charitable
Trusts, it produces surveys without the explicit demand from a paying client
news organization that the survey make news. Nor does it have the mandate,
like polls funded by interest groups, to push an agenda. Yet Kohut says he was
barraged with e-mail and letters in 2004 from citizens who attacked his motives
and his methods when a poll came to findings that the citizens disliked.
12. Former exit poll executives, interviews by the author on background.
“I have never experienced anything like it, the amount of it or in the tone of
it,” Kohut said.13 Therein lies the important dimension. In the new culture,
where the media are more of a dialogue and less of a lecture, the role of the expert is
devalued and even under suspicion. Professional expertise is believed to be in
the service of other agendas. Professional objectivity is considered something
of a canard.
The criticisms pollsters and news managers reported hearing from angry
citizens were often highly technical. People complained about the screens
pollsters were using to identify voters and nonvoters. Other pollsters said they
were attacked over the wording of their surveys and over their past client lists.
A major California pollster told me he also got e-mail and letters challenging
his models about the demographic makeup of the electorate. He complained,
privately, that he considered the letter writers unqualified to challenge him at
this methodological level.14 As an example, gadfly Arianna Huffington on her
Web site is running a crusade to get pollsters to release the response rates for each
of their surveys, because she is convinced that low response rates cast doubt
on their reliability, a technical issue pollsters have studied and, for the moment,
have concluded is not a valid concern.
Interestingly, news organizations heard similar complaints about their
reporting, not only of polls but of other work. One local news director from
the Northwest told a private postelection analysis meeting at the Museum of
Television sponsored by the Carnegie Foundation in December 2004 that
there was little doubt in his mind that some of this was organized. The letters
he received attacking coverage considered negative toward certain candidates all
had certain common phrases. Organized or not, however, the fact that citizens
seem so willing to attack journalists and pollsters for ill motives, to distrust
their professionalism so easily, is a factor both groups need to keep in mind.15
It is part of a new culture of news consumption, not only of news production. Audiences
have always rooted for their own interests in the news. Bias, to some degree,
has always been in the eye of the beholder. But the willingness to discount the
professionalism of the journalist and the pollster is reaching a new level. The
doctor is no longer the universal medical authority. He or she is, we might say,
the surgical procedure and pharmaceutical expert in a family health team that
includes many others. The journalist is no longer the gatekeeper over information. And the pollster is no longer a social scientist with special accumulated
insight into public attitudes. Pollsters are, at least to a growing part of the public,
operators of “public attitude snapshot” methodology. And their integrity may
depend on how that snapshot comes out.
13. Andrew Kohut, interview by the author, various occasions throughout the election of 2004.
14. A California Democratic pollster, interview by the author on background.
15. The event, which I attended, was not for quotation, so the identity of the news director and
the station are being protected here. Other news directors at the same meeting agreed they had
encountered similar intensity and nature of complaints.
Efforts at Maintaining Standards
All of these factors, of course, make another task more difficult. They all converge in ways that will make it harder for academic and commercial pollsters
to maintain standards and deepen understanding among journalists.
Online consumers, for instance, flocked to Web sites in the last election
cycle that noted every poll—good, bad, and indifferent—averaged them,
and offered commentaries and the Web authors’ own personal views of
which polls were most reliable. The popularity of these sites among users was
driven in part by people’s confusion over which polls were most reliable
and by the sheer quantity of polls themselves. With so many polls to keep track of,
these Web sites offered a way of doing it. The expertise of these different
Web site authors was often hard to detect. Some had multiple authors. Others
were hosted by anonymous figures. The agendas and political leanings of
these people were sometimes obscured, purposely. Yet the fact that every poll
was included and accounted for was their appeal. The polling sections of most
news organizations’ Web sites offered no such comprehensiveness. The Web
sites of MSNBC and CNN, for instance, generally offered only their polls and
sometimes a few others. They remained moored in trying to market or promote themselves, rather than functioning as an advocate for the frustrated poll consumer.
That service, however flawed, was being offered by nontraditional journalistic
sites. But they were, in their own sometimes curious fashion, performing this
journalistic function better than journalists.
The efforts by the polling community to educate journalists and consumers
about polling go back some time. The National Council on Public Polls developed “20 Questions a Journalist Should Ask about Poll Results.” Now in its
third edition, the paper, written by Sheldon R. Gawiser and G. Evans Witt,
includes such basic questions as: “Who paid for the poll and why was it done?
How many people were interviewed for the survey? How were these people
chosen?” It also includes more subtle questions such as, “Who should have
been interviewed and was not? In what order were the questions asked? What
other polls have been done on the topic?” (Gawiser and Witt 1994).
This last election cycle, frustrated by the situation with polls, the American
Association for Public Opinion Research decided to create its own primer.
Then AAPOR Vice President Cliff Zukin of Rutgers University authored
“Sources of Variation in Published Election Polling: A Primer” (2004). Written
to anticipate problems now seen in media coverage of polls, Zukin’s essay
hints that the press’s understanding and use of polls may be more of a problem
than many journalists imagine. Even concepts as basic as sampling error are
often misunderstood. As Zukin puts it:
What is less commonly known is that the margin of error does not apply to the
spread between the two candidates, but to the percentage point estimates themselves. If applied to the five point spread the four point margin of error would
seem to say that Bush’s lead might be as large as nine (5 + 4), or as little as one
(5 – 4). But when correctly applied to the percentage point estimates for the candidates Bush’s support could be between 52 and 44% (48 ± 4), and Kerry’s
between 39 and 47% (43 ± 4). Thus the range between the candidates could be
from Bush having a 13 point lead (52 – 39) to Kerry having a 3 point advantage
(44 – 47). So, sampling error is generally much larger than it may seem, and is
one of the major reasons why polls may differ, even when conducted around the
same time. (2004, p. 2)
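Zukin’s arithmetic can be restated as a short sketch using the same hypothetical figures he cites above (Bush at 48 percent, Kerry at 43 percent, a 4-point margin of error on each estimate); the variable names are illustrative only:

    # Hypothetical poll from Zukin's example: Bush 48%, Kerry 43%, margin of error 4 points.
    bush, kerry, moe = 48, 43, 4

    # The margin of error applies to each candidate's estimate, not to the spread.
    bush_low, bush_high = bush - moe, bush + moe      # 44 to 52
    kerry_low, kerry_high = kerry - moe, kerry + moe  # 39 to 47

    # The spread can therefore range far more widely than +/- 4 points.
    max_bush_lead = bush_high - kerry_low    # 52 - 39 = a 13-point Bush lead
    max_kerry_lead = kerry_high - bush_low   # 47 - 44 = a 3-point Kerry lead

    print("Bush %d-%d%%, Kerry %d-%d%%" % (bush_low, bush_high, kerry_low, kerry_high))
    print("Spread could run from Bush +%d to Kerry +%d" % (max_bush_lead, max_kerry_lead))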
These are excellent primers and good counsel. Yet groups such as AAPOR are
frustrated, unsure whether line journalists are seeing and understanding them.16
They are fighting an uphill battle. The trends cited above, particularly the
cutbacks, increased workload, and moving away from experienced expertise
at news organizations, will only frustrate these attempts.
Zukin, now president of AAPOR, sees a difficult, perhaps even dangerous
landscape. “In effect, you have less brainful vetting of more brainless polling,” he summarized in fall 2005.17 Zukin sees various factors to explain this,
but a major one is the changes in journalism itself: “The media own much of
the polling industry. So when you change the values and practices of the press,
the values and practices of the polling industry change.” Zukin can cite
numerous examples of pollsters who know better but follow what he considers
questionable practices, from small sampling to polling too early, when public
opinion is still forming, such as single-night surveys conducted immediately
after a presidential debate. “A lot of pollsters are not following best practices
and they know it,” he said, “including pollsters at some of the biggest news
organizations, because there is a demand for it.”18 Just as structural changes in
journalism are creating more demand for questionable polling, Zukin sees two
structural factors about polling presenting a problem: “First, it is harder to do
election polling well, with response rates falling and cell phone use rising.
Second, it is easier to do polling at all, with nonscientific Internet surveys and
automatic dialers recording touch tone activated answers, which means more
people are out there doing questionable work.”
These pressures on polling as well as media represent a challenge that the
academic and commercial polling community needs to reckon with. If the polling profession was dissatisfied with how journalism handled polls in the past,
the situation has not improved, despite real effort. And the future looks more complex.
Conclusion
In the end, the landscape of political polling and the press might be thought of
as an ecosystem in distress. The press culture represents a marketplace. With
16. Zukin and AAPOR President Nancy Belden were so concerned about the situation that they
met with me to strategize on how to more effectively get such information in front of journalists.
17. Cliff Zukin, interview by the author, fall 2005.
18. Cliff Zukin, interview by the author, 2004.
fewer barriers to entry, there are more pollsters who will move in whatever
ways they can to fill that market. More outlets competing with fewer
resources will want more polls and will want to pay as little as they can for
them. The marketing gain in getting one’s name out there with a poll each day
may outweigh today the risk of a poll that is controversial among professionals or even a poll that is inaccurate. Amid so many polls, the risk of being
wrong may be smaller anyway, since with more polls, there are more outliers.
Consumers, and journalists, who want to understand polling have more
tools for doing so than ever before. Yet the other side of this access is equally
true. We will all be more exposed to more of everything from now on, the best
and the worst, both of polling and of the press coverage about it.
References
American Society of Newspaper Editors. 2004. “Newsroom Employment Survey.” Table A.
Available online at www.asne.org/index.cfm?id=5147.
Chinni, Dante. 2000. “Why Should We Trust This Man? Frank Luntz Is the King of Pollster Pundits, but Don’t Ask Him Where His Numbers Come From.” Available online at www.salon.com/story/politics/feature/2000/05/26/luntz (accessed October 11, 2005).
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter Century.” Public Opinion Quarterly 69:87–98.
Gawiser, Sheldon R., and G. Evans Witt. 1994. “20 Questions a Journalist Should Ask about Poll
Results.” National Council on Public Polls. Available online at www.ncpp.org/qajsa.htm
(accessed November 21, 2005).
Morin, Richard. 2004. “Exit Polls: New Woes Surface in Use of Estimates.” Washington Post,
November 4, p. A29.
Morin, Richard, and Claudia Deane. 2004. “A Snowy Graveyard for Pols and Polls.” Washington
Post, January 26, p. C1.
Project for Excellence in Journalism. 2004. State of the News Media. Available online at
www.stateofthenewsmedia.com.
Project for Excellence in Journalism. 2005. State of the News Media. Available online at
www.stateofthenewsmedia.com (accessed November 21, 2005).
Rosenstiel, Thomas. 1993. Strange Bedfellows: How Television and the Presidential Candidates
Changed American Politics, 1992. New York: Hyperion.
Zukin, Cliff. 2004. “Sources of Variation in Published Election Polling: A Primer.” American
Association of Public Opinion Research. Available online at www.aapor.org/pdfs/varsource.pdf
(accessed November 30, 2005).