
POLITICAL OPINION POLLING:
USE AND ABUSE
R. Mark Tiller
Houston Community College--Northwest
Public opinion research, through the use of polling, has become a vitally important aspect of politics. Polling
techniques are increasingly sophisticated--through the use of computers, census data, telephone banks, focus
groups, and market research. Much of the methodology employed by the consumer market research industry,
such as electronic "pulse-buttons" (also called "speech pulse buttons")--which record a person's opinion
second-by-second while he or she watches a speech or a television advertisement--is now being used in
political campaigns, to sell candidates, rather than soap. Polls have come to dominate American elections,
transforming them into "horse races," according to many observers. While some believe the increasing use of
opinion polling reflects a healthy dose of direct democracy, critics maintain polls are often abused and used to
manipulate public opinion. This essay will discuss the democratic nature of polling, some common flaws in
polling techniques, and the effects that polling has on politics today.
To Poll, Or Not To Poll: Is That A Questionnaire?
Opinion polls serve many functions, but perhaps the most important of them is to inform governmental
leaders. Many Americans believe that representatives should act as delegates--that is, they should faithfully
mirror the views of their constituents. If candidates and representatives are to reflect their constituents'
wishes, they need the means to measure public opinion. Lobbying, whether by individuals or interest groups,
does not necessarily reflect the attitudes of the mass public, and therefore cannot be the sole guide to public
opinion. In fact, fewer than 10% of the American public even bothers to phone or write a member of Congress
or the president at least once per year. A properly conducted opinion poll is a better instrument for measuring all
members of the group in question as well as predicting their voting behavior. For democratic theorists,
scientific polling is the solution to many problems of representation, and may be an even more accurate
indication of public opinion than are elections.
However, opinion polling for such purposes is often criticized by those who advocate the trustee model of
representation. In this view, trustees should be free to make policy as they see fit, since they are better informed on the
issue than is the public. They are held accountable for their votes by elections. The delegate model of
representation is certainly more democratic, but it is not guaranteed to produce good policy. After all, the
United States is a republic, not a democracy. Many argue that representatives need to be leaders rather than
poll-watchers. Consistent, enlightened policy can hardly be achieved by following the advice of pollsters, they
argue, since the public is largely apathetic and politically unsophisticated. Consider the words of the British
philosopher Edmund Burke:
Your representative owes you, not his industry only, but his judgement;
and he betrays, instead of serving you, if he sacrifices it to your opinion.
Indeed, the public's attitudes are often quite unstable and subject to dramatic shifts due to the news of the day
or even random events. Herd mentality often rules--many base their opinion on what they believe to be the
prevailing public view, creating a bandwagon effect. This helps explain why interest group spokespersons are
always so anxious to publicize opinion polls that indicate support for their position. These polls, while often
misleading, may create more support, among both the citizenry and their leaders; thus, such polls act as
self-fulfilling prophecies.
This brings us to the main theme of this paper. Polls are frequently misused by interest groups, candidates,
and elected officials. That is, they are sometimes poorly constructed, poorly administered, and/or poorly
analyzed and reported. Flaws in opinion polls may be unintentional, but they are often introduced
purposely--the audience should beware the sponsor of the poll. Interest groups typically ask very biased
questions, and publicize the results in order to gain support for their agenda among elected officials and create
more popular [bandwagon] support. Candidates often pretend to poll, when they are really advertising their
candidacy and attacking their opponent. Members of Congress and state legislatures send their constituents
questionnaires designed to provoke the response they wish, so that they can later claim to be following the will
of their constituents.
Therefore, some degree of skepticism with regard to polls is in order. The audience of opinion poll reports
should ask several questions:
--Who commissioned (paid for) the poll?
--What does the poll's sponsor have to gain by the report of the results?
--Who created the poll?
--What were the exact questions?
--How and when was the poll administered?
--How many people were polled?
--What kind of people were polled?
--Who analyzed or interpreted the poll?
--How was the poll reported?
Each of these questions is revealing. If the answers to them cannot be determined, there may be reason to doubt the
poll's validity. Now let us examine some specific examples of polling flaws.
The Perils Of Polling
QUESTION WORDING
The way the question is worded can have a dramatic effect on responses. The most common criticism of
question wording involves leading questions. When a question is biased in such a way as to help provoke a
certain response, it is said to "lead" the respondent. Through the use of biased or emotionally loaded terms
("terrorists" vs. "freedom fighters"), the sponsor can create almost any desired result. Those in favor of a state
lottery might ask: "Do you favor a well-regulated state lottery, rather than increased taxes, to fund vital
education programs and save the Texas economy?" Opponents might ask: "Do you favor legalizing lottery
gambling, in spite of the failure of many states to raise significant revenues through lotteries and the potential
for increased activities of organized crime and other social problems?"
If the question is confusingly worded, so that different respondents interpret it in different ways, the results will be
meaningless. For example: "Would you never refuse to not pay taxes for yourself?" There are so many
interpretations of this question that the answers will be invalid. Grammatical errors, questions that are too
long, and statements that include words that are not familiar to the average respondent will also provoke
mistakes.
Question wording may also be oversimplified, so that the respondent can answer in a variety of ways,
depending upon his or her interpretation of the meaning. For example, one might be asked: "Do you believe
government should guarantee economic rights?" Some who answer "yes" might do so because of their belief
that the government should regulate less, tax less, and interfere less with property rights and free enterprise--a
conservative or economic libertarian point of view about the economy. Others who agree might think the
question asks whether citizens should be guaranteed a basic education, job training, shelter, health care, and a
minimum standard of living--a more liberal point of view about the economy.
A poll will also be meaningless if its question is vague. Almost everyone would agree that "The
Texas prison system is in need of reform." However, what they consider "reform" could be any of a wide variety of
choices--mandatory sentences, more construction, no early release, administrative changes, more
rehabilitation, less brutality, alternatives to traditional incarceration, etc. Some polls taken after the 1995
Oklahoma City bombing indicated majority support for the proposition that the federal government was a
threat to individual freedom. Does this mean most Americans agree with the anti-government rhetoric of the
more radical militias? Hardly. What it probably means is the question is so vague it could be interpreted in a
wide variety of ways. Furthermore, almost any kind of anti-government rhetorical question will attract
majority support. Perhaps this is a healthy sign of the long American tradition of democratic dissent and
individualism. Perhaps it is a measure of how naively ignorant and ungrateful is the American public.
Even if the question itself is not vague, there may be vague response categories. One example is the many
meanings that could be ascribed to the word "fair" when it is offered as one of the response categories: it could
be interpreted to mean good, okay, reasonable, just, mediocre, only meeting minimum standards, etc. Another
problem occurs when many choices sound similar, so that the pollee is not sure which one is most like his or
her opinion. Just what exactly is the difference between "often," "sometimes," and "occasionally"?
Consider the following question: "Do you favor cutting government spending; for example, on defense?" This
question is plagued by multiple stimuli. The pollee may be responding to either stimulus--spending in
general, or defense spending.
Poll-makers should also avoid open-ended questions when possible. An open-ended question, such as "How
do you feel about taxes?" asks for a short answer rather than a selection from among multiple choices. The
responses to such a question will have to be categorized by the administrator, which involves a substantial
amount of subjective interpretation.
QUESTION ORDER
Order is also important. If the pollster asks a series of questions that tend to remind the respondent of Ronald
Reagan's faults and mistakes, and then finally asks whether Reagan was a "great president," the majority
response is likely to be "no." If, on the other hand, a series of questions first remind the respondent of
Reagan's successes and good traits, many more respondents will answer "yes."
In a Times-Mirror Center for the People and The Press poll taken in late March 1992, Bush led Clinton 50%
to 43%. When (at the same time) the Center polled a different sample that was first asked about Bush's
handling of the economy, the fact that Saddam Hussein remained in power in Iraq, and Bush's breaking of his no-new-taxes pledge, the margin was reduced to 48% to 45%. A third group was polled after first being asked about
Clinton's alleged affairs, conflicts of interest, and Vietnam draft status, resulting in a Bush lead of 54% to
39%.
TIMING
It is often said that polls are merely photographs or snapshots of public opinion at the moment of the poll. Just
as the photograph only records what once was, the poll records opinion as it once was. Opinion may fluctuate
greatly from day to day. For example, a slip of the tongue by a politician may cause a 50% drop in the polls
temporarily, but by the time of the election, many people will have forgotten the incident.
Polling of the popularity of Texas Republican gubernatorial candidate Clayton Williams offers an excellent
example of the importance of timing. Williams began the campaign polling far ahead of Ann Richards, but
made a series of terrible mistakes that gradually reduced his popularity. Polls taken immediately after one of
his spontaneous remarks was publicized showed a dramatic drop in his popularity. Polls taken a few weeks
later usually indicated that he had regained most of what he lost. Each time he made an off-the-cuff comment,
his popularity fluctuated. While Richards' popularity remained fairly constant, Williams' rose and fell daily.
The Richards campaign tended to cite polls taken the day after one of his comments was publicized; the
Williams campaign cited polls taken immediately before the latest gaffe.
Support for the president changes according to vetoes, bill-signings, press conferences and speeches, reports
on the economy, foreign visits, international crises, public relations stunts, natural disasters, and even events
in the citizens' personal lives. Thus, the point at which the president's popularity is measured determines the
outcome, to a large degree.
One should be suspicious of opinion polls that are taken immediately after a dramatic event occurs. News
organizations often accompany their reporting of crises with polls on the subject. The results of such polls,
while perhaps accurate, are often misleading--opinion has been temporarily altered by a barrage of reporting.
After reporting of the event diminishes, opinion may change, often reverting largely to its original pattern
before the crisis occurred. For example, polls taken during the Los Angeles rioting probably do not reflect
stable long-term opinions.
Sometimes large polls are taken over a period of a few weeks. These results are especially misleading,
something akin to a double-exposure photograph. That is, the opinion of those first polled may have already
changed by the time of the poll of the last individuals. Although timing problems may confuse surveys,
timing is also revealing. Pollsters sometimes ask the same question(s) over a period of time, to record, or
track, how opinion changes due to events, advertising, or other factors. This is called a tracking poll.
RESPONDENT PROBLEMS
The respondent (the person interviewed) can cause a variety of problems if pollsters do not take several factors
into account. First, guesses, or non-attitudes, should be filtered out; the pollster should not create opinions
where none exist. If respondents are not given the options of "don't know" or "no opinion," they will be
forced to guess from among the other choices. While there is an important difference between "don't know"
and "no opinion," the latter is generally preferable, because some respondents may still guess to avoid the
embarrassment of choosing "don't know." Additionally, one cannot assume, just because a respondent
answers the question with an "opinion," that he or she really cares enough about the issue to remember it one
hour later. In other words, it is difficult to weigh the intensity of the respondent's conviction about a certain
issue, although some polling now does try to do exactly that.
Sometimes pollsters try to filter out non-attitudes and less-intense opinions through the use of open-ended
questions. Those who are uninformed on the issue will not be able to respond, since no choices are made
available to them. This helps eliminate guessing, but causes its own problems, as discussed above in the
section on question wording.
The reader of polls should also be aware of the neutral effect. Whether or not the respondents are familiar
with the issue, many will choose the middle answer for questions that have scalar responses (such as "very
high, high, average, low, very low"). Most people want to appear normal and will therefore pick the middle
answer. This leads to what pollsters call the illusion of central tendency--that is, a false indication of
moderate consensus. Manipulative pollsters can take advantage of this tendency by skewing the answers, so
that even the middle choices are answers that favor their position. (Example: "always, almost always, usually,
sometimes, never.")
Political motivations threaten opinion polls. That is, respondents may bias their answer in order to strengthen
their political positions. For example, those who benefit from a certain program are unlikely to admit that
their program's funding is adequate, because they want to guard that program's future budget requests. Others
may feel that there has been improvement concerning a certain problem, but do not want to admit it, lest
others feel the problem is solved. Unlike the neutral effect, political motivations can cause the respondent to
exaggerate his or her opinion (or lie).
In some cases, the respondent may not answer truthfully because of embarrassment. A few years ago, this
was dubbed the David Duke syndrome, since polls universally underestimated support for him in his campaign
for a seat in the Louisiana House of Representatives. (Duke is a former Ku Klux Klan leader and neo-Nazi.
He toned down his rhetoric, said he regretted some of his "intolerant past," and became a Republican.
Although he trailed by a substantial margin in the pre-election polling, he won.) Because of this experience,
some pollsters suggested that polls taken during the subsequent 1991 gubernatorial contest between him and
Edwin Edwards probably underestimated his popularity. As it turned out, they did not, and Edwards won by a
larger margin than had been predicted by the pollsters. Perhaps pollees were no longer embarrassed to
indicate their support for Duke--or perhaps they were equally embarrassed to say they supported Edwards!
On the other hand, some candidates poll higher than they get on Election Day because respondents do not
intend to actually vote for the candidate, but use the poll simply to send a message. They may think the
candidate has no practical chance of winning, as in the case of Jesse Jackson's presidential campaign, or they
may not really trust the candidate, as in David Duke's gubernatorial contest.
A more common example of the embarrassed pollee is one who falsely claims to vote. Post-election polls,
compared with the actual vote, typically show that many more people say they voted than actually did, and that
many more people say they voted for the winner than actually did. The reverse of the latter phenomenon is
also possible, as in the case of the 1972 presidential election of Richard Nixon. George McGovern, his
opponent, used to relish noting how more and more people said they voted for him as Nixon's popularity
plummeted, beginning in 1973. McGovern often said: "According to current polls, I won."
Finally, for a variety of reasons, some respondents may simply wish to sabotage the poll. Some may find it
entertaining to pick extreme or even contradictory responses. Some take pleasure in retaliating against
intrusive pollsters. (This is a particular problem for the U.S. Census Bureau.) Others object to the
pervasiveness and increasing importance of polls before elections, which turns serious political questions into
sporting events and overwhelms the substance of the election debate. In the 1984 and 1988 presidential
election years, several prominent editorial columnists urged their readers to lie to pollsters, especially in polls
taken near Election Day.
THE OBSERVER EFFECT
Scientists in all disciplines face the dilemma of the observer effect (called the Hawthorne effect in studies of human subjects). The
very act of observation of a phenomenon can change its normal state. For example, in order to observe
something, light is usually needed. However, a light pointed at the ocean floor scatters the natural order of
activity there. Observing sub-atomic particles disturbs them, so we cannot precisely measure both the position
and velocity of an electron (the Heisenberg uncertainty principle). A camera light shone into a dark alley
where a drug deal is taking place will likely disrupt the deal (and endanger the camera operator!). In these and
other cases it is difficult to observe the natural state. Pollees' opinions may be altered because they know they
are being observed.
Further, there may be a certain interviewer characteristic that biases the respondent. The age, sex, race,
religion, nationality, accent, etc. of an interviewer can alter the response. For example, a white male is more
likely to respond favorably to affirmative action if the interviewer is a black woman than if the interviewer is a
white male.
SAMPLE ERROR
The survey sample may be too small. Confidence in the accuracy and validity of the poll rises with
the number of people surveyed. Of course, polling is expensive, but political opinion polls that involve fewer
than 1,000 persons are usually of questionable validity. For example, random polls of about 100 people, with a
95% confidence level, are (95% of the time) as much as ten percentage points in error--in either direction; i.e.
they are not very dependable. A similar poll of 1,075 respondents yields results that are--95% of the time--plus or minus
three percentage points of the whole population's opinion (see graph below).
As one can see from the graph, even in the 5% of cases when the poll is inaccurate--that is, when the true
opinion of the whole is not within +/-3%--the error is likely to be only +/-4% or +/-5%. In other words, it is still
quite accurate for most purposes.
Other margins (95% confidence):
 200 responses = +/-7%
 600 responses = +/-4%
2400 responses = +/-2%
9600 responses = +/-1%
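These margins follow from the standard formula for the margin of error of a simple random sample, MOE = z * sqrt(p(1 - p) / n), evaluated at the 95% confidence level (z = 1.96) and the worst case p = 0.5. As a minimal sketch (the Python below is illustrative only; the function name is ours), the table can be reproduced as follows:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Worst-case margin of error for a simple random sample of size n,
        # at the 95% confidence level (z = 1.96).
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 200, 600, 1075, 2400, 9600):
        print(f"{n:>5} responses = +/-{round(margin_of_error(n) * 100)}%")

Note the square-root relationship: quadrupling the sample only halves the margin of error, which is one reason pollsters rarely pay for samples much larger than 1,000 to 1,500 respondents.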
Equally important is the poll's representativeness. If the poll is unrepresentative, the above margins of error
will be greater. There may be a socioeconomic bias, for example, that causes only a certain class of people
to respond. The poll may survey entirely the wrong population, as do election surveys of schoolchildren, who
do not vote (although they usually mirror the preference of their parents). Sampling procedures may induce an
unrepresentative population. Telephone surveys that cull names from only the first part of the telephone book
will get an unusually large percentage of people of Arabic origins. By using only the pages beginning with
"Ch," an unrepresentative percentage of people of Chinese origin might be polled.
The telephone has long been the instrument of choice for pollsters. However, getting a representative sample
is increasingly difficult as the public grows annoyed with telemarketing and is more suspicious than ever of
unsolicited calls. According to American Survey Research, the "hang-up rate" increased from 29% in 1985 to
38% in 1992. According to the American Enterprise Institute, the overall "nonresponse rate" (those who
refuse to cooperate with pollsters) went from 40% to 45% from 1988 to 1992. Furthermore, about 20% of
those called today have answering machines turned on. Because of these problems, pollsters are tempted to weight
opinions (of those who fit into a category which is under-represented in the survey) more heavily; for
example, one response might be counted several times. This increases the likelihood of anomalies
(exceptional cases) corrupting the representativeness of the poll.
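As a sketch of how such weighting works (all shares below are hypothetical, not from any actual survey), each respondent in an under-represented group is counted in proportion to that group's share of the population divided by its share of the sample:

    # Post-stratification weighting, with invented shares for illustration.
    population_share = {"age 18-29": 0.20, "age 30+": 0.80}
    sample_share = {"age 18-29": 0.10, "age 30+": 0.90}

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    # weights: {"age 18-29": 2.0, "age 30+": 0.89}
    # Each young respondent now counts double, so a handful of atypical
    # young pollees can disproportionately skew the reported result.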
Polls that allow completely respondent-initiated participation are likely to be highly unrepresentative.
Radio and television call-in shows are notorious for such polls. Beware any polls based on 800 or 900
telephone lines. The now-closed Houston Post offered a telephone line to readers who wanted to respond to a
daily "InfoPoll" question. In such polls, those that call are rarely typical of the whole, and may phone
repeatedly in an attempt to influence the poll. Another such example is a poll based on responses to mass
mailings by representatives and interest groups. Only a small, self-selected portion of the public responds to such questionnaires.
Usually the transparent purpose of an interest group poll is fund-raising.
ERRORS IN DATA COLLECTION
Errors in the test itself, such as coding errors in which a person's response is miscoded, can invalidate
a poll. Furthermore, those inputting the data may simply make clerical errors--they may misread information,
make typing mistakes, and commit other errors. Given the enormous volume of data entry and the fact that the
value of a poll depends on how fast the results are tabulated and reported, such mistakes are all too common.
Interviewers, especially when they are poorly paid, may fake interviews, rather than really do the job. For
this reason, their supervisor must randomly contact some of those interviewed--when possible--to check the
responses.
A common complaint by respondents is that the response they would like to make is not offered as a choice.
Of course, this problem may be intentionally caused by the pollsters, if the sponsor is trying to force a choice
between limited choices or offering false alternatives. (Example: "Which is the best strategy for dealing
with the U.S.-Japanese trade deficit: diplomatic threats or trade protectionism?" The pollee may think the best
strategy is investment, hard work, and better education.) When the desired category is unavailable, respondents are
forced to choose an option that does not reflect their opinion. Thus, the results are invalid.
Another consideration is how lengthy and difficult the survey is. If it is excessively hard, fatigue will be a
factor in the later questions, since respondents will be anxious to finish. That is, respondents may hurry
through (or fail to answer) the final questions. If a sponsor is particularly worried about publication of the
result of a certain question, that question may be placed near the end for this very reason. Two-sided
questionnaires also face the problem of incomplete surveys caused by those who do not notice the back page.
ERRORS IN DATA ANALYSIS
Even if data is properly collected, it may be misinterpreted and/or misreported. Remember the
poorly-worded question: "Do you favor cutting government spending; for example, on defense?" This is
really two questions in one. The analyst cannot really know which stimulus prompted a response--spending in
general, or defense spending. A conservative analyst might report a positive response to this question as
evidence of support for a smaller government; a liberal analyst might report it as support for defense cuts.
Sophisticated polls often try to discover why pollees respond as they do. If additional questions are asked, this
may be possible. However, in their absence, reporters sometimes read too much into the raw data,
misinterpreting it. During the 1991 Senate hearings of Clarence Thomas, a poll revealed that a slight majority
of the public favored his confirmation to the Supreme Court. Upon noting this, one commentator concluded:
"Apparently, most people do not believe Anita Hill." This analysis ignores the many reasons why the pollees
responded as they did, such as the more obvious reason--they wanted him to be confirmed! It is entirely
possible that many did not know whether to believe Thomas or Hill, and made their judgement based on his
testimony, writings, and record, or even more likely, whether they thought he would vote the way they wanted.
This kind of analysis problem is especially likely when the question wording is oversimplified.
Some analysts imply that because a respondent chooses one answer, he is necessarily opposed to other
alternatives. In the aftermath of the Los Angeles rioting, pollsters asked the public what should be done. In
some polls, the pollee was only allowed one response, and the analyst then concluded that a certain percentage
of the public favored more effective policing, a certain percentage favored more social programs, a certain
percentage favored more investment in the infrastructure, etc. What this overlooks is that most people
probably favor a combination of many approaches.
Another analysis problem is caused by grouping response categories. For example, there may be five
categories: "always, almost always, usually, usually not, never"--in such cases, there is usually not a majority
opinion for any single category, so the results are subject to manipulation by analysts who then group them together
arbitrarily as if it were a yes/no question. In the above example, if 20% agreed with each of the five responses,
the analyst could then report that 80% did not say "never," or that 80% did not say "always." Alternatively,
one could report that "only 20%" always (or never) agreed. These reports are clearly misleading, and all too
often the audience is not aware of the grouping of responses.
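A toy calculation using the 20%-per-category figures above (a hypothetical sketch, not data from any actual poll) shows how grouping manufactures opposite-sounding headlines from one flat distribution:

    # Five response categories, 20% each, as in the example above.
    responses = {"always": 20, "almost always": 20, "usually": 20,
                 "usually not": 20, "never": 20}

    not_never = sum(v for k, v in responses.items() if k != "never")    # 80
    not_always = sum(v for k, v in responses.items() if k != "always")  # 80
    # The same data support both "80% did not say never" and
    # "80% did not say always."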
Finally, pollsters should be cautious in reporting opinions on issues on which the public has little knowledge.
Polling presumes the opinions of the majority are informed and meaningful. When they are not, polling is not
only misleading, but it compromises policy-makers' ability to do what they believe is right, as they watch the
polls and try to avoid offending constituents.
Do these criticisms mean opinion polling is worthless? Certainly not. Most of the abuses in political opinion
polling are done by interest groups and individual candidates, both of which the public tends to view with
healthy skepticism. On the other hand, reputable independent pollsters, such as Gallup, Harris, Roper, and
most news organizations are acutely aware of the above flaws in polling, and most go to great lengths to
eliminate them from their polls. In any case, the reader should be aware of some of the common pitfalls in
polling, and demand that those who report public opinion follow appropriate scientific polling methods.
Some Effects Of Polling On Politics
Certainly the advent of accurate polling has made elected officials more attentive to the opinion of their
constituencies. To that degree, it has given us a more responsive, democratic government. It should be
remembered, however, that polls inform elected officials of public opinion--they do not educate the public.
Polling tests one's faith in true democracy. After all, what is the value of uneducated opinion?
Polls generally dictate what must and must not be said in the political debate. In the case of Ronald Reagan,
whose pollster used pulse-buttons (the device mentioned in the introductory paragraph) to discover what key
phrases and buzz words were the most popular, polls sometimes determined the exact words to be used. At a
minimum, most candidates carefully study the polls before explaining their positions, and try to make
somewhat ambiguous policy statements that offend the fewest voters. Increasingly, their speeches
and advertisements are based on sophisticated polling methods rather than their positions, ideas,
qualifications, ideological values, and goals--the real substance of politics. During the 1992 Democratic
presidential primary campaign, Paul Tsongas often complained that the other candidates, including Clinton,
were taking positions solely on the basis of the survey data, especially concerning the popularity of the
middle-class tax cut, which he opposed. He seemed to pride himself on taking positions which were
unpopular with the voters. "I am not Santa Claus," he repeatedly said. His was an unusual candidacy.
This is not to suggest that most politicians slavishly bend all of their positions to fit opinion polls. Politicians
do not want to appear inconsistent or unwilling to stand up for their values, so polls do not necessarily dictate
positions to candidates. However, polls tell them what their strengths and weaknesses are, so that they may
stress certain issues and avoid others. Polls may also suggest what regions of the city, state, or country they
should campaign in, or which segments of the population they are weak with. They indicate what kinds of
arguments in support of their positions will be well received.
Polls do, however, make or break a candidate's campaign strategy. They anoint front-runners. (This is why it
is so important to lead in the early presidential party primaries.) They prematurely doom candidates at the
bottom of the pack, since most contributors will not give money to someone that they think has no chance.
(This is why presidential candidates usually drop out early in the primary campaign, as polls reveal their lower
ranking and money consequently evaporates.) Early 1992 polls of presidential candidate Ross Perot's
popularity led to a bandwagoning of support for his candidacy, even though most Americans knew very little
about him. At the time, most experts questioned the intensity of support for Perot, and expected it to fade
away as he became better known. However, Perot's candidacy was extremely unusual in that no other
independent or minor party presidential candidate since the advent of modern polling ever polled high enough
to make them appear "electable." Therefore, more people (19%) were willing to actually vote for him on
Election Day. Perot would probably have received even more votes if he had not pulled out and later reentered the campaign. In 1980, independent presidential candidate John Anderson polled about three times
higher than his actual 7% vote (and about twice that immediately before the election). Polling revealed that
many would-be Anderson supporters voted for Reagan or Carter, rather than "waste their vote."
Another common complaint is that news organizations' announcements of the results of exit polling (polls
taken of a sample of voters as they leave the voting area) before the polls close interferes with the election,
discouraging voters. In 1980, NBC announced Ronald Reagan's election at 5:30 p.m. (Houston time), far
before polls closed around the country. Such announcements probably do cause some voters to stay home, but
studies of the exact effect are inconclusive. Some theorize that announcements of winners cause a
bandwagoning effect and discourage those who intended to vote for the candidate who has apparently lost.
Others speculate that it is more likely to pacify voters who intended to vote for the candidate who has
apparently won, and cause a rallying effect for the underdog. In the case of the announcement of presidential
winners, it may also affect other down-ballot elections for state and local offices. Such announcements
probably cause a degree of both the bandwagoning effect and the [underdog] rallying effect, but they may
largely cancel each other's impact. Furthermore, after the 1980 criticism of NBC, most national pollsters do
not announce results until at least the California polls have closed (sorry, Alaska and Hawaii). At any rate, the
effect of exit polling is dwarfed by the effect of other polls taken in the last days before the election.
Office-holders must keep a close eye on public opinion polls, not only for re-election purposes, but because
unpopular leaders are not very effective politically. On most issues, the public is largely apathetic and neutral;
polls produce a Bell curve, with most of the public in the middle of the curve, answering "no opinion,"
"undecided," or "don't know." For these issues, the office-holder is free to be a trustee.
On issues of high saliency (salient means prominent or conspicuous), for which there is a large attentive
public, there are fewer neutral opinions. Therefore, polls concerning highly salient issues usually produce
either a J curve or a U curve (see graphs that follow). A polarization of opinion concerning a controversial
issue produces the U curve, whereas the J curve represents great consensus. (Note that the J curve could also
resemble a backward J.)
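Since the graphs themselves are not reproduced here, a rough sketch (all percentages invented for illustration) suggests what such distributions might look like over a five-point scale:

    # Hypothetical shares over a five-point scale, from one extreme
    # position to the other (all figures invented for illustration).
    bell = [5, 15, 60, 15, 5]    # low salience: most cluster in the middle
    j_curve = [70, 15, 8, 4, 3]  # high salience, broad consensus
                                 # (a "backward J" is its mirror image)
    u_curve = [45, 8, 4, 8, 35]  # high salience, polarized opinion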
Politicians love J curve issues (e.g. flag-burning, foreign aid, early release from prison, tax increases,
drug legalization), because they can support the popular position as a good delegate should. Issues that
produce U curves (e.g. abortion, affirmative action, gambling, gun control, education reform), on the other
hand, are the stuff of nightmares, since no matter what position the politician takes, a large part of the public
will be upset and demand his or her head in the next election.
Skillful politicians try to transform U curve issues into J curve issues. For example, Douglas Wilder, in his run
for the Virginia governorship, hurdled the abortion question by saying that although he was personally
opposed to abortion, he did not like the idea of government interfering in people's personal lives, dictating
private decisions. In this way he appealed to most people on both sides of the abortion U curve, by using the
privacy J curve.
In addition to knowing whether the curve resembles a Bell, J, or U, a skillful politician will need to judge
intensity as well, which is a much more difficult task. Many polls come dangerously close to creating news
rather than simply reporting it. In other words, the skillful politician must judge which polls report intensely held views based on thoughtful public opinion, and which polls just report unreliable, transitory opinions
based on apathy and ignorance. Less-intense opinions are ripe for change by courageous opinion leaders who
are willing to take chances by swimming against the tide of public opinion, such as in the case of Nixon's
detente with the USSR and China or Carter's Panama Canal Treaty. Leadership by the president is sometimes
enough to reverse public opinion overnight. For this reason, after the war against Iraq, many commentators
urged Bush to expend some of his political capital (his temporarily high popularity rating) by undertaking
some difficult domestic policy initiatives.
Ironically, perhaps the greatest criticism of opinion polling is that it is all too accurate. It is so accurate that it
dominates our political processes, for better or worse. It sometimes reduces politicians to pandering and
demagogic appeals to poorly-informed voters. It may provide the rationale for office-holders to vote in a
manner which they themselves believe to be unwise or irresponsible, but reflects "the will of the people."
Politicians who are true opinion leaders are rare. Candidates who are trailing their opponents in the polls
often say they "do not believe in polls." Rest assured, they do...