Political Polling: 95% Expertise and 5% Luck
Author(s): Robert Worcester
Source: Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 159,
No. 1 (1996), pp. 5-20
Published by: Wiley for the Royal Statistical Society
Stable URL: http://www.jstor.org/stable/2983464
Accessed: 15-11-2016 08:24 UTC
J. R. Statist. Soc. A (1996)
159, Part 1, pp. 5-20

Political Polling: 95% Expertise and 5% Luck

By ROBERT WORCESTER†

Market & Opinion Research International Ltd, London, UK

[Read before The Royal Statistical Society at a meeting on 'Opinion polls and general elections' on Wednesday, June 14th, 1995, the President, Professor A. F. M. Smith, in the Chair]
SUMMARY
The record of British election polls was good, until the general election of 1992. The
Market Research Society's inquiry into the performance of the polls in 1992 found
inadequacies in the implementation of the sampling system used, evidence of a late swing
and some reluctance of Conservative supporters to reveal their loyalty; but it generally
endorsed the principle of well-conducted quota polling and found that variations in
methodological detail had nil effect on the results. The evidence is presented and some
possible future developments to counter the 'spiral of silence' are discussed.
Keywords: MARKET RESEARCH SOCIETY INQUIRY REPORT; 1992 GENERAL ELECTION; NON-RESPONSE BIAS; OPINION POLLS; PUBLIC OPINION; QUOTA SAMPLING; SPIRAL OF SILENCE; SURVEYS; TELEPHONE POLLS
1. INTRODUCTION
Price (1992) in his book Public Opinion argues that opinions and attitudes have been
said to differ conceptually in at least three ways. First, opinions have usually been
considered as observable verbal responses to an issue or question, whereas an
attitude is a covert psychological predisposition or tendency. Second, although both
attitude and opinion imply approval or disapproval, the term attitude points more
towards affect (i.e. fundamental liking or disliking), opinion more towards cognition
(e.g. a conscious decision to support or oppose some policy, politician or political
group). Third, and perhaps most important according to Price, an attitude is
'traditionally conceptualised as a global, enduring orientation toward a general class of
stimuli, but an opinion is viewed more situationally, as pertaining to a specific issue in a
particular behavioural setting'.
Opinions, in my view, are those low-salience, little-thought-about reactions to pollsters' questions about issues of the day: easily manipulated by question wording or the news of the day, not very important to the respondent, not vital to their well-being or that of their family, and unlikely to have been the topic of discussion or debate between them and their relations, friends and work-mates. For most people these would normally include their satisfaction with the performance of the parties and their leaders, economic optimism and pessimism, and the salience of the issues of the day, but not voting intention.
Attitudes derive from a deeper level of public consciousness, are held with
conviction and are likely to have been held for some time and after thought,
discussion, perhaps the result of behaviour (Festinger's cognitive dissonance), and
†Address for correspondence: Market & Opinion Research International Ltd, 32 Old Queen Street, London, SW1H 9HP, UK.

© 1996 Royal Statistical Society 0035-9238/96/159005
harder to confront or confound. Examples of these are the Scots' support for a
separate assembly, held with force over generations and based on strong beliefs that
they are not fairly represented either in Parliament or in our system of government,
perhaps attitudes to the taking of medicines or exercise, forms of education, local
authority service delivery for services used frequently and by large percentages of
citizens such as rubbish collection, streetlighting and schools. With some, voting
intention derives from their attitudes; with most, from their values.
Values are the deepest of all, learned parentally in many cases, and formed early in
life and not likely to change, only to harden as we grow older. These include belief in
God, attitudes to religion generally, views about abortion or the death penalty,
family values and the like. For many people, these include voting intention, learned
in childhood from parental influence, class and peer group identification.
I define an opinion poll as 'a survey of the views of a representative sample of a
defined population' (Worcester, 1991). What polls (and I use the term more or less
interchangeably with surveys, although there are those who use 'polls' only to describe
'political' soundings) cannot tell us well are likely future actions of the public
generally and especially the future behaviour of individuals. They are better at telling
us what, rather than why (although the latter is the principal function of qualitative
research and especially focus groups, which concentrate on the interaction of the
group rather than the question and answer, 'expert'-on-respondent, format of the
individual depth interview). Polls are not particularly good at exploring concepts
that are unfamiliar to respondents, nor are they good at identifying the behaviour,
knowledge or views of tiny groups of the population, except at disproportionate
expense. They are useless in telling us much about the outcome of an election weeks
or even months or years in the future. Nevertheless, the voting intention question is
valuable for what it summarizes about people's attitudes and values at the moment.
Polls are a marriage of the art of asking questions (Payne, 1980) and the science of
sampling. The conduct of survey research is quite a simple business really; all that
needs be done is to ask the right sample the right questions and to add up the figures
correctly, the latter task including proper analysis and reporting.
2. PUBLIC OPINION IN A DEMOCRATIC SOCIETY
David Hume said in 1741 that 'All government rests on public opinion', and
Aristotle stated centuries earlier that 'He who loses the support of the people is a king
no longer'. Somewhat more recently Abraham Lincoln said that 'Public opinion is
everything' and went on to avow that he saw his role as the elected leader of the USA
as finding out what his electorate wanted and, within reason, giving it to them.
Burke's 1774 belief (Hoffman and Ross, 1970) that his vote in Parliament was
determined by his own judgment rather than the wishes of his Bristol electors is often
quoted admiringly by those who would dismiss poll findings; seldom recalled is that
in the following election his electors chose another to represent them in Parliament!
Nor is it recalled that in 1780, after his defeat, he said that 'The people are the
masters' and he wrote that
'No man carries further than I do the policy of making government pleasing to the people.
I would not only consult the interest of the people, but I would cheerfully gratify their
humours.'
One important point that Lippmann (1922) made is that inevitably our opinions
cover a bigger space, a longer reach of time and a greater number of things than we
can directly observe. They have, therefore, to be pieced together out of what others
have reported and what we can imagine.
We do not measure truth, we measure perceptions. But the reality of the world of
public policy, as well as the media and industry, is that it is perception, not fact, that
determines public opinion.
My position on the role of polls, surveys and assessment of public opinion is one
not of advocacy of any particular policy, subject or topic, but of the provider of
both objective and subjective information, obtained systematically and objectively,
analysed dispassionately and delivered evenly (Worcester, 1981).
3. THE ART OF ASKING QUESTIONS
A good question should be relevant to the respondent, easily understood by the respondent, unambiguous in meaning (meaning the same thing to the respondent, the researcher and the client for the results), related to the survey objectives and not influenced in any untoward way by the context of the questioning.
4. THE SCIENCE OF SAMPLING
I arrived in the UK some 26 years ago not knowing anything but two-stage
probability sampling, and my institute, Market & Opinion Research International
(MORI), does more than a third of its survey research by using probability samples.
But we also believe that, in the heat of a general election, we are worse, not better,
served by the application of probability techniques. The principal reason is time, not
cost, although cost is a factor. Probability samples given less than a week to conduct
fieldwork rarely rise above a response rate in the 50s; even three weeks' fieldwork
achieves samples in the low 60s, and the 70s is thought excellent by most government
agencies who waste, in my view, millions of pounds of tax-payers' money by insisting
on strict probability samples with many repeated attempts to contact and interview
the designated respondents when looser approximations would suffice for the public
policy purpose in mind. After all, even if precision to the usual 95% confidence level
could be guaranteed, how many times does the Minister care whether the finding is
77% or 73%, so long as he or she knows that about three-quarters of the electorate
favour the policy?
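The Minister's '77% or 73%' example can be made concrete with the usual normal-approximation confidence interval; a minimal sketch assuming a simple random sample (the function name is mine, not the paper's):

```python
import math

def ci_half_width(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

w = ci_half_width(0.75, 1000)
# about 0.027: a poll of 1000 pins 'about three-quarters' down to +/- 2.7 points
```

So even a modest quota sample answers the Minister's real question; the extra precision bought by a strict probability design rarely changes the policy conclusion.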
Certainly, if the proof of the pudding is in the eating, then the test of five elections
in the 1970s, comparing nine probability 'forecasts' with election results, with an
average 'error' of 6.3%, was hardly to be preferred to the 3.0% error of the 19 quota
samples reported simultaneously (Table 1).
And specifically, at the 1975 European Economic Community (EEC) referendum,
when the answer was a straightforward yes or no, four out of the five quota samples
were within 1% whereas the probability sample was far out, in an election where
the tracking polls showed that there was virtually no shift in the opinions of the
electorate over the three-week election period.
But perhaps the best test that is independent of an election with its possibility of
late swing was at the time of the American bombing of Libya when three polls
conducted independently, with different sampling points, different interviewers and
different question wordings, found virtually identical results.
TABLE 1
Comparative record of random and quota polls in British elections
Election               Average error (%) in lead for the following types of poll†:
                       Random polls       Quota polls
1970                   7.3 (3)            5.4 (2)
February 1974          3.7 (2)            1.7 (4)
October 1974           8.0 (2)            3.5 (4)
1975 (referendum)      10.0 (1)           3.9 (5)
1979                   1.6 (1)            1.7 (4)
Mean error 1970-79     6.3                3.0

†The number of polls is given in parentheses.
TABLE 2
Accuracy of the final polls, 1945-92
Year             Mean error gap (%)   Average error per party (%)   No. of polls
1945             3.5                  1.5                           1
1950             3.6                  1.2                           2
1951             5.3                  2.2                           3
1955             0.3                  0.9                           2
1959             1.1                  0.7                           4
1964             1.2                  1.7                           4
1966             3.9                  1.4                           4
1970             6.6                  2.2                           5
1974, February   2.4                  1.6                           6
1974, October    4.5                  1.4                           4
1979             1.7                  1.0                           4
1983             4.5                  1.4                           6
1987             3.7                  1.4                           6
Average          3.3                  1.4                           51
1992             8.7                  2.7                           5
5. WHAT THE 1992 BRITISH GENERAL ELECTION TAUGHT
BRITISH POLLSTERS
What is the record of the British polls over the years? On the whole, good (Table 2), until 1992! Some may quibble, but their record, especially over the past 20 years, has been very good indeed. In 1983, all six were within sampling tolerance, given the sample size, three within ±1%. In 1987, all six were within ±1% on average, for the share of each party.
Over the 13 British general elections since the war until now, and over the period
of election polling, until now, polls have performed, on average, about as well as
could be expected. The margin of error on the gap should be approximately double
that of the share, according to sampling theory, and indeed it is, according to the
empirical evidence of 51 election forecasts over 42 years.
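The claim that the margin of error on the gap is roughly double that on a single party's share follows from the multinomial covariance between two estimated shares. A minimal sketch under a simple-random-sampling assumption (the function names are mine):

```python
import math

def se_share(p, n):
    """Standard error of one party's estimated share."""
    return math.sqrt(p * (1 - p) / n)

def se_lead(p1, p2, n):
    """Standard error of the estimated lead p1 - p2.

    Shares from one multinomial sample are negatively correlated,
    Cov(p1_hat, p2_hat) = -p1 * p2 / n, which inflates the variance
    of their difference.
    """
    return math.sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)

n = 1000
s = se_share(0.40, n)        # about 0.0155
g = se_lead(0.40, 0.40, n)   # about 0.0283, close to double the share SE
```

With two parties near 40% each the ratio is about 1.8, i.e. 'approximately double', consistent with the roughly 2:1 empirical ratios the tables report.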
TABLE 3
Eve-of-poll predictions, April 1992
                           Results from the following polls:
                           Harris          MORI           NOP            Gallup         ICM        General election
Sample                     2210            1731           1746           2478           2186
Fieldwork                  4th-7th April   7th-8th April  7th-8th April  7th-8th April  8th April  9th April
Conservative vote (%)      38              38             39             38.5           38         42.8
Labour vote (%)            40              39             42             38             38         35.2
Liberal Democrat vote (%)  18              20             17             20             20         18.3
Other parties (%)          4               3              2              3.5            4          3.7
Conservative lead (%)      -2              -1             -3             0.5            0          7.6
Error on lead (%)          9.6             8.6            10.6           7.1            7.6        8.7 (average)
Error on share (%)         2.55            2.75           3.4            2.25           2.4        2.67 (average)
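The two error measures at the foot of Table 3 can be reproduced directly from its figures; a sketch using MORI's column as the worked example (the dictionary layout is mine):

```python
# Actual 1992 result and MORI's eve-of-poll shares, from Table 3
result = {"Con": 42.8, "Lab": 35.2, "LD": 18.3, "Oth": 3.7}
mori   = {"Con": 38.0, "Lab": 39.0, "LD": 20.0, "Oth": 3.0}

# Error on lead: the poll's Con-Lab gap versus the real 7.6-point gap
error_on_lead = abs((mori["Con"] - mori["Lab"]) - (result["Con"] - result["Lab"]))
# -1 versus 7.6 gives 8.6 points

# Error on share: mean absolute error across the four party shares
error_on_share = sum(abs(mori[p] - result[p]) for p in result) / len(result)
# (4.8 + 3.8 + 1.7 + 0.7) / 4 = 2.75 points
```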
About a quarter of the polls since 1945 have been able to estimate the Labour and Conservative share of the vote to within ±1%, and over 50% have forecast to within ±2%. Of these 51 polls, 29 have overestimated the Tory share of the popular vote and 29 overestimated the Labour share; 20 underestimated the Tory share and 19 underestimated the Labour share. There is no sign of a bias.
On average they have marginally underestimated the Tory lead over Labour, but as polls do not measure postal voters, and these are generally agreed to favour the Tories (perhaps by as much as 3:1), the direction of these averages is not surprising; in any case the slight bias has been only about 4% in each direction, or an error in the lead of 1%.
6. THE 1992 BRITISH POLLS GOT IT WRONG!
But the polls in the 1992 British general election have much to answer for: for one thing, the most exciting election night in a long time! The five major pollsters, Gallup,
Harris, International Communications and Marketing (ICM), MORI and National
Opinion Polls (NOP) averaged a 1.3% Labour lead; the final result was a 7.6% lead
for the Conservatives (Table 3). The polls backed the wrong winner, by a wider
margin than can be accounted for by sampling error, and were widely criticized for
their inability to perform to expectation.
7. CAMPAIGN POLLS
What were these polls showing? They showed quite a steady picture, steady across
polling organization, and, generally, over time. They certainly were not out of line
with the final polls; indeed, on average they found a bigger Labour lead. Whatever
problems affected the final polls affected the earlier campaign polls as well. The
campaign tracking polls were also broadly in line with the two national panel studies
that were published at the same time, and the results of all of these face-to-face polls
were matched by the small number of telephone polls which were conducted.
8. WHAT WENT WRONG?
Much space in the newspapers was given over after the general election to
'Pollsters admit election mistakes' (April 30th, 1992) and 'The invisible voters who
fooled the pollsters' (May 1st, 1992) in The Independent, as well as to the various
letter-writers' expressions of their own prejudices.
Just a day or two after the election, letter-writers and critics alike were dismissing
any 'late swing lame excuses' from the pollsters. There were many pundits' and
psephologists' commentaries and readers' letters in The Times, The Daily Telegraph,
The Independent, The Guardian, The Financial Times and elsewhere following the
election, expressing various opinions why the opinion polls 'got it wrong' on April
9th. They were all good fun. Their suggestions are set out and, mostly, easily
refuted in the next few sections.
All the polling companies have, naturally, carried out their own reviews of the
problems and an exhaustive two-year inquiry by a committee set up by the Market
Research Society (MRS), consisting of pollsters, academics and other experienced
market researchers who were not involved in opinion polling, was reported in July
1994. Their conclusions and recommendations have a great many lessons for both
pollsters and poll readers (Market Research Society Working Party, 1994).
9. SAMPLE SIZE
One of the most popular reactions to the polls' failure was to blame the sample
size: it is manifestly apparent that the sample size is not the key to the quality of
findings. The ICM poll taken in the third week of the campaign, among a sample of
10460 electors in 330 sampling points and taken over nearly a week including a
week-end, was almost identical in its voting intention with the findings of a smaller
poll conducted over the same period. Interviewing on March 31st-April 3rd, ICM recorded 36% for the Conservatives, 39% for Labour and 20% for the Liberal Democrats. At the same time, on March 31st-April 1st, NOP interviewed 'only' 1302 electors in 83 sampling points and found 37%, 39% and 19%.
The fact that the final polls of the campaign, all taken a day or two before polling day, were so consistent (all five had the Tories at 38% ±1%, Labour at 40% ±2% and the Liberal Democrats at 18% ±2%) suggests that neither sample size nor sampling variation was responsible for the magnitude of the difference between the final polls and the actual result. Despite this, the British public remain unconvinced:
a Gallup survey in April 1992 found 63% of British adults saying that it is not
possible 'from a survey of 1000 or 2000 people ... to give a reliable picture of British
public opinion'. This is one area in which the pollsters' biggest challenge is probably
to educate the British public.
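On that educational point, the reason extra sample size above about 1000 buys so little is that sampling error shrinks only with the square root of n. A sketch, assuming simple random sampling and the worst case p = 0.5 (the function is mine):

```python
import math

def moe(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

small = moe(1302)    # NOP's sample: about 2.7 points
large = moe(10460)   # ICM's mega-sample: about 1.0 point
```

An eightfold increase in sample size cuts the margin of error by less than a factor of three, which is one way to see why the 10460-elector and 1302-elector polls agreed so closely.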
10. MILLIONS OF EXPATRIATE TORY VOTERS?
Another letter-writer claimed government by fraud, reporting that one of the first
Acts of the 1987 Conservative government was to give the postal vote to 2 million
expatriate Britons, but ignored the fact that only 34454 of these expatriates
registered to vote. At an average of 53 voters per constituency, they could have made some difference, but not that much: the registered overseas electorate exceeded the majority in only two seats.
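The arithmetic here is easy to check; the seat count of 651 below is my assumption (the text implies roughly 650 constituencies), not a figure from the paper:

```python
registered_expats = 34454       # expatriates who actually registered
constituencies = 651            # assumed 1992 seat count
per_seat = registered_expats / constituencies  # about 53 voters per constituency
```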
11. THE MENDACIOUS BRITISH PUBLIC?
A fellow letter-writer spoke up confidently to report that there was 'no eleventh-hour swing to the Conservatives', and reported himself to be in a state of profound depression and dismay; no wonder he is depressed, if he is convinced that the British (against all my experience to the contrary) are 'a nation of liars'. One academic critic even titled his paper with implied agreement with this hypothesis (Crewe, 1992a).
Other letter-writers also suggest that millions of people lied to interviewers.
In the case of the three panel studies that were undertaken and which had comparable results both before and after the election, this would involve repeated and
consistent lying. In the campaign tracking polls, it would have to assume that
respondents lied consistently on a bank of other, non-voting, questions that they
answered as well, since there is no evidence of inconsistency between voting and
attitudinal questions. The MRS inquiry found no evidence of widespread lying.
12. RANDOM SAMPLING MYTH
The leader in The Independent on April 30th raised another point, alleging that 'Random polls have proved more accurate' (than quota polls). This is an enduring myth among the polls' critics, but it is in fact in error. British election polls conducted by random sampling during past national elections where both methods were used (including the 1975 EEC referendum) were less, not more, accurate than those conducted by quota, owing to poor compilation of registers and late swing after the bulk of the interviewing had been done.
13. INTERVIEWING IN STREET
Nor is there any evidence that other variations in the sampling design would have
improved matters. Differences in interviewing procedures played no role. Although
most of the polls were conducted in street, a few were conducted partly or exclusively
in home. Nearly identical poll results were obtained (Table 4).
14. TELEPHONE POLLING
Although telephone polling in British elections had some problems in its early
days, in 1992 the small number that were published found results that were entirely
comparable with those from the face-to-face polls (Table 5).
TABLE 4
Comparison of in-street and in-home polls: average party shares†

Poll                    Average share (%) for the following parties:      Conservative lead (%)
                        Conservative   Labour   Liberal Democrat   Other
In street               38             40       18                 5      -2
In street and in home   38             40       18                 4      -2
In home                 39             40       17                 5      -1

†Source: Market Research Society Working Party (1994).
TABLE 5
Comparison of phone polls and face-to-face polls†

Poll                      Average share (%) for the following parties:      Conservative lead (%)
                          Conservative   Labour   Liberal Democrat   Other
Phone polls (7)           38.8           39.5     17.3               4.4    -0.7
Face-to-face polls (47)   38.2           40.0     17.7               4.1    -1.8

†Source: Market Research Society Working Party (1994).
TABLE 6
Comparison of one-day, two-day and longer polls†

Poll                      Average share (%) for the following parties:      Conservative lead (%)
                          Conservative   Labour   Liberal Democrat   Other
All polls (54)            38             40       18                 4      -2
All one-day polls (15)    38             40       17                 5      -2
All two-day polls (24)    38             40       18                 5      -2
All longer polls (15)     38             40       18                 4      -2

Excluding telephone polls (47)
One-day polls (13)        38             40       17                 5      -2
Two-day polls (23)        38             40       18                 4      -2
Longer polls (11)         38             40       18                 4      -2

Excluding telephone polls and later waves of panel studies (44)
One-day polls (13)        38             40       17                 5      -2
Two-day polls (23)        38             40       18                 4      -2
Longer polls (8)          39             40       17                 4      -1

†Source: Market Research Society Working Party (1994).
15. ONE-DAY, TWO-DAY AND LONGER POLLS
There was no difference in results between those polls conducted as 'one-day quick
polls' and those with fieldwork spread over a longer period (Table 6). Nor was there
any evidence that it made any difference whether or not fieldwork periods included a
week-end (Table 7).
16. SELECTION OF SAMPLING POINTS
But what about the actual constituencies sampled? If they had happened to be
biased to Labour, that might have caused an error. The MRS inquiry compared the
results in 1987 and 1992 in each polling company's selection of constituencies with
the national result. It found that four of the five companies had selections that were
very slightly biased to Labour in 1992, and also that the same bias had existed in
those constituencies in 1987- it was not simply bad luck in picking constituencies
which swung unusually. But the average error was barely 1% on a Conservative lead
TABLE 7
Week-end and weekday polls†

Poll                                          Average share (%) for the following parties:      Conservative lead (%)
                                              Conservative   Labour   Liberal Democrat   Other
All polls (54)                                38             40       18                 4      -2
With some Saturday or Sunday fieldwork (17)   38             40       18                 4      -2
With no Saturday or Sunday fieldwork (37)     38             40       18                 4      -2
With some Saturday fieldwork (14)             39             39       18                 5      0
With some Sunday fieldwork (10)               39             39       18                 4      0

†Source: Market Research Society Working Party (1994).
over Labour, and most of the potential effect should have been corrected by the
operation of the quota system to ensure a representative social profile. Besides, the
company whose selection of constituencies was representative fared no better than
the others in 'predicting' the final result. Plainly this was not a major cause of the
error.
17. LATE SWING
The MRS inquiry isolated three root causes of the discrepancy between the poll
findings and the final result. The first of these was late swing. Polls are snapshots at a
point in time, and that point is when the fieldwork was done, not when the results
were published. If after that voters change their minds, the 'don't knows' decide to
vote after all and one party's supporters become so apathetic that they stay at home,
the polls will be wrong.
It has become fashionable in certain quarters to decry the late swing explanation
(Clifford and Heath, 1994), and even to suggest that there was no movement of
opinion throughout the campaign. Late swing was not the only problem (or the exit
polls should have been accurate), but no-one at the time doubted that it was
happening. Forgotten now are the headlines on election day: 'Late surge by Tories
closes gap on Labour in final hours of campaign' was the banner in The Times; 'Tory
hopes rise after late surge' was the headline over the splash in The Guardian. In The
Daily Telegraph it was 'Tories narrow gap', and in the Financial Times the banner
read 'Opinion polls indicate last-minute swing from Labour to Tories' whereas the
Daily Express trumpeted 'Tory surge: polls show late boost for Major'.
Labour's peak came on the day of its Sheffield rally, eight days before polling day,
with published leads sufficient to give them an overall majority. But Labour's
triumphant rally proved the beginning of the end for Labour and its leader Neil
Kinnock. From that point on, it seems to have been downhill. The Conservatives
spent nearly all their advertising money in the final three days (at an annualized
weight greater than the spending of Procter & Gamble or Unilever on soap powder),
levelling all their guns at the Liberal Democrats' voters 'letting Labour in'. The
testimony of the Liberal Democrats' campaign manager was that this did great
damage to their support in the final hours of the campaign. The Tory tabloids did all
that they could on their front pages, as well as the leader columns, to ensure that the
Conservatives were returned to power. Right to the end of the election, the proportion of 'floating voters' was higher than ever before.
18. THE CLUES WERE THERE!
In retrospect, it might have been possible to guess that a late swing to the Tories
was on the cards. The polls themselves unearthed several clues.
One clue was the image of the leaders. There is a tradition among political
scientists in Great Britain to explain patiently to visiting American academics,
psephologists, pollsters, politicians and political journalists that the main difference
between British and American elections is that in British elections policies, not
people, are the deciding factor. Indeed, MORI research over the past decade has shown a fairly steady 50%-30%-20% split between voting decisions conditioned mainly by policies, by leader image and by party image. Yet during the 1992 election one striking finding stood out: throughout the election, from the beginning to the end, John Major led Neil Kinnock by from 9% to 13% as the 'most capable'
Prime Ministerial candidate. With hindsight, I believe that this question and its
replies consistently gave a guide to the outcome; we were just not sufficiently smart to
see it clearly, although The Times's Robin Oakley focused an entire article on this
early in the campaign (Oakley, 1992).
Another clue was that when asked
'Which two or three issues will be most important to you in helping you to make up your
mind on how to vote?',
the National Health Service (NHS), education and unemployment (all Labour issues) led the field. Yet the one time that a different question was asked,
'How much will you be influenced, if at all, in the way you vote at the General Election by
the policies of the political parties on taxation?',
no fewer than 39% of the electorate said that their vote would be influenced 'a great
deal': no lies, no prevarication from this four in 10 of the electorate. They told us
plainly: 'a great deal', they said. Did we hear sufficiently clearly?
Another key finding was in the post-election wave of the MORI panel survey for
the British Broadcasting Corporation's (BBC's) 'On the record', a telephone recall on
people interviewed throughout the campaign, by the MORI telephone survey subsidiary, On-Line Telephone Surveys. The panel consisted of floating voters: those who, after expressing a voting intention, said that they might change their minds between the time interviewed and polling day, plus the full category usually described as the 'don't knows' (the 10% who said that they would not vote, the 4% who were 'undecided' and the fewer than 1% who refused to say how they intended to vote). In all, the 'floaters' represented about 38% of the base-line survey from which the panellists were recruited. On-Line reinterviewed 1090 of
the panel on the Friday and Saturday following the election.
When voters were asked
'Thinking about the way you voted, which was stronger, your liking the party you voted
for or your dislike of the other parties?',
a majority, 55%, said that it was antipathy and only 37% liking the party that they
voted for, among the floating voters in the panel. Panellists also said, by more than 3:1 (56% to 18%), that the Conservative Party rather than Labour could best handle the economy generally, and by 4:1 (48% to 12%) they regarded John Major, rather than Neil Kinnock, as the most capable Prime Minister. Perhaps we should have taken
these factors into account before the election, in which case we would have been less
surprised by the result.
Of course, the economy is also vital. During the 'autumn election boom' in 1991,
Independent Television News had rung us to find out what the economic optimism
index (EOI) was for September. 'Why?', a colleague asked them. 'Because Central
Office [Conservative Party headquarters] says they'll call the election if it goes over
15, as they did in '83 and '87', was the reply. It was checked, and they had! The 1992
election was finally called with the index negative. But it was little noticed, except
possibly by Central Office and in an article by Ivor Crewe in The Times (Crewe,
1992b), that in our survey taken on March 16th a sharp reversal had taken place in
the national mood, and the -2% EOI recorded in February had been replaced by
+15%, with 36% expressing optimism and only 21% gloom.
Another clue came late in the campaign, but replicated findings of a year and more
earlier. When asked whether they thought a hung Parliament would be good or bad
for Britain, a majority, 56%, said 'bad', and they included an astonishing 44% of
Liberal Democrats. It was certainly this group who were worried by the Tories'
attack on the Liberal Democrats and the prospect of a hung Parliament in which Mr
Ashdown held the balance of power. Certainly captains of industry were in no doubt:
MORI fieldwork for The Financial Times at the beginning of the campaign and
certainly at the end proved that. In the first wave of the 'Captains' panel, six in 10
main board directors of the nation's largest companies said that a hung Parliament
would be bad for their business; by April 6th-7th, when just such an outcome seemed
the most likely, three-quarters said that it would be bad.
A majority of electors, 52%, were worried by the belief that the Tories would privatize the NHS, including 26% of the Conservatives' own supporters. But, at the same time,
seven in 10 of the public, including nearly half, 47%, of Labour supporters, believed
that most people would pay more taxes under a Labour government.
Another clue was the degree of volatility in the electorate. One thing that the
snapshot polls do not show is the movement of the electorate between parties, which
largely cancelled itself out, but which reveals the remarkable changeability of the
electorate's political allegiance. The panel surveys provided this information.
The most instructive analysis should come from the MORI panel study conducted
for The Sunday Times. The panel was interviewed face to face four times during the
campaign, and then reinterviewed after the election by telephone by FDS, an
independent market research firm who were subcontracted to undertake the recalls
(as the MORI telephone survey subsidiary, On-Line Telephone Surveys, was fully
stretched with recalls on the panel for the BBC). FDS, on MORI's behalf, contacted
934 panellists on Friday, April 10th, between 10 a.m. and 9 p.m. This represented a
60% recall of the original panel, which is quite good for a one-day recall, especially
when you bear in mind that about 10% of the original 1544 panel were unavailable
because they were not on the telephone. The data were of course weighted to both the
demographic and the political profile of the original panel.
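The weighting described here amounts to simple cell weighting: each recall respondent receives a weight equal to the target cell share (from the original panel) divided by the achieved cell share in the recall. A minimal sketch, in which the cells and all figures are invented for illustration rather than being MORI's actual data:

```python
# Cell weighting sketch: recall respondents are weighted so that their
# joint (sex, vote) profile matches the original panel's. All cells and
# figures here are hypothetical, not MORI's actual data.
from collections import Counter

# Hypothetical target profile: share of the original panel in each cell.
original_panel = {("M", "Con"): 0.20, ("M", "Lab"): 0.18, ("M", "Oth"): 0.12,
                  ("F", "Con"): 0.22, ("F", "Lab"): 0.17, ("F", "Oth"): 0.11}

# Hypothetical recall sample: one tuple per respondent recontacted.
recall = ([("M", "Con")] * 150 + [("M", "Lab")] * 130 + [("M", "Oth")] * 60
          + [("F", "Con")] * 140 + [("F", "Lab")] * 120 + [("F", "Oth")] * 60)

n = len(recall)
achieved = {cell: count / n for cell, count in Counter(recall).items()}

# Weight = target share / achieved share, applied respondent by respondent.
weights = [original_panel[cell] / achieved[cell] for cell in recall]

# After weighting, the Conservative share of the recall sample equals the
# original panel's: 0.20 + 0.22 = 0.42.
weighted_share_con = sum(w for cell, w in zip(recall, weights)
                         if cell[1] == "Con") / sum(weights)
print(round(weighted_share_con, 2))
```

In practice the political dimension of the weighting would use recalled vote from the panel's own earlier waves rather than a two-way cell as here; the sketch shows only the shape of the calculation.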
The MORI-Sunday Times panel recall found that only 63% said that they had
made up their minds before the election had been called, down nearly 20% from the
TABLE 8
Changes of mind during election campaigns 1979-92†

Year   % of electorate switching   % of electorate switching to or   Total % of electorate
       between main parties        from 'others' or 'don't knows'    changing answers
       during campaign             during campaign
1979              5.6                          6.9                        12.5
1983              7.8                          7.1                        14.9
1987              8.4                         10.1                        18.5
1992              9.4                         11.6                        21.0

†Source: MORI-Sunday Times panels.
more usual 80% that was measured in previous elections. And, as noted below, 8% said that they had made up their minds only in the last 24 hours, and 21% during the last week of the campaign.
As The Sunday Times reported week after week, the amount of movement in the electorate (people who switched from one party to another, 'switchers', or in and out of 'don't know', 'churners') was higher than ever before; as reported in the final article, about 11.1 million of the 42.58 million electors changed their minds during the campaign (Table 8). The week before polling day the panellists' stated intentions amounted to a Labour lead of one point; when they were reinterviewed after the election, their actual votes indicated a 2½% swing. (The other national panel, conducted by NOP for the Independent on Sunday, found an even bigger swing, 4%, over the same period.)
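The 'swing' quoted here is the conventional two-party (Butler) measure: the mean of one main party's gain and the other's loss in percentage share. A minimal sketch, using illustrative shares rather than the panel's actual figures:

```python
# Conventional two-party (Butler) swing between two readings, in points.
# A positive value is a swing to the Conservatives.
def butler_swing(con_before, lab_before, con_after, lab_after):
    return ((con_after - con_before) + (lab_before - lab_after)) / 2.0

# Illustrative: a 1-point Labour lead (Con 39, Lab 40) turning into a
# 4-point Conservative lead (Con 42, Lab 38) is a 2.5-point swing.
print(butler_swing(39.0, 40.0, 42.0, 38.0))  # 2.5
```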
Perhaps more relevant, however, is the proportion of the panel that had already switched allegiance during the campaign: 21% even before the final week, far more than ever before.
These figures suggest that 1992 saw an electorate who made up their minds later,
and shifted their ground in greater numbers, than measured before. After an extensive examination the MRS inquiry concluded:
'After the final interviews there was a . . . swing to the Tories. It seems likely this was the
cause of a significant part of the final error.... We estimate that late swing ... probably
accounted for between a fifth and a third of the total error'
(Market Research Society Working Party, 1994). On the evidence of the panel, the
proportion may have been even higher, perhaps as much as half.
What does this imply for future elections? It will be dangerous to try to double-guess the electorate and to correct the figures to reflect changes of mind before they
happen. However, it is probably possible to construct models of behaviour to allow
for those 'don't knows' who vote, and certainly all possible methods need to be used
to correct for differential turn-out by using certainty of voting questions. (MORI did
use such a question in 1992, with those not 'certain to vote' excluded from the final
poll figures, which reduced but probably did not entirely eliminate error from
differential turn-out.)
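The certainty-of-voting filter described here amounts to tabulating voting intention only over respondents who say they are certain to vote. A minimal sketch with hypothetical field names and data (MORI's actual question wording and cut-off are not reproduced):

```python
# Sketch of a certainty-of-voting filter: voting intention is tabulated
# only over respondents who say they are certain to vote. Field names
# and responses are hypothetical.
from collections import Counter

respondents = [
    {"intention": "Con", "certain_to_vote": True},
    {"intention": "Lab", "certain_to_vote": True},
    {"intention": "Lab", "certain_to_vote": False},  # excluded from final figures
    {"intention": "Con", "certain_to_vote": True},
    {"intention": "LD",  "certain_to_vote": False},  # excluded from final figures
    {"intention": "Lab", "certain_to_vote": True},
]

likely = [r["intention"] for r in respondents if r["certain_to_vote"]]
shares = {party: count / len(likely) for party, count in Counter(likely).items()}
print(shares)  # shares on the filtered base of 4 respondents
```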
19. 'SPIRAL OF SILENCE'
One of the more simplistic letter-writers explained that the unreliability of the
public opinion surveys (and of the media comment on them) is down to ignoring the
'don't knows'. He must have missed the front page of The Times on the day before
the election when Oakley's article, the splash that day, was headlined 'Parties make
final push to capture floating voters':
'However, 24 per cent of Liberal Democrat supporters said they might yet change their
vote. So did a fifth of Conservative supporters and 16% of Labour backers',
Oakley reported.
We certainly did not ignore the 'don't knows'. Nevertheless, those who told us that
they did not know may have contributed to the problem. Investigation has made it
plain that there was differential refusal: Conservative supporters were less likely to
reveal their loyalties than Labour supporters were. This certainly operated through
the reluctance of some of those interviewed to reveal their voting intentions, both by
outright refusal and by answering 'don't know'. A similar, and probably numerically more significant, effect may have operated through a refusal by some to be interviewed at all, although there is no solid evidence to support this. Consequently the samples interviewed were tilted to Labour and Conservative support was underestimated.
This probably arose through the operation of what has been described as 'the Spiral
of Silence' (Noelle-Neumann, 1984): Conservatives, feeling that their party was
unfashionable and that they were outnumbered, were more reluctant to disclose their
loyalties. This effect seems to have persisted in the period since the election.
Techniques need to be developed to try to maximize the number of people who are
willing to state their voting intention. Use of information on past voting and other
attitudinal questions to correct for any differential refusal that remains is one
technique to be explored further. The use of secret ballots can also be considered,
although MORI's experiments in this field have not yet convinced us of the value of
introducing secret ballots as a routine methodology. We must also explore ways in
which refusal to participate in surveys could be reduced; we need to enhance public
awareness of the importance of survey research.
20. SAMPLE DESIGN AND QUOTAS
In theory, differential refusal to participate by Conservatives ought not to have
distorted the polls, because if the quota and weighting systems had been operating
perfectly this would have compensated, with other similar voters who were prepared
to disclose their opinions being interviewed instead. This brings us to the third cause
that appears to have contributed to the error: inadequacies in the sampling. This
arose partly because the quotas set for interviewers and the weights applied after
interviewing did not reflect sufficiently accurately the social profile of the electorate at
the time of the election; this should be easy to correct, with care, in future elections,
using accurate, up-to-date sources for setting quotas and weighting. It also arose
partly because the variables used in setting quotas and weights were not correlated
sufficiently closely with voting behaviour to ensure that the polls reflected the
distribution of political support among the electorate. We need to try to identify
other variables more closely related to voting behaviour, to ensure that samples are
as representative as possible. This is more challenging.
One solution being advocated is to weight surveys by declared past vote, as is
routine in some other countries. However, we would be very wary of adopting a
technique that in all previous elections, probably even including 1992, would
have made predictions worse rather than better. British voters have a strong tradition
of failing to recall that in deciding to vote against their usual party they voted for the
Liberal Democrats or their predecessors. Consequently polls usually underestimate
the number of ex-Liberals in their samples and the distortion that this would cause
on weighting by past vote would far outweigh any corrective effects between the two
main parties. Although for a few months following the 1992 election the recalled
Liberal vote, unusually, held up fairly well at a realistic level, it soon thereafter lapsed
into its usual underestimate of the actual third-party share.
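The distortion described above is easy to see arithmetically: if many actual past Liberal voters misrecall their vote as Conservative or Labour, then weighting the sample up to the true past Liberal share over-weights the atypical minority who do recall it. A small illustration in which every figure is invented:

```python
# Illustrative arithmetic for the recall problem above: the ex-Liberal
# vote is under-recalled, so past-vote weighting over-weights the
# atypical minority who do recall it. All figures are invented.
true_past_share = {"Con": 0.43, "Lab": 0.31, "Lib": 0.26}

# Recalled past vote in a hypothetical sample of 1000: recalled Liberal
# voting drifts towards the two main parties.
recalled = {"Con": 480, "Lab": 350, "Lib": 170}

n = sum(recalled.values())
weights = {party: true_past_share[party] / (recalled[party] / n)
           for party in recalled}

# Each recalled ex-Liberal counts about 1.5 times; recalled Con and Lab
# voters count about 0.9 times each.
print({party: round(w, 2) for party, w in weights.items()})
```

The damage then depends on how unrepresentative the recalling minority is of all actual past Liberal voters, which is the paper's point: the correction can easily be worse than the error it addresses.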
A less contentious approach is to expand the number and variety of demographic
variables used in quotas and weighting, particularly by the use of economic variables
such as car ownership. So long as reliable base-line data are available, such measures
can be implemented without much difficulty and may improve the political representativeness of samples.
21. WHAT WE LEARNED
The effectiveness of new polling techniques in the 1994 European elections offers
encouraging evidence that the problems encountered in 1992 can be overcome. The
MRS report expresses hope that the media will encourage this work, as, indeed,
some media clients already do. They must accept that, inconvenient though it may
be, research cannot always be responsibly reported in a single sentence or a succinct
headline; special care is needed in secondary reporting of research, which is at present
too frequently slapdash and can frustrate our efforts to ensure that the real meaning
of our work is understood. Our task is difficult enough without our being judged on
the basis of inaccurate or misleading and simplistic reports, or our being confounded
with the 'voodoo' polls and straw polls which are still far too often reported as if they
were serious surveys, by BBC and other journalists who ought to know better. With
the media's help and co-operation, we believe that the lessons of 1992 can be learned
and that polling in Britain can be stronger as a result.
22. BANNING OPINION POLLS
We occasionally face the question from radio and television interviewers 'Isn't it
time polls were banned, as in France?'. My answer is usually that as 'economic' man I
would favour it, because the parties would continue to conduct their private polls at
an even higher level, and the City would fall all over themselves to commission, use,
leak and even to manufacture polls for their own benefit. As in France now, the
pollsters, the psephologists, the pundits, the political writers, the politicians and the
friends would all know the results of the latest private polls; the only ones who would
be in the dark are those for whose benefit elections are supposed to exist, the voters.
To ban polls would be an illiberal act, removing from the electorate the one objective and systematic form of information about the views of the electorate that they are provided with during an election. Yet calls still come, from letter-writers (a few), pundits
(one or two), one or two politicians and some members of the public.
We need only look to the USA where, on the Saturday night before the Tuesday
primary elections, the late Senator William Fulbright announced that his private
polls showed him ahead by 2:1. A few days later Dale Bumpers, his opponent, won
the election by a margin of 55%:45%. Later Fulbright's aides admitted that no
private polls had been taken. And that was in a country where polls are not banned and, thanks to the protection of the First Amendment, never will be.
Or look across the Channel to France, where polls are banned in the final days before polling (see below). In the Giscard election of 1981, as soon as the seven-day ban on the publication of the polls began, the Chirac forces mounted an orchestrated effort to suggest that their candidate's campaign was taking off and that he would be the one to go through with Giscard into the final round, thus ensuring two conservatives in the run-off. This frightened the supporters of the Communist candidate, Marchais, into switching over to Mitterrand, the Socialist candidate, to ensure that the left had a challenger in the final contest.
The politicians, pundits and psephologists in Paris all knew what the private polls were finding; the only people who were kept from the truth of the French public's
views were the French public themselves. More recently we have seen the ludicrous
spectacle of polls being commissioned daily by banks and stockbrokers during the
period of the ban in the week before the 1992 referendum on the Maastricht treaty,
and by German and British media, published in, for example, the Daily Mail on the
Friday before polling and in The Sunday Times on polling day itself, although not in
the French edition, of course. The Head of Policy of the BBC said to me that she saw no reason why the BBC should not report the findings of the polls, despite its broadcasts being heard by some French listeners.
In France for seven days preceding an election, in Spain for five, in Portugal
during elections, in Belgium for 30 days and in Luxembourg there are national bans
on the publication of public opinion polls. In each country polls continue to be taken
and influence the pronouncements of politicians, the pundits and leader writers and
political commentators, the financial market makers and money manipulators; the
only people who are kept in the dark are the public, denied by politicians and legislators access to the only objective measure of what the public thinks. (For a
detailed analysis of what the regulations are for the publication of opinion poll
results across the globe, see Rohme (1992).)
In 1985 a Council of Europe Parliamentary Assembly Committee held hearings in
several European cities to investigate proposals for the 'harmonization' of laws to
regulate the publication of polls. The Chairman of the French Commission on Opinion Polls, Pierre Huet, in evidence to the Committee's hearings in Strasbourg, said that, if the French law were not already enacted, it would not and should not be, on the ground that 'restrictions of freedom of information were objectionable on principle' (Huet, 1984).
After extensive investigation, a review of the practices of its member governments,
considerable testimony taken from academics, journalists and editors, legislators and
government officials, the Parliamentary Assembly concluded
'. . . therefore, having heard all the evidence, the Committee are not of the opinion that
strong controls are shown to be desirable or necessary and there is no need or value in
attempting international harmonisation'
(Council of Europe, 1985).
The Belgian ban, enacted early in the 1980s, was tested in 1985 by the news
magazine Knack whose editor Frans Verleyen commissioned a poll and, four days
before the general election, published it in deliberate violation of the law; the law was
also tested by De Morgen and De Standaard. Knack quoted article 18 of the 1831
Belgian constitution in its defence, which states: 'The printed press is free; censorship
can never be introduced . . .'. In a commissioned article by a distinguished Belgian
jurist Knack went further, arguing that the law banning polls was not only in
violation of the constitution but also of article 10 of the European Convention on
Human Rights (Leonard, 1985).
23. CONCLUSION
Let me conclude with two brief quotes, one from Tony Benn,
'If you deny people knowledge of what the community is thinking, then those who think
those thoughts believe they are out of line',
and, finally, John Major quoting in turn Randolph Churchill, 'Trust the People'.
REFERENCES
Clifford, P. and Heath, A. (1994) The election campaign. In Labour's Last Chance? (eds A. Heath, R.
Jowell, J. Curtice and B. Taylor). Aldershot: Dartmouth Press.
Council of Europe (1985) Official Report, 37th Ordinary Session, Parliamentary Assembly, Council of
Europe, Sept. 30th, 1985.
Crewe, I. (1992a) A nation of liars? The opinion polls and the 1992 election. Parl. Aff., 45, no. 4, 475-495.
(1992b) One poll victory does not make Kinnock's summer. The Times, Mar. 18th.
Hoffman, P. and Ross, J. (1970) (eds) Burke's Politics: Selected Writings and Speeches of Edmund Burke
on Reform, Revolution and War, p. 106. New York: Knopf.
Huet, P. (1984) Testimony to the Council of Europe Parliamentary Assembly hearings, Strasbourg, Oct.
8th.
Leonard, D. (1985) Belgian leaders should read 'Areopagitica'. Wall Street Journal, Oct. 18th.
Lippmann, W. (1922) Public Opinion. New York: Macmillan.
Market Research Society Working Party (1994) The Opinion Polls and the 1992 General Election.
London: Market Research Society.
Noelle-Neumann, E. (1984) The Spiral of Silence. Chicago: University of Chicago Press.
Oakley, R. (1992) Leadership gap still troubles Labour despite lead in polls. The Times, Mar. 18th.
Payne, S. (1980) The Art of Asking Questions. Princeton: Princeton University Press.
Price, V. (1992) Public Opinion. Newbury Park: Sage.
Rohme, N. (1992) The state of the art of public opinion polling worldwide: an international study based
on information collected from national market and opinion research institutes in April 1992.
Marketing Res. Today, 264-273.
Worcester, R. (1981) Whose government is it, anyway?: polls, surveys and public opinion. Royal Society
for Public Administration Conf. Public Influence and Public Policy, Brighton, Apr.
(1991) British Public Opinion: Guide to the History and Methodology of Opinion Polling. Oxford:
Basil Blackwell.