
The Use and Abuse of Opinion Polls
Peter Kellner, President of YouGov
24 March 2015
There is an episode of Yes, Prime Minister in which Sir Humphrey explains how one might go about
fixing an opinion poll on the reintroduction of National Service. He suggests a poll seeking a
positive response might run as follows:
“Are you worried about the rise in crime among teenagers?”
“Yes”
“Do you think there is a lack of discipline and vigorous training in our comprehensive schools?”
“Yes”
“Do you think young people welcome some structure and leadership in their lives?”
“Yes”
“Might you be in favour of reintroducing National Service?”
“Yes”
If you were looking for the opposite result you might ask the following sequence of questions:
“Are you worried about the dangers of war?”
“Yes”
“Are you unhappy about the growth of armaments?”
“Yes”
“Do you think there is a danger of giving young people guns and teaching them how to kill?”
“Yes”
“Do you think it’s wrong to force people to take arms against their will?”
“Yes”
“Would you oppose the reintroduction of National Service?”
“Yes”
As with all good satire this is an exaggeration of the truth – but there is something to it. Pollsters try
to ensure that the questions they ask are sensible, but from time to time clients, usually those
lobbying for a cause, want pollsters to ask questions which are plainly designed to produce the
results they want. Reputable polling organisations will point out that if a question is obviously
loaded, people will see that it is loaded and it will do the client no good. Normally clients see the
sense in this, but occasionally you do see one of these loaded questions being put out, which is madness.
The way you ask questions really does matter. Sometimes equally legitimate questions can be asked
in different ways and produce different results. Three or four years ago, as an experiment, in a series
of polls over a two- or three-week period, several different (but all legitimate) questions were put to
the public about the BBC licence fee. At one extreme, two out of three people agreed that the
licence fee was “good value for money”, while at the other extreme two out of three felt it was
“bad value for money”. If you say, “The BBC licence fee costs £145 per year. Is that good or bad
value for money?” on the whole people say “bad”. If, however, you first ask a sequence of questions
about the monthly cost of Sky TV, and then say, “The BBC licence fee costs 40p per day” (the same
£145 a year, expressed differently), people are likely to say it is good value for money.
Two recent polls about grammar schools showed a similar effect. When you ask, “Do you think it
would be a good idea to have more grammar schools?” on the whole people say yes, whereas if you
ask, “Would you like to see the reintroduction of the 11-plus, with a quarter of the children who take
it going to grammar schools and three-quarters going to secondary moderns?” you get a different set of
replies.
A brief and selective history of opinion polling
In America, modern polling began in the late 1920s or early 1930s, but for some years before that
a magazine called The Literary Digest had accurately predicted the result of each presidential
election from 1916 to 1928. Its strategy was to send out millions of pre-paid cards to addresses across
America asking people their voting intentions. A great many cards came back, the magazine
published the results, and by this method it was able to predict the outcome. In its October 31st
issue of 1936, just days before the election, its prediction, based on more than two million
responses, showed Alf Landon getting 57 per cent of the popular vote. What actually happened was
that Landon got not 57 per cent but 37 per cent, and Franklin Roosevelt won by a landslide.
Gallup, in its first presidential election, not only predicted that Roosevelt would win but did
something very smart: it predicted in mid-October that The Literary Digest would predict a
Landon win. Enough people had responded to The Literary Digest poll that Gallup could identify, in a
sub-sample of its own respondents, those who had taken part in the Digest poll, and use their answers
to predict that the Digest would get it wrong.
Up to 1932, American politics was not particularly polarised between rich and poor. Roosevelt,
however, was a polarising president in his first term, and in 1936 wealthy people voted for Landon
while less-well-off people voted for Roosevelt. The Literary Digest sample was made up of people on
various subscription lists: car owners, telephone owners and so on. It was tilted, as it had been for 20
years, towards the better off, but previously that hadn’t mattered. In 1936 it did. Gallup,
polling about 5,000 people, got it right, while The Literary Digest, with its far larger number of
responses, got it hopelessly wrong. A small, good sample is always better than a large, bad sample.
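That lesson is easy to demonstrate. Below is a minimal simulation sketch, in Python, of the Literary Digest story: the electorate, the class split and the candidate shares are all invented for illustration, but the moral – that a sound sampling frame beats sheer volume – is the one described above.

```python
import random

random.seed(1936)

# Invented electorate: 70% less well-off (mostly for candidate A),
# 30% better-off (mostly for candidate B). True support for A is 60%.
population = (
    [("rich", "B")] * 20 + [("rich", "A")] * 10 +
    [("poor", "A")] * 50 + [("poor", "B")] * 20
)
population = [p for p in population for _ in range(10_000)]  # 1,000,000 voters

def share_for_a(sample):
    return sum(1 for _, vote in sample if vote == "A") / len(sample)

# A huge sample drawn only from the better-off -- the equivalent of the
# Digest's subscription lists, car owners and telephone owners:
rich_only = [p for p in population if p[0] == "rich"]
big_biased_sample = random.sample(rich_only, 200_000)

# A small sample drawn at random from the whole electorate:
small_random_sample = random.sample(population, 2_000)

print(f"True share for A:           {share_for_a(population):.1%}")           # 60.0%
print(f"Large biased sample (200k): {share_for_a(big_biased_sample):.1%}")    # ~33%
print(f"Small random sample (2k):   {share_for_a(small_random_sample):.1%}")  # ~60%
```

The large sample reproduces the bias of its frame almost perfectly, while the small random sample lands within about a point of the truth.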
The British election of 1945 was the first to make serious use of opinion polls. Gallup was polling
for The News Chronicle, and on the day before the election (an election that everybody knew
Churchill, the great war victor, was going to win) The News Chronicle carried a small, single-column
story with the headline: “Labour six per cent ahead”. The News Chronicle itself didn’t really
believe it, but it was right: Labour did win, and by roughly the margin Gallup predicted.
Polling was beginning to get things right, and scientific polls were replacing the “voodoo” polls of
the past; however, there was still scope for error. A famous mistake came in 1948 in the US when,
based on polls showing Thomas Dewey ahead of Harry Truman, the early edition of the Chicago
Tribune carried the headline “Dewey Defeats Truman” on its front page. Of course it was in fact
Truman who won, and there is a famous photo of a victorious Truman posing with a copy of
the paper. The error was a classic example of basing a prediction on early results, which in this
case were coming from very small, rural precincts that leaned Republican, suggesting the
Republicans were on course to win at the time the paper went to press. The analysis leading up to
the election wasn’t very sophisticated, and the polls that said Dewey would win were out of date: in
those days polls were conducted face to face or by cards returned by post, so they missed a
swing back to Truman in the fortnight leading up to the election.
In Britain we have had similar problems; indeed, in this generation there have been three elections
that the pollsters have got wrong. In 1970 all bar one said Wilson would be re-elected. The only
exception was a poll published by the Evening Standard on the day before the election: the pollsters
re-interviewed about 300 people they had polled at the weekend and asked if they had changed their
voting intentions. Eleven or so had switched from Labour to Conservative, and on this basis they
said they thought Heath would win.
In the following election, in February 1974, the Daily Mail predicted “A Handsome Win for Heath”. The
figures in the NOP poll the Daily Mail used were actually pretty good: they showed the
Conservatives four points ahead, and the Conservatives did get more votes than Labour in
that election (by about one per cent). But a lead of that size is small in statistical terms, and the
poll slightly overstated the Conservatives’ performance. It was just one of those rare occasions where
the party with the most votes did not end up with the most seats.
Famously, in 1992, the polls said that Major would lose his majority. One, Gallup, had the
Conservatives fractionally ahead, but the others all had Labour ahead. In the end the Conservatives
won by a pretty comfortable margin of votes. The biggest single thing that went wrong in the 1992
polls was that the election was held at almost the worst possible time for a pollster. The census had
been conducted a year earlier, but no census figures were out yet, so all the pollsters were operating
on the 1981 census figures plus their guesses as to how Britain had changed in the intervening 11
years. What had actually happened in the Thatcher decade was that Britain had moved much further
from working class to middle class than anybody had understood. The polls simply overstated the
working-class electorate, understated the middle-class electorate and got the result wrong. At the
time, the raw numbers of most of these polls showed the Conservatives ahead, but the weighted
numbers (weighted to give what the pollsters thought was the right social balance) had turned those
into Labour leads. If they had used the right social figures they would have got the result right.
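The mechanics of that error are easy to illustrate. The sketch below weights the same raw responses to two different assumed class balances; all the shares are invented for illustration, not the actual 1992 figures, but they show how an out-of-date class assumption can turn a Conservative lead into a Labour one.

```python
# Invented raw poll responses by social class: the Conservatives lead among
# middle-class respondents, Labour leads among working-class respondents.
raw = {
    "middle":  {"con": 0.58, "lab": 0.42},
    "working": {"con": 0.40, "lab": 0.60},
}

def weighted_shares(raw, class_targets):
    """Weight each class group to an assumed share of the electorate."""
    con = sum(class_targets[g] * raw[g]["con"] for g in raw)
    lab = sum(class_targets[g] * raw[g]["lab"] for g in raw)
    return round(con, 3), round(lab, 3)

# Out-of-date assumption (too working class, like weighting to the 1981 census):
print(weighted_shares(raw, {"middle": 0.40, "working": 0.60}))  # (0.472, 0.528): Labour lead
# Corrected assumption (Britain had become more middle class):
print(weighted_shares(raw, {"middle": 0.60, "working": 0.40}))  # (0.508, 0.492): Conservative lead
```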
Polling is not perfect, but we have learned from the mistakes of the past and polling has improved
over the years. In 2008 YouGov predicted that Boris Johnson would beat Ken Livingstone,
and last autumn YouGov picked the correct outcome of the Scottish referendum, coming pretty
close to the exact result. In predicting the current election we can be confident that we will not
repeat the old mistakes, but there may be new mistakes to learn from.
*****************************************************
In our upcoming election, what happens inside each individual constituency makes prediction very
complicated. In France, by contrast, you get to 8pm on the night of the election and a picture of the
new president goes up on the screen, because exit polling, when the contest is just between two
people, is very accurate. It is illegal in France to report exit poll figures before all the polls have
closed, but once they are released the result can be called very quickly. In the first round in 2012,
the vote share for Marine Le Pen moved quite sharply between 7 and 8pm; this might have been
caused by exit poll numbers leaked on Twitter, or by other rumours, prompting more people to go
out and vote against her.
In France there has been a tendency for people not to want to tell pollsters that they will vote for
the Front National. It seems that this time around the pollsters made the opposite mistake, because
fewer people voted FN than they predicted.
In traditional polls, whether telephone or face-to-face, where there is an interviewer, there is an
established tendency for some people to give the answer they think the interviewer wants to hear
rather than their real opinion. This might make people reluctant to admit to wanting to vote for
the Front National, or, in this country, to admit that they would rather have money in their pocket
than strengthen public services. Some French pollsters used to roughly double the responses they
got for the Front National, assuming, on the basis of experience, that the raw figures would
understate its support by about half. Perhaps by 2012 voting FN had become more acceptable, and
therefore the pollsters made too big an adjustment. Online polling may address this problem in
future, as people are more willing to give their real opinion over the internet.
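As an illustration of the kind of adjustment those French pollsters made, here is a small sketch that scales one party’s raw share up by a factor and renormalises the rest. The raw shares and the doubling factor are illustrative assumptions, not any pollster’s actual model.

```python
def adjust_shy_vote(raw_shares, party, factor):
    """Scale up one party's raw share by `factor`, then renormalise
    the other parties so the shares still sum to 1."""
    adjusted = dict(raw_shares)
    adjusted[party] = raw_shares[party] * factor
    remaining = 1 - adjusted[party]
    other_total = sum(v for k, v in raw_shares.items() if k != party)
    for k in raw_shares:
        if k != party:
            adjusted[k] = raw_shares[k] * remaining / other_total
    return adjusted

# Invented raw shares; doubling the FN figure, as some pollsters once did:
raw = {"FN": 0.09, "UMP": 0.30, "PS": 0.31, "others": 0.30}
print(adjust_shy_vote(raw, "FN", 2.0))
# {'FN': 0.18, 'UMP': 0.270..., 'PS': 0.279..., 'others': 0.270...}
```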
Do you have a prediction on how many British general elections there will be this year?
There is a fairly high chance that, even with a messy result, the Parliament will go its full term. The
Fixed-term Parliaments Act makes it quite hard for a government to precipitate an election. There
are two routes to an early election. The first is a two-thirds vote in the House of Commons, which
in practice means the two main parties have to agree (difficult, as the polls at any given time will
favour one party or the other). The second is a vote of no confidence followed by no fresh vote of
confidence in a government within the following 14 days.
For there to be a second election, it comes down to this: there has to be a broad acceptance in
Westminster, in the markets, in the press and among the public that the Parliament is broken and
cannot carry on.
Geographically, our election system favours Labour for various reasons: if Labour and
Conservative national vote shares were equal, Labour would have more seats. This time that
advantage will be offset by two factors: Scotland, where Labour seats are going to the SNP, and the
boost that better-known incumbent candidates traditionally enjoy, which favours the Conservatives.
The Conservatives are also ahead on two big questions: “Who would you prefer as Prime Minister?”,
where Cameron comes out on top, and “Who do you trust more on the economy?”, where the
Conservatives come out ahead. An opposition has come from behind to win when trailing on one of
those, but never when trailing on both. Why, then, are the Conservatives not pulling ahead on
voting intention? They still have a terrible brand image: the party of the rich, out of touch, not on
the side of ordinary people. Also, on anything other than the economy – education, health, crime –
the government is thought to have failed.
In press coverage of the Conservatives’ handling of the economy there has been a great deal
more focus on the deficit than on the national debt, which has gone up a great deal during this
Parliament. If there had been more discussion of the debt, how might it have influenced public
opinion on the issue? Also, how has Cameron’s announcement that he would not seek a third term
affected public opinion of his leadership?
The Cameron interview in which he said he wouldn’t seek a third term is a beltway issue: almost no
one outside Westminster cares about it. In a recent poll asking whether it changed people’s
opinion of him, the majority said it didn’t; of those who said it did, the majority said it made
them feel more positive about him.
There is an argument that Labour didn’t do enough to capitalise on the Conservatives
missing their deficit target, but what opinion polls measure are perceptions, not reality. When
someone doesn’t like the result of an opinion poll, they will often say, “You asked the question the
wrong way. You should have told the public x, y, z and then asked the question.” Of course, if you
tell the public “x, y, z” you will get a different answer, but the job of pollsters is not to educate
the public; it is to find out what the public think on the basis of what they already know.
What role might UKIP play if they perform better than polls suggest?
The real issue is not the seats UKIP might win – three or four, possibly – it is what happens to the
UKIP vote in the Conservative/Labour marginals. It is possible that there could be a Ralph Nader-in-Florida
situation, with UKIP sucking votes away from the Conservatives and letting Labour in.
There was some controversy about the framing of the question in the Scottish referendum, with
some arguing that the way it was asked benefited those backing independence. Is it possible to see
a statistically significant change in the result based on this? Looking forward to a possible in/out
referendum on the EU, would there be a big difference if the question was asked as “do you want to
stay in?” as opposed to “do you want to leave?”
The framing of a question in a poll matters a lot, because you are often springing something on
people out of the blue. By the time you get to a referendum, both sides have been making their
arguments, framing the issue however they want, so people are unlikely to be swayed by the framing
of the question when they get to the voting booth. From a technical point of view, the question
should have been, “Which of these do you favour: Scotland as an independent country, or Scotland
remaining part of the UK?” They should have offered the two alternatives rather than a yes/no. The
same is true for a potential EU referendum. But there is not much evidence that the framing has a
significant effect.
In the recent election in Israel, Netanyahu was behind in the polls right up until the last moment,
and his late surge was attributed to a vigorous position against Palestinian statehood taken late in
the campaign. How credible is it that such a big swing can be attributed to one act?
Israeli law doesn’t allow publication of polls in the last four days of a campaign, so if there is late
movement the polls can’t catch it. This time the final pre-election polls showed Likud three or four
seats behind, the exit polls showed the parties level, and the result had Likud ahead – which suggests
the pre-election polls missed a late swing, but also that they were simply wrong, since even the exit
polls understated Likud. Past Israeli polls have been wrong sufficiently often that this is not a
surprise. Whether it can be put down to a response bias meaning certain types of Israelis are more
likely to speak to pollsters, or to some other reason, they seem often to be skewed.
What is the record like on predicting turnout? What factors do pollsters take into account on this?
Turnout is difficult to predict: people are quite good at predicting how they will vote if they vote,
but they are not so good at predicting whether they will vote. It can depend on the weather, a family
crisis, what is on television that evening, or any number of other external factors. Turnout in the last
general election was 65 per cent, a recovery from 61 per cent in 2005 but well below the 70 or 80
per cent that was normal for most of the post-war era. Turnouts have generally been higher when
the election has been close and when people have felt strongly about the result. This election will be
close, and more people feel strongly. Polling in Scotland, which had a huge turnout in the
referendum, suggests that turnout there will be higher than in England. The move from household to
individual registration means the number of registered electors is likely to be about two per cent
lower than under the old system; however, those two per cent are almost certainly people who
wouldn’t have voted anyway. Taking all of those factors into account, it would be reasonable to
predict a turnout approaching 70 per cent in this election.
Lord Ashcroft has focused almost entirely on polling in individual constituencies and has come up
with dramatic predictions. Is it a problem for big national pollsters that the national trends aren’t
much of a guide in this very odd election where every individual constituency is different?
The problem with constituency polls is that they are more prone to error. Theoretically you could
run daily polls in each of 100 constituencies; in practice no one would, because it would be
ludicrously expensive. It is worth having both the national polls and the constituency polls: daily
national polls provide a sort of temperature check, and the data from them can be drawn together
into a week’s or a month’s worth of polls and analysed more closely. As we get close to election
day these polls can be used to make seat projections by breaking the national figures down into
types of seats.
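As a rough illustration of that last step, here is a minimal sketch of a uniform-swing seat projection over a handful of hypothetical constituencies. Real projections break seats into types and treat each differently, as described above; every name and figure here is invented.

```python
# Previous result in each hypothetical seat: (Con share, Lab share), in per cent.
previous = {
    "Seat A": (45.0, 35.0),
    "Seat B": (38.0, 42.0),
    "Seat C": (41.0, 40.0),
    "Seat D": (30.0, 48.0),
}

def project(previous, swing_to_lab):
    """Apply a uniform Con-to-Lab swing: Labour gains the swing and the
    Conservatives lose it, in every seat alike."""
    projected = {}
    for seat, (con, lab) in previous.items():
        new_con, new_lab = con - swing_to_lab, lab + swing_to_lab
        projected[seat] = "Lab" if new_lab > new_con else "Con"
    return projected

# Suppose the national polls imply a 2-point swing from Con to Lab:
print(project(previous, swing_to_lab=2.0))
# {'Seat A': 'Con', 'Seat B': 'Lab', 'Seat C': 'Lab', 'Seat D': 'Lab'}
```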
Looking further ahead, if the next government lasts a full term, will we still be in Europe at the time
of the next election in 2019 or 2020?
The party politics of Britain over the next 18 months is very uncertain, but Britain in 2020 is fairly
easy to predict. We will still be in the EU, with or without a referendum; the deficit targets will have
been missed; there will have been some changes to the immigration laws, but they won’t make any
difference; expensive London properties will be taxed more; the health service will be underfunded
but will not have collapsed. There is vast political uncertainty, but medium-term policy is easier to
predict.