Research Assignment Feb 19th - Grants Pass School District 7

Statistics Homework Assignment: Take out a piece of notebook paper to record your answers to these
activity questions. Turn it in at the end of the period, or stay in during CAVE to finish.
You may work with another person if you like, but each of you must turn in your own assignment.
Why can’t pollsters get it right?
Imagine a large container filled with thousands of beads, some red and the rest blue. This is our population. If
we want to find out what percent of the population is red, we could look at every bead in the container, count
the number of reds, and divide by the total number of beads. That isn’t very practical if the population is large.
Our best alternative is to take a representative sample of beads from the population, count the number of reds,
and divide by the total number of beads in the sample.
Suppose we mix the beads in the container well, and then (without looking) scoop out 400 beads. If
exactly 60% of the beads in the sample are red, what can we say about the percent of red beads in the
population? If we put all the beads back in the container, mix them again, and draw another sample of 400
beads, we probably won’t get exactly 60% reds. That’s the idea of sampling variability.
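To see sampling variability for yourself, here is a minimal Python sketch of the bead example. The container contents (10,000 beads, exactly 60% of them red) are made-up numbers chosen only for illustration; the point is that each scoop of 400 beads gives a slightly different sample percentage.

    import random

    # Hypothetical container: 10,000 beads, exactly 60% red (made-up numbers for illustration).
    population = ["red"] * 6000 + ["blue"] * 4000

    random.seed(1)  # fixed seed so the demonstration repeats the same way each run

    # Mix well and scoop out 400 beads, five separate times.
    for trial in range(1, 6):
        sample = random.sample(population, 400)        # one well-mixed scoop of 400 beads
        percent_red = 100 * sample.count("red") / 400  # percent of reds in this scoop
        print(f"Sample {trial}: {percent_red:.1f}% red")

    # Each scoop prints a different percentage near 60%: that is sampling variability.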
What can we conclude based on our sample with 60% red beads? We can say that we are “95%
confident” that between 55% and 65% of the beads in the population are red. Why? Because well-chosen
samples of 400 individuals usually get us within 5% of the truth about the population. This 5% figure is known
as the margin of error.
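The 5% figure comes from a common rule of thumb (a rough approximation, not an exact formula): for a simple random sample of size n, the margin of error is about 1/√n. The short Python sketch below, continuing the assumed bead example, applies that rule for n = 400 and builds the 55% to 65% interval quoted above.

    import math

    n = 400                    # sample size
    sample_percent_red = 60.0  # percent red observed in our sample

    # Rule-of-thumb margin of error: roughly 1 / sqrt(n), written as a percentage.
    margin_of_error = 100 / math.sqrt(n)   # 100 / 20 = 5 percentage points

    low = sample_percent_red - margin_of_error
    high = sample_percent_red + margin_of_error
    print(f"Margin of error: about {margin_of_error:.0f} percentage points")
    print(f"We are about 95% confident the population is {low:.0f}% to {high:.0f}% red")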
The ideas of sampling extend to sample surveys, like those taken in the days leading up to an election.
Imagine that two candidates, A and B, are in a tight race for an important office. Pollsters would like to predict
which candidate will win. All they have to do is reach into the population, take a representative sample, and
then use the sample result to estimate the population truth, right? In theory, yes. But in practice, many
difficulties make sample surveys of people more complicated.
Who is the population for an election survey? Is it all people who are eligible to vote, or only those who
are likely to vote? How should we choose our sample? How will we collect data from the individuals in the
sample? What about people we can’t reach, or those who refuse to answer? Will everyone tell the truth about
their voting intentions?
Should Election Polls Be Banned?
The 2000 presidential election was one of the closest and most hotly contested in U.S. history. CNN ran the
banner “too close to call” for many days following the election. After numerous legal challenges and meticulous
recounting of ballots in some areas, George W. Bush was declared the winner. Some Americans still feel that
polls had a profound effect on the final outcome.
Preelection polls tell us that Senator So-and-So is the choice of 58% of Ohio voters. The media love
these polls. Statisticians don’t love them, because elections often don’t go as forecast even when the polls use
all the right statistical methods.
Exit polls, which interview voters as they leave the voting place, don’t share this problem. The people in
the sample have just voted. A good exit poll, based on a national sample of election precincts, can often call a
presidential election correctly long before the polls close. That fact sharpens the debate over the political effects
of election forecasts.
Some countries have laws restricting election forecasts. In France, no poll results can be published in the
week before a presidential election. Canada forbids poll results in the 72 hours before federal elections. In all,
some 30 countries restrict publication of election surveys.
The argument for preelection polls is simple: democracies should not forbid publication of information. Voters
can decide for themselves how to use the information. After all, supporters of a candidate who is far behind
know that even without polls telling them so. Restricting publication of polls just invites abuses. In France,
candidates continue to take private polls (less reliable than the public polls) in the week before the election.
They then leak the results to reporters in the hope of influencing press reports.
QUESTIONS (Answer using the article above and a little research.)
1. Give at least two reasons why a pre-election poll might give inaccurate results.
2. Some people have argued that preelection polls influence voter behavior. Voters may decide to stay home if
the polls predict a landslide—why bother to vote if the result is a foregone conclusion? Comment on this
argument.
3. Research: In the days leading up to the 2000 presidential election, numerous preelection polls were taken.
Which candidate was predicted to win the election?
4. Based on exit polls taken on election day, several TV networks declared a winner. Shortly thereafter,
television news anchors began to withdraw their declarations. Some even declared the other candidate the
winner. Which candidate was initially declared president?
5. Give at least two reasons why the exit polls might have erred in predicting the winner.
6. In Florida, the polls were still open in part of the state when an initial winner was declared. How might this
have affected the final result in Florida?
7. What led to the recount of votes in some Florida counties?
8. How did the final outcome—winner and percent of votes received—compare with the results of the
preelection polls?
9. George W. Bush won the election, but he did not win the popular vote. Has this happened before in a U.S.
presidential election? If so, when?
Research #2 (A famous example of bad sampling)
Literary Digest poll
The Literary Digest was a popular magazine of the 1920s and 1930s. The Digest ran a poll before each
presidential election; its 1936 poll predicted a 3-to-2 victory for Alf Landon over Franklin Delano Roosevelt.
As we all know, FDR won the election in a landslide. What happened?
Discuss two things. Record them on your homework assignment:
(1) The sampling frame (the list of individuals from which the sample was obtained). What effect did that have on
the poll?
(2) Nonresponse. What effect did that have on the poll?
Research #3 (Harry Truman and Thomas Dewey Presidential Election 1948)
1. Research: In the days leading up to the 1948 presidential election, numerous preelection polls were
taken. Which candidate was predicted to win the election?
2. Research: The Chicago Daily Tribune newspaper erroneously reported the winner. Who did they say
was going to win and why did they get it wrong? Explain.
Gallup Polls
Ola Babcock Miller was active in the late nineteenth century women’s suffrage movement, was elected to three
terms as Iowa’s secretary of state, and in her first term founded the Iowa State Highway Patrol. Her election in
1932 surprised the political pundits of the day, as she became the first woman, and the first Democrat since the
Civil War, to hold statewide political office in Iowa. However, her election did not surprise her son-in-law,
George Gallup, who predicted her victory using the first scientifically sampled election poll ever. Three years
later Gallup founded the American Institute of Public Opinion in Princeton, New Jersey, and began publishing
the results of the Gallup Poll in his syndicated column “America Speaks.”
At the Gallup Organization's website, www.gallup.com, you will see how the Gallup Poll provides data on the
attitudes and lifestyles of people around the world. Notice that the poll is just one part of a global
management consulting and market research company.
Research #4 (Go to the website listed above and find a poll that is interesting to you.)
Research: Describe the outcome of the poll and explain the “margin of error” of the poll. How many
people were surveyed? Do you have any concerns about how the survey was conducted?