Improving Reports of Past Donation Behavior

Adam Seth Levine (Cornell University)
Keywords: political participation, donation, candidate, party
For decades the ANES has contained the following three questions about donation behavior
during the campaign:
During an election year people are often asked to make a contribution to support
campaigns. Did you give money to an individual candidate running for public office?
[If yes:] Which party did that candidate belong to?
Did you give money to a political party during this election year? [If yes:] to which
party did you give money?
Did you give any money to any other group that supported or opposed candidates?
Given that making donations is an increasingly popular way for people to express their political
voice, the political importance of these questions has never been greater. A record number
of individuals donated to candidates during the 2008 campaign, and donations from individuals
now account for the overwhelming majority of campaign revenues (Ansolabehere et al. 2003,
Corrado et al. 2010, Wilcox 2008). Decisions to donate money are also related to subsequent
decisions to pay attention to the campaign and participate in other ways such as turning out
to vote (e.g. Graf et al. 2006).
In addition to their substantive importance, having these retrospective donation questions
on a nationally-representative survey such as the ANES is critical for methodological reasons.
Several of the most prominent studies on individual donors rely upon either campaign finance
reports or surveys of those who make large donations (e.g. Brown et al. 1995, Francia et al.
2003). As a result, these studies cannot tell us anything about the majority of individual donors
– those who make small donations – nor do they allow us to compare donors and non-donors.
Given that small donations have become a central part of campaign finance (Malbin 2010), leaving out the individuals who make small donations is a key omission. A nationally-representative
survey like the ANES is therefore essential on both methodological and substantive grounds.
Having said that, I argue in this proposal that the traditional ANES donation questions suffer
from two major shortcomings. First, they lead to inaccurate and biased reports of donation
behavior. Second, they restrict our ability to make meaningful distinctions among the recipients
of individuals’ donations.
First, the existing questions lead to systematically inaccurate reports of who actually donated
during the campaign. These inaccuracies stem from memory failures that are heightened by
a key contextual feature of the ANES post-election survey relative to the election – the fact
that people make their donations when the election outcome is uncertain, yet report whether
they donated after this uncertainty has been resolved. Moreover, and especially problematic
for explaining donation decisions, these inaccuracies are related to some of the same variables
that affect people’s willingness to donate in the first place.
To be sure, the idea that a retrospective behavioral question would be inaccurate is not novel,
as inaccuracies of reported voting turnout are one of the most frequently observed survey errors
(e.g. Traugott and Katosh 1979). Yet, when it comes to voting, such inaccuracies have been
(at least in part) tied to social desirability because voting is an admired civic activity (Belli et
al. 1999, Belli et al. 2006, Duff et al. 2007, Holbrook and Krosnick 2010).1 As compared to
voting, however, donating money is not an especially admired civic activity. In fact, for many
Americans just the opposite is the case – money in politics is viewed with suspicion (Brown
et al. 1995, Francia et al. 2003, Corrado et al. 2010). Thus, an approach to improving these
questions that centers on social desirability is unlikely to help.
The second shortcoming of the existing ANES retrospective donation questions is that they
do not allow researchers to explain why people choose to donate to some types of candidates
relative to others. This shortcoming is particularly problematic for the “individual candidate”
question, as there is no way to distinguish whether people donated to an individual running for
President or Congress (or any other office for that matter). Yet, as Francia et al. (2003) point
out, the regulations governing campaign finance for the Presidency versus that of Congress are
very different, and as a result candidates for President rely more heavily upon donations from
individuals (relative to those from political action committees). Moreover, candidates for
President also rely upon small donations from individuals more than candidates for congressional
office, and those who make small donations tend to differ in systematic ways from those who
make large donations (Malbin and Cain 2007; Wilcox 2008). Overall, then, from a methodological
point of view the ANES would be perfectly positioned to explain donor behavior, yet the
traditional questions restrict its usefulness.
1 To be sure, some researchers have also investigated memory failure in this domain, yet efforts to improve questions based solely
on memory failures have “failed to [improve] the accuracy of reported vote” (Belli et al. 2001:91). Indeed, recent work has argued
that memory failure exists in this domain only in conjunction with social desirability (e.g. Belli et al. 1999, Belli et al. 2006).
Because the first shortcoming listed above requires more explanation and empirical
motivation, I spend the bulk of this proposal showing how and why the existing ANES questions
lead to biased reports of donation behavior. Afterwards, I propose new questions for the 2012
ANES that both lead to more accurate reports and allow for a greater distinction among the
recipients of individuals’ donations.
How People Answer Questions about Past Donation Behavior
In this section I present a new theory about how people answer questions about past donations. A key point is that people typically make donations during a campaign well before it
ends. At the time of the actual donation, the outcome of the election is usually unknown, and
thus people decide to spend money supporting candidates whose victory is uncertain. Although
the degree of uncertainty differs by campaign, some uncertainty almost always exists in a contested election. Yet, when people are asked in the ANES about whether they made a donation
during the campaign, this uncertainty has been resolved because the outcome is known.
This difference between the electoral context in which people make donations and the electoral context in which they report their donation behavior is consequential in light of how
people answer retrospective behavior questions. When people are asked to report about a past
behavior, they often do not rely upon readily-available memories of specific events (Sudman,
Bradburn, and Schwarz 1996). Instead, they supplement recall by using their current attitudes and current willingness to act as a heuristic for estimating past action (Collins et al.
1985, Sudman, Bradburn, and Schwarz 1996). In some cases, particularly when there is no
change in the relevant decision context, using one’s present willingness to act as a heuristic will
lead to accurate reports of past behavior. Yet, in the case of donation behavior in which the
electoral context shifts dramatically between action and reporting that action, I propose that
using one’s current willingness to donate as a proxy for one’s previous donation behavior can
introduce significant inaccuracies.
Thus, I propose an alternative question wording that cues respondents to take this change
into account when they answer retrospective donation questions. This short cue simply reminds
people about how the winner of the election was unknown when they made their donation. It
thus more closely resembles the information that respondents had when they made their actual
donation decisions. I hypothesize that this alternative wording will lead to reported donation
levels that more closely match actual donation levels.2
Data
To test the impact of this alternative question wording, I embedded a survey experiment
in the January 2011 Vanderbilt Poll, a statewide telephone survey of voting-age residents of
Tennessee conducted January 17-23, 2011. The poll included 700 respondents and lasted
approximately thirty minutes.3 Some questions during the poll
dealt with the 2010 gubernatorial race, in which Republican Bill Haslam handily defeated
Democrat Mike McWherter in a contest to replace the current, term-limited governor.4
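The reported margin of error can be sanity-checked with the standard simple-random-sample formula. The sketch below is only an approximation: it assumes a worst-case proportion of 0.5 and ignores the design effect of the weighting described in the footnotes.

```python
import math

# Conservative margin of error (assumes p = 0.5) for a simple random
# sample of n = 700; the poll's actual design is more complex, so this
# only approximates the reported +/- 3.7%.
n = 700                       # completed interviews
z = 1.96                      # critical value for 95% confidence
moe = z * math.sqrt(0.5 * 0.5 / n)
print(f"+/- {moe:.1%}")       # prints: +/- 3.7%
```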
Throughout the campaign, Haslam was a significantly stronger candidate. He raised $17.9
million to McWherter’s $2.9 million and held significant double-digit leads over McWherter
in the polls throughout the entire campaign. He also won by a wide margin, securing 66%
of the vote as compared with McWherter’s 33%. These characteristics are important for the
current research because they indicate that this race involved much less uncertainty regarding
the outcome than many other contests. This sets up a particularly challenging test for my
proposed new question wording, as a change from an uncertain to a certain electoral context is
a key factor driving inaccurate reporting.
2 I am agnostic about the direction of the inaccuracy. In the case of voting, the concern has always been overreporting. In the
donation case, it is quite possible that people will over-report their willingness to donate to the winner and under-report their
willingness to donate to the loser (based on Mutz 1995). For now I simply note that my general prediction is that the alternative
wording will lead to more accurate reports of actual donation behavior during the campaign.
3 The margin of error is +/- 3.7%. Households with telephones were randomly sampled from the entire list of non-business
telephone numbers in the state of Tennessee. Upon calling, interviewers requested to speak with the person who had the closest
birthday. All figures presented in this paper are weighted to accurately reflect the demographic distribution of the state.
4 Note that the gap in time between the Vanderbilt Poll and the election is in line with other commonly-used surveys that
use retrospective donation questions. The 2008 ANES, for example, conducted post-election surveys through December 30, 2008,
meaning that many respondents were asked the questions well after the one-month period at which memory failures have been
found to be heightened (e.g. Belli et al. 1999). It is also in line with other studies that have investigated optimal turnout questions
– for example, Belli et al. 2006 report the results of a study conducted from December to February following an election.
The analysis I present focuses on donations to Haslam, as I was able to obtain detailed data
on the actual number of individual donors from the Haslam campaign (including those whose
donations were not itemized on campaign finance reports), and thus can verify the accuracy of
responses to each question wording. Such data were unavailable from the McWherter campaign.
Respondents were randomly assigned to receive one of two versions of a retrospective donation question. One of the questions mirrored the traditional ANES wording (albeit with
appropriate changes to make sense in the context of this campaign) by identifying the relevant
time period, asking about donations to an “individual candidate” during that time, and then
asking a follow-up question about the party of the candidate. It read:
During the election season that ended in November, did you give money to any
individual candidate running for Tennessee governor? [If yes:] Which party did that
candidate belong to?
Other respondents received an experimental question that provided a short reminder about the
conditions of electoral uncertainty that existed when they made their donation. It read:
During the election season that ended in November, before you knew who would win,
did you give money to any individual candidate running for Tennessee governor? [If
yes:] Which party did that candidate belong to?
Results
I first present an aggregate comparison of responses to the traditional and experimental
question wordings. This comparison appears in Figure 1.5 Those who received the experimental
wording reported significantly fewer donations (p < .001, two-tailed test).
5 Note that these results only include people who, in the branching question, responded that they made a donation to the
Republican candidate. This resulted in the omission of (only) one respondent who reported making a donation to the Democratic
candidate. As mentioned in the main text, the Democratic candidate (McWherter) ran a lackluster campaign with a relatively tiny
amount of fundraising. Thus, the fact that only one respondent in the survey reported donating to him is not terribly surprising.
[Figure 1: Comparing Donation Question Format. Percentage reporting a donation: Standard wording, 4.59; Experimental wording, 0.40 (*** p < .001, two-tailed test).]
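The significance test behind Figure 1 can be reproduced with a standard two-proportion z-test. The sketch below assumes the 700 respondents were split evenly between the two wordings; the exact arm sizes are not reported above, so this is an approximation.

```python
import math

# Two-proportion z-test on the Figure 1 estimates. The per-arm sample
# sizes (350 each) are an assumption, not a figure reported in the text.
p1, p2 = 0.0459, 0.0040            # standard vs. experimental wording
n1 = n2 = 350                      # assumed even split of 700 respondents
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(z / math.sqrt(2))   # two-tailed: 2 * (1 - Phi(z))
print(f"z = {z:.2f}, p < .001: {p_value < 0.001}")
```

Under this assumption the difference clears the p < .001 threshold comfortably.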
The fact that we observe such a striking difference is itself noteworthy, but the key question
is whether one of the numbers is more accurate than the other. Thus, I compared these percentages with fundraising data from the Haslam campaign, which received 18,896 contributions
from 11,872 individual contributors.6 Given that the adult population of Tennessee is 4,850,104,7
this means that approximately 0.25% of individuals donated. It is clear that the experimental
question yielded a substantially more accurate estimate of the total number of donors.8
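The benchmark arithmetic above is simple enough to verify directly, using only the figures reported in the text:

```python
# Verify the donor-rate benchmark against the survey estimates.
donors = 11_872        # individual contributors to Haslam (campaign data)
adults = 4_850_104     # Tennessee adult population (2010 U.S. Census)
rate_pct = 100 * donors / adults
print(f"{rate_pct:.3f}%")               # prints: 0.245%

# Distance of each wording's estimate (Figure 1) from the benchmark
traditional, experimental = 4.59, 0.40
print(abs(traditional - rate_pct))      # off by roughly 4.3 points
print(abs(experimental - rate_pct))     # off by roughly 0.15 points
```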
Showing that the experimental wording leads to more accurate overall reports of donation
behavior is important by itself. For those interested in explaining who donates, the degree to
which such inaccuracies vary across individuals is also a concern, particularly if factors that
affect people’s likelihood of being inaccurate are also related to their tendency to donate in
the first place. I turn to this analysis next, focusing on three characteristics that are strongly
related to donation decisions: income, education, and age. The results appear in Table 1,
rounded to two decimal points.
Although I was able to obtain data from the Haslam campaign about the overall number
of individual donors, I was unable to obtain such data by demographic sub-groups. Thus, in
what follows, I assess the data in Table 1 assuming that the sub-group percentages for the
experimental wording represent the accurate percentages of donors in each subgroup (put
differently, I am assuming that the ability of the experimental wording to provide an accurate
measure of donation behavior does not differ across the sub-groups). I believe this assumption
is reasonable in light of the fact that, based on the psychology of the survey response, I do not
have a theoretically-driven reason to believe that the experimental wording should differ in its
ability to produce accurate retrospective donation reports across the sub-groups. Given this
assumption, I can compare reported donation behavior between the traditional and experimental
question to assess differences in the likelihood of inaccurate reporting.
6 Based on personal communication with a member of the campaign. Almost all of these contributions came from Tennessee
residents.
7 Based on the 2010 U.S. Census. Note that while minors are allowed to make contributions according to Tennessee campaign
finance law, such contributions are exceedingly rare.
8 Moreover, to the extent that there is a 0.15 percentage point difference between the experimental figure and the actual figure,
this could be due in part to people reporting donations made to Haslam’s Republican primary challengers. It is unlikely that
such donations would change the calculation much, however, as none of the challengers raised significant amounts of money from
individuals relative to Haslam. Moreover, both question wordings cued people to the election season that ended in November, which
focused their attention away from the primary that ended in early August.
Table 1 reveals a few noteworthy patterns. First, inaccurate reporting in response to the
traditional question occurs across all of the demographic sub-groups, and each of these differences is statistically significant (based on two-tailed t-tests, p < .05). This pattern provides
support for the idea that people are using their current willingness to donate as a proxy for
past behavior, rather than some other mechanism tied to these demographic variables. Second,
the tendency to be inaccurate in the traditional wording is not the same across all sub-groups.
In each case people who are generally more likely to donate (older, richer, and more educated
people) are also far more likely to inaccurately report their previous donation behavior in the
traditional wording (based on two-tailed t-tests, p < .05). Thus, it appears quite reasonable
that the ANES’s traditional questions overstate the effect of these variables relative to others
(see, e.g. Rosenstone and Hansen 1993 for a comparison of effect sizes among these variables
using the traditional ANES wording).9
Overall, then, this survey experiment shows how traditional retrospective donation questions
lead to inaccurate reports, and that the likelihood of reporting inaccurately is not randomly
distributed across important sub-groups. These findings provide a key motivation for my new
proposed questions in the 2012 ANES Time Series.
9 Although my focus in Table 1 is on demographic subgroups, and despite the fact that Haslam received substantial bipartisan
support, I should also note that both Democrats and Republicans were less willing to report donating when they received the
experimental wording.
Table 1: Comparison of Reported Donation Behavior Across Sub-Groups (in percentages)

Category                                    Traditional Wording   Experimental Wording
Household Income
  Less than $75,000                                3.52                  0.00
  Greater than or equal to $75,000                 8.83                  1.44
Education
  No college degree                                3.49                  0.00
  College degree                                   7.38                  1.39
Age
  Less than 50 years old                           2.58                  0.00
  Greater than or equal to 50 years old            7.88                  0.73
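The sub-group pattern in Table 1 can be summarized as the gap between the two wordings within each sub-group. The sketch below simply recomputes those gaps from the Table 1 percentages:

```python
# Traditional-minus-experimental reporting gap within each Table 1 sub-group.
table1 = {
    "income < $75k":   (3.52, 0.00),
    "income >= $75k":  (8.83, 1.44),
    "no degree":       (3.49, 0.00),
    "degree":          (7.38, 1.39),
    "age < 50":        (2.58, 0.00),
    "age >= 50":       (7.88, 0.73),
}
gaps = {group: round(trad - exp, 2) for group, (trad, exp) in table1.items()}
for group, gap in gaps.items():
    print(f"{group}: {gap}")
# Higher-propensity donors (richer, more educated, older) show larger gaps.
```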
Proposed Questions for the 2012 ANES Time Series
My proposal involves a question-wording experiment that respects the time series yet also
includes new questions that will address the two shortcomings identified in this proposal: the
biased responses to the traditional question wording, and the fact that the traditional questions
do not allow researchers to distinguish the recipient of individuals’ donations. My proposal
involves only five questions (so, just two more than the current retrospective donation battery),
and even those questions would only be asked of a subset of ANES respondents.
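For concreteness, the proposed three-way split could be implemented as simple random assignment. The sketch below is illustrative only; the arm labels, sample size, and seed are my own, not ANES terminology.

```python
import random

# Hypothetical three-way random assignment for the proposed design.
ARMS = ("traditional", "recipient-specific", "recipient-specific-with-cue")

def assign_arms(respondent_ids, seed=2012):
    """Assign each respondent id to one of the three question batteries."""
    rng = random.Random(seed)
    return {rid: rng.choice(ARMS) for rid in respondent_ids}

assignments = assign_arms(range(1800))   # e.g., 1,800 hypothetical respondents
```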
First, for the sake of continuity with the existing time series, I propose that one-third of
respondents receive the three traditional retrospective donation questions that have been part
of the ANES for decades (and that appear on page 1 of this proposal).
Next, I propose that one-third of respondents receive questions that distinguish among various recipients of individuals’ donations:
During an election year people are often asked to make a contribution to support
campaigns. Did you give money to an individual candidate running for President? [If
yes:] Which party did that candidate belong to?
Did you give money to an individual candidate running for Senate in your state? [If
yes:] Which party did that candidate belong to?
Did you give money to an individual candidate running for the House of Representatives in your district? [If yes:] Which party did that candidate belong to?
Did you give money to a political party during this election year? [If yes:] to which
party did you give money?
Did you give any money to any other group that supported or opposed candidates?
Finally, I propose that the remaining one-third of ANES respondents receive questions that
distinguish among recipients and include an experimental wording similar to the Tennessee
survey experiment. These questions would be as follows:
During an election year people are often asked to make a contribution to support
campaigns. Before you knew who would win, did you give money to an individual
candidate running for President? [If yes:] Which party did that candidate belong to?
Before you knew who would win, did you give money to an individual candidate running for Senate in your state? [If yes:] Which party did that candidate belong to?
Before you knew who would win, did you give money to an individual candidate running for the House of Representatives in your district? [If yes:] Which party did that
candidate belong to?
Before you knew who would win, did you give money to a political party during this
election year? [If yes:] to which party did you give money?
Before you knew who would win, did you give any money to any other group that
supported or opposed candidates?
Note that my proposal will allow for direct comparison of responses between the latter two sets of
respondents. This is advantageous as it will facilitate a robustness check on the Tennessee results
using a separate election season. It will also allow for an explicit comparison of inaccuracies to
both winning and losing candidates/parties (which was not possible using the Tennessee race).
For example, it is quite possible that, whereas the inaccuracy led to over-reporting in the case
of Haslam (a winner), it would lead to underreporting in the case of a loser (because when
people use their current willingness to donate as a proxy for answering retrospective donation
questions, they are less willing to give money to someone whom they know lost the election).
Overall, then, conducting the proposed question-wording experiment as part of the 2012
ANES time series will require only two more questions, yet lead to significant gains in accuracy
and more sound explanations of who donates to which candidates and parties during political
campaigns.
References
Ansolabehere, Stephen, John M. de Figueiredo, and James M. Snyder. 2003. “Why is There so Little Money
in U.S. Politics?” Journal of Economic Perspectives 17: 105-130.
Belli, Robert F., Sean E. Moore, and John Van Hoewyk. 2006. “An Experimental Comparison of Question Forms Used to Reduce Vote Overreporting.” Electoral Studies 25: 751-759.
Belli, Robert F., Michael W. Traugott, Margaret Young, and Katherine A. McGonagle. 1999. “Reducing
Vote Over-Reporting in Surveys: Social Desirability, Memory Failure, and Source Monitoring.” Public Opinion
Quarterly 63: 90-108.
Brady, Henry E., Kay Lehman Schlozman, and Sidney Verba. 1999. “Prospecting for Participants: Rational Expectations and the Recruitment of Political Activists.” American Political Science Review 93: 153-168.
Brown, Clifford W. Jr., Lynda W. Powell, and Clyde Wilcox. 1995. Serious Money: Fundraising and Contributing in Presidential Nomination Campaigns. New York: Cambridge University Press.
Collins, L.M., Graham, J.W., Hansen, W.B. and Johnson, C.A. 1985. “Agreement Between Retrospective
Accounts of Substance Use and Earlier Reported Substance Use.” Applied Psychological Measurement 9: 301-9.
Duff, Brian, Michael J. Hanmer, Won-Ho Park, and Ismail K. White. 2007. “Good Excuses: Understanding Who Votes with an Improved Turnout Question.” Public Opinion Quarterly 71: 67-90.
Francia, Peter L., John C. Green, Paul S. Herrnson, Lynda W. Powell, and Clyde Wilcox. 2003. The Financiers of Congressional Elections. New York: Columbia University Press.
Grant, J. Tobin and Thomas J. Rudolph. 2002. “To Give or Not to Give: Modeling Individuals’ Contribution Decisions.” Political Behavior 24: 31-54.
Holbrook, Allyson L. and Jon A. Krosnick. 2010. “Social Desirability Bias in Voter Turnout Reports: Tests
Using the Item Count Technique.” Public Opinion Quarterly 74: 37-67.
Malbin, Michael J. 2009. “Small Donors, Large Donors, and the Internet: The Case for Public Financing
after Obama.” Campaign Finance Institute Working Paper.
Malbin, Michael J. and Sean A. Cain. 2007. “The Ups and Downs of Small and Large Donors: An Analysis of Pre- and Post-BCRA Contributions to Federal Candidates and Parties, 1999-2006.” Campaign Finance
Institute Report.
Mutz, Diana C. 1995. “Effects of Horse-Race Coverage on Campaign Coffers: Strategic Contributing in Presidential Primaries.” Journal of Politics 57: 1015-1042.
Rosenstone, Steven J. and John Mark Hansen. 1993. Mobilization, Participation, and Democracy in America.
New York: Macmillan.
Sudman, Seymour, Norman M. Bradburn, and Norbert Schwarz. 1996. Thinking About Answers. San Francisco: Jossey-Bass.
Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and Equality. Cambridge, MA:
Harvard University Press.
Wilcox, Clyde. 2008. “Internet Fundraising in 2008: A New Model?” The Forum: A Journal of Applied
Research in Contemporary Politics 6, article 6.