How Setting the Agenda Can Backfire: The Effect of Rhetoric about
Political Inequality on Citizen Engagement
Adam Seth Levine
Department of Government, Cornell University
[email protected]
Robyn L. Stiles
LSU Manship School of Mass Communication
[email protected]
June 2016
Abstract
American political discourse frequently calls attention to social, economic, and political problems. While
this rhetoric often generates concern within the wider citizenry (and thus has an important agenda-setting
influence), we argue that it can also backfire by reducing citizens’ engagement with politics in important
ways. We demonstrate this point by examining rhetoric about one oft-cited topic: political inequality (i.e.
the concern that some citizens’ voices have more influence during the political process than others, which violates widely-shared democratic norms). We use several field experiments on the Google AdWords platform
as well as a concurrently-run survey experiment to demonstrate this rhetoric’s divergent impact on attitudinal and behavioral measures of engagement. Our results suggest that talking about political inequality may,
ultimately, exacerbate it.
We thank our partner organization, Vote.org, and the 2016 Google Grant Program for valuable
insight and research assistance. This is a work in progress. Thank you for not citing this paper
without the authors’ permission.
American political discourse often focuses on two topics related to political inequality: unequal influence over the political process and the expanding role of money that is believed to
enable it. Headlines such as “Big money in politics emerges as rising issue in 2016 campaign”
(Gold 2015), as well as frequent rhetoric by candidates and non-partisan groups, all raise concerns about the health and functioning of our democracy stemming from the rise of super PACs,
the influence donors wield over the American political process, and the ways in which elites are
deaf to ordinary citizens’ interests and preferences (Brown 2016).
More generally, a key part of politics involves identifying, prioritizing, and addressing problems, including those related to the political process as well as a broad suite of economic
and social concerns.1 And one goal of political communication by the news media (Iyengar
and Kinder 1987, Scheufele and Tewksbury 2007), interest groups (Kollman 1998, Berry 1999,
Baumgartner et al. 2009), and candidates (Vavreck 2009) is to shape the agenda by bringing
problems to citizens’ attention.
With these considerations in mind, in this paper we ask the following: When elites call
attention to problems, how does that rhetoric affect citizen engagement in the democratic
process? Our argument is that it depends upon the type of engagement and the nature of the
problem. On the one hand, based on behavioral research on agenda-setting, we expect that
calling attention to problems will increase citizens’ expressed concern about them (e.g. Iyengar
and Kinder 1987, Scheufele and Tewksbury 2007). Yet, on the other hand, many problems make salient one or more reasons why individual citizens may not wish to voluntarily spend scarce resources on politics. When rhetoric provides such a reminder, we expect behavioral engagement to decrease.
We apply this argument to political inequality rhetoric, which calls attention to violations of
widely-shared democratic norms concerning political voice and influence (McClosky and Zaller
1984). This is a prime example of agenda-setting rhetoric that may have divergent effects on
engagement. We expect that it will increase the degree to which citizens believe that policies to
1 Our labeling of political inequality as a “problem” follows from surveys showing that most Americans would, in fact, label it a problem (e.g.: http://www.people-press.org/2015/11/23/6perceptions-of-elected-officials-and-the-role-of-money-in-politics/).
address it should be prioritized (e.g. campaign finance reforms), yet at the same time decrease
their willingness to voluntarily spend scarce resources on democratic participation such as
seeking out political information.2
We investigate these links between political inequality rhetoric, public opinion, and political
behavior using a series of field and survey experiments. Although past work (such as that cited
above) has examined the effect of agenda-setting rhetoric on some of these individual pieces, to
our knowledge previous work has not examined the possibility that there are important ways
in which it may have divergent effects on attitudinal versus behavioral political engagement.
Looking at the effect on both simultaneously reveals how, in at least one important respect,
the mere fact of talking about political inequality may in fact exacerbate it by encouraging
quiescence.
Field Experiments
We first conducted a series of four field experiments in which we partnered with Vote.org,
a 501(c)3 organization that provides information on voter registration, Election Day voting,
and early voting across all fifty states. During the 2016 campaign Vote.org is the recipient of
a Google AdWords Grant, which allows it to bid for advertisement space adjacent to search
results. We used part of the grant money to conduct experiments that investigated how various
messages related to political inequality influence people’s desire to search for more information
by clicking on the ad.
Given that Google AdWords is not (yet) a common research platform for studies of politics
and communication, we first briefly describe how it works. Each ad contains four lines: a subject line, two lines of text, and a URL. In advance we specified a long list of search terms related
to voter registration (3,168 in total) on which we would bid to have our ads appear.3 When a
user searches using one of our terms, Google conducts an instant auction in which AdWords
customers bid to have their ads shown. The highest bids win. As Google grant recipients, our
2 Information-seeking is a common measure of behavioral engagement (e.g. Brader 2006).
3 For example: Register to vote, voter registration, registration deadline, how to register, voting website, election ballot, voter registration deadline, verify registration.
Table 1: Example of AdWords Experiment

Control group                         Treatment group
Free Voter Registration               Free Voter Registration
Registering is quick, easy, & free    Wealthy Buying Elections
Register to vote now!                 Register to vote now!
www.vote.org                          www.vote.org
maximum bid was limited to $2. Given that our search terms were very popular, we often faced significant competition for ad space (and prices would routinely reach as high as $17 to $19). Yet when we won the auction, one of our ads would appear (or, more specifically, one of our experiments would run), and if the user clicked on the ad we would pay our bid amount.
The population for our field experiments is thus the set of people searching on Google for
registration information. It is a convenience sample, but one that is arguably of interest for two
main reasons. First, from the perspective of testing hypotheses about political rhetoric that
might reduce information-seeking, ours is a particularly tough test. After all, the people in
our study were already motivated enough to initiate an information search. Second, from the
perspective of expanding the set of active voices in American democracy, this is an especially
meaningful group. This group likely consists of people who are not currently registered where
they live, either because they have never been registered or because they are not registered at
their current address. We view this set of potential non-voters as interesting in and of itself.
An ideal experimental design might randomly assign users to receive either a control group
message or one of several possible treatment group messages that are theoretically of interest. In
our case, however, Vote.org requested that we set up each experiment as a two-group design in
which the control group was always their strongest message to date. This message emphasized
how it is “quick, easy, and free” to register, and the rest of the ad encouraged people to click
the link to obtain relevant voter registration information (see Table 1).
In order to test the effect of various political inequality messages, we then conducted four
two-group experiments. Each one was posted for only 9-12 hours, and they occurred in quick
succession over the course of a few days at the beginning of June 2016.4 Our four experiments
varied the ad’s second line.
4 It was not possible to implement a cluster-randomized design (Ryan 2012) because the Google grant did not allow access to the necessary demographic information.
The treatment messages varied along two dimensions: first, whether they explicitly referred
to existing inequalities in political influence and, second, whose influence was mentioned. A
summary of our four treatments appears in Figure 1. The three messages in the left-hand
column of Figure 1 all referred to existing inequalities, though differed in terms of which aspect
was highlighted. Each of them suggests that widely-shared democratic norms in which all
citizens’ voices are equally heard (McClosky and Zaller 1984) are being violated, and thus
we expect each to decrease people’s desire to seek information by clicking the link. The fourth
message acts as a counter-point to the other three by testing what happens when rhetoric refers
to ordinary citizens’ political voice but does so without explicitly referring to existing forms
of political inequality. We expect that, in contrast to the first three, this one will not depress
engagement.5 An example of one treatment group ad appears in Table 1 (the full text of all
four treatments appears in the appendix). From an external validity point of view, it’s worth
noting how our ads not only stated a problem but also included a reasonable call to action in
response to it (also, see supplemental experiments in the appendix that varied the third line
using other encouraging messages that, it turns out, produced results consistent with those
presented in the main text).
Figure 1: Summary of treatment language in our AdWords experiments.
Overall, our research design allows us to test three ways of talking about the problem and
then one plausible alternative. Our outcome measure is the most direct way in which people
5 We did not test messages in the top-right quadrant because it was difficult to craft a convincing message that adequately satisfied both criteria.
would engage with the content of our ads: whether people click on them or not. Following
Ryan (2012), we interpret clicking as a measure of information-seeking behavior.
Before presenting the results, two caveats are in order. First, although political inequality is
a frequent aspect of political discourse, we were mindful that our ability to draw internally-valid
inferences would be diminished if this topic cycled through news headlines to varying degrees
during our brief fielding periods. Thus, we searched a news database for all words in our control
and treatment groups during the fielding period. We found no evidence that they dominated
headlines or lead paragraphs during these brief periods (see appendix for details). Second,
our design does not prevent the same person from being exposed to our ads multiple times if they performed multiple searches during our brief fielding period. Yet this possibility was minimized by the conditions of the AdWords grant. Our experiments involved 3,168 very competitive search keywords and, as noted earlier, our maximum bid was set relatively low. At any given point throughout the treatment period, winning bids for the most common terms routinely ranged from just over $2 to $17 to $19 (as many advocacy groups and campaigns were competing for election-related search terms during our fielding period). This means that even if someone did conduct multiple searches during our brief fielding window, it is very unlikely that we would win the auction multiple times.
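To make this reasoning concrete, the following minimal sketch (our own illustration, not part of the original design) treats each search as an independent auction that our ad wins with some small, purely hypothetical probability; the point is simply that the chance of reaching the same user twice shrinks quickly when the per-auction win rate is low.

```python
# Back-of-the-envelope sketch (illustrative only): if our $2 maximum bid wins any
# single auction with probability p_win (a hypothetical value; the true rate is not
# observed), the chance of serving our ad to the same user at least twice across
# n_searches independent searches is small.
def prob_multiple_exposures(p_win: float, n_searches: int) -> float:
    """P(a user sees our ad at least twice across n_searches independent auctions)."""
    p_zero = (1 - p_win) ** n_searches
    p_one = n_searches * p_win * (1 - p_win) ** (n_searches - 1)
    return 1 - p_zero - p_one

# Example with hypothetical values: a 10% per-auction win rate and three searches.
print(prob_multiple_exposures(0.10, 3))  # ~0.028, i.e. under 3%
```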
Table 2 displays the results from the four experiments. We list the number of impressions
(i.e. the number of times that our ads appeared adjacent to search results), the number of
clicks, and the click rate (i.e. proportion of impressions that elicited a click). The pattern is
striking. We see strong (and substantively large) evidence that rhetoric explicitly calling
attention to political inequality reduced engagement. Our message about the “wealthy buying
elections” reduced clicks by 46.7% relative to the control group, “the system is rigged” reduced
clicks by 43.6%, and “your voice is not yet being heard” reduced clicks by 20.7%. In contrast,
we see no evidence of such a decrease in response to the “Be heard this election” message, as it
produced an almost identical click rate as the control group. This equivalence is substantively
critical. Recall that the control group is Vote.org’s most powerful message, and so it’s unlikely
that we would craft a treatment that would yield a higher click rate. A message that performs
equivalently is extremely noteworthy, as it is likely a powerful way to engage people.
Table 2: Results: AdWords Experiments

Exp 1: “Wealthy Buying Elections”
                          Control group    Treatment group
Impressions               4369             3896
Clicks                    320              152
Proportion Clicking       .07              .04
Statistical Comparison    |z| = 6.69, p = .00, two-tailed test

Exp 2: “The System is Rigged”
                          Control group    Treatment group
Impressions               3586             3720
Clicks                    289              169
Proportion Clicking       .08              .05
Statistical Comparison    |z| = 6.20, p = .00, two-tailed test

Exp 3: “Your Voice is Not Yet Being Heard”
                          Control group    Treatment group
Impressions               2678             2774
Clicks                    365              300
Proportion Clicking       .14              .11
Statistical Comparison    |z| = 3.18, p = .00, two-tailed test

Exp 4: “Be Heard this Election”
                          Control group    Treatment group
Impressions               3897             3710
Clicks                    319              286
Proportion Clicking       .08              .08
Statistical Comparison    |z| = 0.77, p = .44, two-tailed test
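The statistical comparisons in Table 2 are two-sample tests of proportions on the click rates. As a concreteness check, the following minimal sketch (our own illustration) reproduces the Experiment 1 comparison directly from the impression and click counts in Table 2, along with the 46.7% relative reduction reported in the text.

```python
# Two-proportion z-test for Experiment 1 ("Wealthy Buying Elections"), computed from
# the impression and click counts reported in Table 2.
from math import sqrt

control_clicks, control_n = 320, 4369
treatment_clicks, treatment_n = 152, 3896

p_control = control_clicks / control_n        # ~.07
p_treatment = treatment_clicks / treatment_n  # ~.04
p_pooled = (control_clicks + treatment_clicks) / (control_n + treatment_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_n + 1 / treatment_n))

z = (p_control - p_treatment) / se
relative_reduction = 1 - p_treatment / p_control

print(f"|z| = {abs(z):.2f}")                             # 6.69, matching Table 2
print(f"relative reduction = {relative_reduction:.1%}")  # 46.7%, matching the text
```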
Survey Experiment
While our field experiments were in progress we also designed a survey experiment to study the
effect of our treatments on other important measures of political engagement. Here subjects
were presented with the following scenario: “Please imagine that you just moved to a new state
and wanted to register to vote. You do a Google search for “voter registration” and a number
of results show up. In addition, as is common with Google searches, an ad appears near the
search results. It reads as follows:” Subjects were then randomly assigned to receive the full
text of our control group or one of our four treatment groups.
Afterwards subjects received a very brief questionnaire that included measures of engagement
along with demographic questions (see the appendix for precise wording). Two questions asked
about their likelihood of taking action: how likely they would be to click the ad and how likely
they would be to vote in November’s election for President. The former question acted as a test of
face validity for the hypothetical survey-based scenario. The voting question addressed one
potential concern from the field experiments, which is that perhaps people were less interested
in seeking information by clicking on the ads but their broader desire to participate in the
electoral process did not decrease. Our final engagement measure assessed agenda-setting by
asking respondents how much they believed the federal government should prioritize passing
laws that would impose new limits on campaign spending. Overall, we expected that our three
inequality messages would reduce people’s likelihood of clicking and voting. We also expected
to observe at least some evidence of agenda-setting, most likely in response to the “wealthy
buying elections” message given that it is most closely related to the proposed policy solution
we asked about.
We recruited a diverse national sample (N=515) through the online platform Amazon Mechanical Turk (AMT). While AMT samples are not nationally representative – in particular,
they tend to be younger and more Democratic – they are nevertheless reasonably diverse and
also useful for drawing inferences from experiments in situations like ours when we do not expect the age or ideology of our respondents to condition how they respond to the treatments
(Berinsky et al. 2012, Krupnikov and Levine 2014). Our sample mirrored these previous samples (see appendix for details, along with evidence that age and party identification did not
condition the results).
Figure 2 shows the average values for each experimental group across our three engagement
measures.6 Three patterns stand out. First, mirroring our field experiment results, we find
that each of the three treatments mentioning existing forms of inequality in political influence
reduced people’s stated likelihood of clicking (Wealthy: diff = -0.23, t = -5.00, p = .00; System: diff = -0.22, t = -4.78, p = .00; Your Voice: diff = -0.09, t = -1.97, p = .05). These results serve as a
valuable robustness check to the AdWords results. Second, each of the three inequality messages
also reduced the likelihood that people said they would vote this coming November (Wealthy: diff = -0.09, t = -2.01, p = .05; System: diff = -0.07, t = -1.72, p = .09; Your Voice: diff = -0.10, t = -2.35, p = .02), showing how these messages can undermine people’s attitudes toward electoral
engagement in a much broader sense than simply their desire to seek information from voter
registration-focused ads. Lastly, we find strong evidence for agenda-setting in response to the
6 See appendix for more details on the results and robustness checks.
“wealthy buying elections” message (diff=0.08, t = 2.17, p = .03), which is the one in which
the problem statement most closely matched the proposed solution. Looking across measures
of engagement, the “wealthy buying elections” message provides the most vivid illustration of
how agenda-setting rhetoric can simultaneously heighten and diminish engagement.
Figure 2: Survey experiment results (N=515). All outcome measures re-coded to be 0-1. 90% confidence
intervals shown. See main text and appendix for details on question wording & statistical comparisons.
Conclusion and Implications
Oftentimes a key goal of political communication is to raise the salience of problems within
the broader citizenry. Focusing on one problem often raised in American political discourse
– political inequality – we have argued that although this kind of rhetoric can impact citizens’ perceptions of the political agenda, it can also undermine their desire to engage in the political process. In terms of voter decision-making, even though this rhetoric is arguably designed to spur democratic participation (Brown 2016), our results are consistent with survey-based evidence showing that perceptions of unequal influence reduce electoral engagement (e.g.
Rosenstone and Hansen 1993). And, stepping back, they underscore a broader point about
political rhetoric, which is that the mere fact of talking about problems can often make them
worse (Levine 2015).
Our results also have important normative implications. As Druckman (2014:481) notes,
elite political communication powerfully shapes people’s preferences (toward policies, priorities,
and political action) which in turn are a critical basis for democratic responsiveness: “The
important question is to what extent political communication – broadly defined to include
information provided by the mass media, interest and advocacy groups, and political elites –
helps individuals affected by a policy to recognize that they are affected, and how they are
affected, and then to what extent it affords them the opportunity to take appropriate action
in response.” Previous studies of agenda-setting rhetoric have mostly focused on how calling
attention to problems affects citizens’ political priorities (Scheufele and Tewksbury 2007), yet our findings show why it is critical to study such rhetoric’s (potentially divergent) impact on action as well.
Possible extensions to our findings are twofold. One is to investigate engagement when
rhetoric calls attention to other problems – for example, other issues that also could be seen
as violating democratic norms (and the law) such as contracting fraud, lobbying scandals, and
voter suppression. A second is to vary the source by, for example, studying the impact of
such rhetoric when it comes directly from a candidate versus a non-partisan source. For
instance, candidates often accompany rhetoric about problems with not only an opportunity to
act (like in our ads), but also statements about how they personally will solve the problem. It
is reasonable to ask how these source and message combinations may affect citizens’ responses.
Lastly, our results underscore the power of elite rhetoric in debates about campaign finance
(cf. Grant and Rudolph 2003) and suggest caution when communicating with the public about
the unequal influence that provides a central rationale for reform. If agenda-setting is the goal,
then calling attention to the “wealthy buying elections” can be highly persuasive. But doing so
involves other costs, and rhetoric like “Be heard this election” that avoids explicitly calling
attention to existing inequalities may sometimes be preferable, especially depending upon the
type of citizen engagement that is intended.
Bibliography
Baumgartner, Frank R., Jeffrey M. Berry, Marie Hojnacki, David C. Kimball, and Beth L.
Leech. 2009. Lobbying and Policy Change. Chicago: University of Chicago Press.
Berinsky, Adam J., Gregory Huber, and Gabriel Lenz. 2012. “Evaluating Online Labor Markets
for Experimental Research: Amazon.com’s Mechanical Turk.” Political Analysis 20: 351-368.
Berry, Jeffrey M. 1999. The New Liberalism. Brookings.
Brader, Ted. 2006. Campaigning for Hearts and Minds. Chicago: University of Chicago Press.
Brown, Heath. 2016. Pay to Play Politics: How Money Defines the American Democracy.
Praeger.
Druckman, James N. 2014. “Pathologies of Studying Public Opinion, Political Communication,
and Democratic Responsiveness.” Political Communication 31: 467-492.
Gold, Matea. 2015. “Big Money in Politics Emerges as Rising Issue in 2016 Campaign.” Washington Post, April 19.
Grant, J. Tobin and Thomas J. Rudolph. 2003. “Value Conflict, Group Affect, and the Issue
of Campaign Finance.” American Journal of Political Science 47: 453-469.
Iyengar, Shanto and Donald R. Kinder. 1987. News that Matters. University of Chicago Press.
Kollman, Ken. 1998. Outside Lobbying. University of Chicago Press.
Krupnikov, Yanna and Adam Seth Levine. 2014. “Cross-Sample Comparisons and External
Validity.” Journal of Experimental Political Science 1: 1-21.
Levine, Adam Seth. 2015. American Insecurity. Princeton University Press.
McClosky, Herbert and John Zaller. 1984. The American Ethos. Harvard University Press.
Mutz, Diana. 1998. Impersonal Influence. Cambridge University Press.
Rosenstone, Steven J. and John Mark Hansen. 1993. Mobilization, Participation, and Democracy in America. Macmillan.
Ryan, Timothy J. 2012. “What Makes Us Click? Demonstrating Incentives for Angry Discourse with
Digital-Age Field Experiments.” Journal of Politics 74: 1138-1152.
Scheufele, Dietram A. and David Tewksbury. 2007. “Framing, Agenda Setting, and Priming:
The Evolution of Three Media Effects Models.” Journal of Communication 57: 9-20.
Vavreck, Lynn. 2009. The Message Matters. Princeton University Press.
Appendix: Summary of AdWords experiments in main text and supplemental experiments/analyses
Table 3 contains a visual summary of the ads from each of our four main experiments.
Table 3: Full Text of Four Primary AdWords Experiments

Experiment 1: “Wealthy Buying Elections”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     Wealthy Buying Elections
Register to vote now!                  Register to vote now!
www.vote.org                           www.vote.org

Experiment 2: “The System is Rigged”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     The System is Rigged
Register to vote now!                  Register to vote now!
www.vote.org                           www.vote.org

Experiment 3: “Your Voice is Not Yet Being Heard”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     Your Voice is Not Yet Being Heard
Register to vote now!                  Register to vote now!
www.vote.org                           www.vote.org

Experiment 4: “Be Heard this Election”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     Be Heard this Election
Register to vote now!                  Register to vote now!
www.vote.org                           www.vote.org
Note that, in addition to the above four experiments, we also conducted two others that used
“wealthy buying elections” and “the system is rigged” but then substituted “Join millions who
agree” in the third line (see Table 4 below). The motivation behind these experiments was to
see if this kind of impersonal cue that signaled an encouraging descriptive norm of behavior (cf.
Mutz 1998) might overcome the negative effect of the political inequality language.
It did not. Compared with the control group, an ad with “Wealthy Buying Elections // Join Millions Who Agree” yielded a significantly lower click rate (0.040 versus 0.064, |z| = 4.93, p = .00, two-tailed test), as did one with “The System is Rigged // Join Millions Who Agree” (0.040 versus
0.054, |z| = 4.87, p = .00, two-tailed test). To be sure, there is some indication that the encouraging impersonal cue “worked”, as the decrease in clicks was proportionately smaller when
including the encouraging message as opposed to not including it, but the broader point is that
we have no evidence that the impersonal cues were sufficient for overcoming the negative effect
generated by these two forms of political inequality rhetoric.
In addition, using the Access World News database, we searched five highly-circulated newspapers (USA Today, LA Times, NY Post, Daily News, and AM News) for the hours that our ads were in the field, looking for mentions of the
Table 4: Full Text of Supplemental AdWords Experiments

Supplemental Experiment 1: “Wealthy Buying Elections”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     Wealthy Buying Elections
Register to vote now!                  Join Millions Who Agree
www.vote.org                           www.vote.org

Supplemental Experiment 2: “The System is Rigged”
Control group                          Treatment group
Free Voter Registration                Free Voter Registration
Registering is quick, easy, & free     The System is Rigged
Register to vote now!                  Join Millions Who Agree
www.vote.org                           www.vote.org
ad terms (and related terms) either in a headline or in the lead/first paragraph.
For each of our four experiments, we found zero instances in which our terms appeared.
Appendix: Survey Experiment Details
We recruited 515 subjects via Amazon’s Mechanical Turk in early June 2016. Relative to the
nation, our respondents were more likely to be Democratic (45% identify as either strong or not
very strong Democrats), younger (median age of 31), more educated (only 53% did not have
a college degree), less female (38% female), and had slightly lower median household income
(between $40,000 and $45,000).
Here are details about the treatments and questions included as part of our survey experiment.
All subjects received the following:
Please imagine that you just moved to a new state and wanted to register to vote. You do
a Google search for “voter registration” and a number of results show up. In addition, as is
common with Google searches, an ad appears near the search results. It reads as follows: [Respondents were then randomly assigned to be shown one of the five ads (control group or one of
our four treatment groups) from our main field experiments].
Here are our three measures of engagement:
–How likely or unlikely would you be to click on the ad? Extremely likely...Extremely unlikely
–[Based on Gallup likely voter question:] Next, we’d like you to rate your chances of voting
in November’s election for President on a scale of 1-10. If 1 represents someone who definitely
will not vote and 10 represents someone who definitely will vote, where on this scale of 1 to 10
would you place yourself? 1 – Definitely WILL NOT vote ... 10 – Definitely WILL vote
–At any given time government officials have many problems to deal with. To what extent do
you think federal government officials should prioritize crafting policies that would impose new
limits on campaign spending? Top priority...Not a priority at all
What is your gender? Female, Male, with 1=female and 0=male
What is your age? Open-ended, with range of 18-75, recoded to a 0-1 range
What is the highest level of education that you have earned? Less than high school degree, high
school degree or equivalent, associate’s degree, some college, bachelor’s degree, graduate degree;
coded from 0-1
Generally speaking, do you think of yourself as a Republican, a Democrat, an Independent, or
what? [If Democrat or Republican:] Would you call yourself a strong [D/R] or a not very strong
[D/R]? [If Independent or other:] Do you think of yourself as closer to the Republican Party or
the Democratic Party? Coded from 0-1, with 0=“Strong Republican” and 1=“Strong Democrat”
What is your best guess of the income of all members of your family living with you, before
taxes? 25 categories ranging from “less than $3,000” to “$150,000 or more”; Coded from 0-1
Appendix: Additional Survey Experiment Results
Results from a one-way ANOVA test comparing gender, age, income, education, and party
identification across our five experimental groups uncovered mild evidence of imbalance in age
and income. Thus, here we show that the results reported in the main text are robust to controlling for these attributes.
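A minimal sketch of this balance check follows, assuming a respondent-level data frame with hypothetical column names (a 'condition' assignment plus the 0-1 covariates described above); it is our own illustration of the procedure rather than the original analysis code.

```python
# Illustrative one-way ANOVA balance check across the five experimental groups.
# Column names ('condition', 'female', 'age', 'income', 'education', 'pid') are
# hypothetical; df holds one row per respondent with covariates coded 0-1.
import pandas as pd
from scipy import stats

def balance_check(df: pd.DataFrame,
                  covariates=("female", "age", "income", "education", "pid")) -> None:
    for cov in covariates:
        groups = [g[cov].dropna() for _, g in df.groupby("condition")]
        f_stat, p_val = stats.f_oneway(*groups)
        print(f"{cov}: F = {f_stat:.2f}, p = {p_val:.3f}")
```

Covariates showing imbalance (here, age and income) are then included as controls, as in Table 6.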
In addition, at first blush it also seemed reasonable to suppose that the effect of our treatments
might differ by age (i.e. perhaps younger people might be more affected by our treatments
because they have not yet become habitual voters) or party identification (i.e. perhaps because
Democratic elites have traditionally been more outspoken about the role of money in politics).
Although we did find evidence that these variables were in some cases related to our outcome
measures (see Table 6 below), when we estimated models that included interaction terms between our treatments and age, and in addition between our treatments and party identification,
we found no evidence of any heterogeneous treatment effects. The lack of an interaction with
party identification reinforces the fact that, especially during the 2016 presidential race, when our experiments were in the field, rhetoric about political inequality and money in
politics has come from candidates and organizations across the ideological spectrum (including
Clinton, Sanders, O’Malley, Trump, Cruz, Kasich, and Christie, among others; see Brown 2016).
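A minimal sketch of these heterogeneous-effects models appears below, again with hypothetical column names ('click' as the 0-1 outcome, 'condition' as the assigned ad, and 'age' and 'pid' coded 0-1); it illustrates the specification rather than reproducing the original code.

```python
# Illustrative OLS models interacting the treatment indicators with age and with
# party identification (column names are hypothetical; see the balance-check sketch).
import pandas as pd
import statsmodels.formula.api as smf

def interaction_models(df: pd.DataFrame):
    base = "C(condition, Treatment(reference='control'))"
    by_age = smf.ols(f"click ~ {base} * age", data=df).fit()
    by_pid = smf.ols(f"click ~ {base} * pid", data=df).fit()
    return by_age, by_pid  # inspect .summary() for the interaction-term estimates
```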
Table 5: Effect of Talking about Problems on Citizen Engagement (without control variables)

                                       Click?             Vote?              Priority?
                                       Coef.    (s.e.)    Coef.    (s.e.)    Coef.    (s.e.)
“Wealthy buying elections”             -0.23*** (0.05)    -0.09**  (0.04)     0.08**  (0.04)
“The system is rigged”                 -0.22*** (0.05)    -0.07*   (0.04)     0.03    (0.04)
“Your voice is not yet being heard”    -0.09**  (0.04)    -0.10**  (0.04)    -0.03    (0.04)
“Be heard this election”                0.01    (0.04)    -0.01    (0.04)    -0.04    (0.04)
Constant                                0.43*** (0.03)     0.85*** (0.03)     0.63*** (0.03)
N                                       515                515                515
R2                                      0.10               0.02               0.03
Notes: *p < .10, **p < .05, ***p < .01 (two-tailed tests). Ordinary least squares estimation. All dependent
variables are coded from 0-1, with higher numbers referring to increased likelihood of clicking, increased
likelihood of voting in November’s election for President, and belief that campaign finance reform should
receive higher priority. Each independent variable is an indicator that takes on a value of 1 if the individual received that text and 0 otherwise.
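A minimal sketch of how the Table 5 specification could be estimated follows, assuming a respondent-level data frame with hypothetical column names ('condition' for the assigned ad and 0-1 outcomes 'click', 'vote', and 'priority'); this is our own illustration, not the original analysis code.

```python
# Illustrative OLS estimation matching the Table 5 setup: each 0-1 outcome is
# regressed on indicators for the four treatment ads, with the control ad as the
# omitted category. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_engagement_models(df: pd.DataFrame) -> dict:
    models = {}
    for outcome in ("click", "vote", "priority"):
        formula = f"{outcome} ~ C(condition, Treatment(reference='control'))"
        models[outcome] = smf.ols(formula, data=df).fit()
    return models  # coefficients on the condition dummies correspond to Table 5

# For the Table 6 specification, append '+ female + age + income + education + pid'.
```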
Table 6: Effect of Talking about Problems on Citizen Engagement (with control variables)

                                       Click?             Vote?              Priority?
                                       Coef.    (s.e.)    Coef.    (s.e.)    Coef.    (s.e.)
“Wealthy buying elections”             -0.22*** (0.05)    -0.11**  (0.04)     0.08**  (0.04)
“The system is rigged”                 -0.22*** (0.05)    -0.10**  (0.04)     0.03    (0.04)
“Your voice is not yet being heard”    -0.09**  (0.04)    -0.10**  (0.04)    -0.04    (0.04)
“Be heard this election”                0.02    (0.04)    -0.02    (0.04)    -0.06    (0.04)
Female                                  0.02    (0.03)     0.08*** (0.03)    -0.04    (0.02)
Age                                    -0.06    (0.08)     0.21*** (0.07)    -0.01    (0.06)
Income                                  0.09    (0.07)     0.15**  (0.06)    -0.06    (0.05)
Education                              -0.11*   (0.06)     0.10*   (0.06)    -0.02    (0.05)
Party Identification                    0.07    (0.05)     0.09**  (0.04)     0.20*** (0.04)
Constant                                0.43*** (0.07)     0.56*** (0.06)     0.56*** (0.05)
N                                       508                508                508
R2                                      0.11               0.09               0.10
Notes: *p < .10, **p < .05, ***p < .01 (two-tailed tests). Ordinary least squares estimation. All dependent
variables are coded from 0-1, with higher numbers referring to increased likelihood of clicking, increased
likelihood of voting in November’s election for President, and belief that campaign finance reform should
receive higher priority. Each independent variable is either an indicator that takes on a value of 1 if the individual received that text and 0 otherwise, or it is coded 0-1 as noted earlier in the appendix.