Social Desirability Bias in Measures of Partisanship

Samara Klar
Assistant Professor
Department of Political Science
University of Arizona
[email protected]
Yanna Krupnikov
Assistant Professor
Department of Political Science
Northwestern University
[email protected]
Please note: This paper is part of a much larger project that is currently in progress; therefore,
this manuscript is likely to be updated and revised. Please do not cite without permission.
Abstract: Scholars have long suggested that as a particular view or perspective becomes the
social norm, we run the risk of overestimating the prevalence of this perspective due to social
desirability bias. In this manuscript we address the possibility of this effect in one of the most
fundamental measures in American politics: the measure of partisanship. In particular, we argue
that social desirability pressures may lead individuals to misrepresent their partisan affiliations,
and instead report that they are independent. Further, we suggest that this tendency to eschew
partisanship is most likely to happen when individuals are reminded of elite partisan
disagreement and can be further exacerbated by question-wording in measures of partisanship.
We make our case through two survey experiments. The first study traces the presence of social
desirability in measures of partisanship and shows that individuals believe that identifying as an
independent makes a better impression. This effect is particularly likely when individuals are
reminded of partisan disagreement. The second study relies on a national sample and considers
this social desirability effect under different lead-ins to the partisanship question. At a time
when surveys show record rates of individuals identifying as independent, our results have
critical implications for the way scholars measure partisanship, particularly during points when
elite partisan disagreement is highly prevalent.
As a particular perspective becomes the social norm, we run a risk of overestimating
public opinion due to social desirability bias (Berinsky 2002). Psychological (Schlenker and
Weigold 1990) and sociological (Goffman 1959) theories of self-construction suggest that
individuals aim to portray themselves in such a manner that will be pleasing to others (Holbrook
and Krosnick 2010). When asked matters of opinion, we are thus susceptible to answering in a
manner that will be most pleasing – and least controversial – for the sake of optimal selfpresentation (Presser 1990). Indeed, social desirability repeatedly proves itself as a persistent
obstacle for measuring opinions and behaviors (Belli et al. 2001; Karp and Brockington 2005;
Streb et al 2008; Tourangeau and Yan 2007). In this study, we consider the potential for social
desirability bias in the measurement of one of the most fundamental questions in American
politics: partisanship.
The origins of partisanship are subject to considerable debate, with some arguing that it is
the sum of a running tally of political evaluations (Bartels 2002), others demonstrating that it is a
social identity (Green et al. 2002), and still others pointing to ideological roots
(Abramowitz and Saunders 1998). All agree, however, that partisanship is influential, persistent,
and resistant to change (Campbell et al. 1960, p. 146). It is not our goal to cast doubt on the
stability of partisanship; rather, we examine the potential for social desirability bias in the
measurement of partisan affiliation. We argue that certain categories of partisanship may be
viewed as particularly socially desirable, thus introducing social desirability bias into this
measurement. Specifically, we argue and show that political independence is viewed as an
aspirational ideal. With an experimental study, we demonstrate that this effect is exacerbated by
exposure to partisan disagreement. Furthermore, we find that certain forms of survey question
wording play a key role in stimulating social desirability bias.
The Social Desirability of Independence
Why might political independence be viewed as socially desirable? The answer lies in
political context. In recent decades, negative political news coverage has nearly tripled (Patterson
2010) and evidence suggests that we have reached a point of unprecedented vitriol in politics
(Wagner et al. 2011). In particular, American media casts partisan politics with a
disproportionately negative light (e.g. Patterson 1994; Lichter and Noyes 1997; Hallin and
Mancini 2004). Even if people avoid news coverage all-together, they are often exposed to
disagreement incidentally via Internet discussion (Wojcieszak and Mutz 2009).
In turn, media coverage often has significant influence over people’s political priorities
and impressions (Zaller 1992; Bartels 1993). McQuail (1979) explains: “We can expect the mass
media to tell us about different kinds of social roles and the accompanying expectations …" (p.
14). Taken together, negative coverage of politics, and in particular partisan politics, can lead
individuals to view partisan identification as socially undesirable, motivating them to shy away
from reporting an allegiance to a party and instead opting toward a politically “independent”
category (self-categorizing as either a “pure” or “leaning” independent). We do not suggest that
exposure to partisan disagreement will necessarily weaken actual party identification – though it
may do so for certain individuals – rather, we suggest that exposure to partisan disagreement
sullies the perceived desirability of partisanship.
Question Wording
A rich body of research demonstrates significant consequences of question wording on
responses (Smith 1987), and even party identification falls prey to such influence. In particular,
Borrelli et al. (1987) demonstrate that a party identification question beginning with “as of
today,” known as the Gallup item (Green and Schickler 1993) – as opposed to something broader
like “generally speaking,” known as the Michigan scale (Green and Schickler 1993) – results in
over-reports of support for the party with the best current standing in a race. “As of today,”
explain the authors, taps “short-term feelings” rather than focusing attention away from
contemporary political circumstances (Borrelli et al. 1987, p. 116). Others have found substantial
support for the notion that responses to the Gallup item are unduly influenced by recent
considerations (Abramson and Ostrom 1994; Clarke and Stewart 1998; MacKuen et al. 1992).
The Michigan (and ANES) wording of “generally speaking” has, on the other hand, been
described as measuring “enduring group identification rather than transitory feelings of partisan
preferences” (Miller and Shanks 1996, 125) that leads individuals to weigh long-term partisan
loyalties more heavily. We thus expect that the Gallup item, as opposed to the Michigan/ANES
item, will exacerbate the effects of contemporary social desirability norms.
Social Desirability Effects and Question Wording
Study 1
Our first study considers the possibility that political independence is seen as socially
desirable. We follow a test for social desirability described by Holbrook, Green and Krosnick
(2003) in which respondents are randomly assigned to one of two groups. In one group (“Fake
Good”), respondents are instructed to answer a question as if they are trying to make the best
possible impression on another person. In the other group (“Fake Bad”), respondents are
instructed to answer the same question as if they are trying to make the worst possible
impression. Given that social desirability pressures arise because individuals want to make the
most positive impression possible, significant differences in response patterns between these
groups indicate social desirability of a particular response category. Relying on a sample of
undergraduate students, Holbrook et al. (2003) use this approach to show social desirability
effects in questions dealing with race.
While the main study in this manuscript relies on a nationally representative sample, we
use the “Fake Good”/“Fake Bad” approach in a preliminary study that relied on a sample
recruited via Amazon Mechanical Turk (see Berinsky et al. 2012 for a discussion of the
sample; for examples of research using this sample, see Kriner and Shen 2012, Doherty 2013,
and Huber and Paris 2013, among others).1 In Table 1, we present the demographics of our
sample and, for purpose of comparison, include demographics for the 2012 American National
Election Study (ANES), the US Census, and a recent Pew survey.2
INSERT TABLE 1 ABOUT HERE
Baseline Social Desirability: Result 1
As a first step, we consider whether the “Fake Good” or “Fake Bad” instructions produce
differences in partisan response patterns. We conducted the study in two waves. First,
respondents answered a series of questions about their political preferences – including
partisanship (September 30 – October 5, 2012). Respondents were later re-contacted
and randomly assigned to either the Fake Good or the Fake Bad condition (October 10 –
October 18, 2012).3 We use the first wave to show that there are no pre-treatment differences in
partisanship between the experimental conditions in the second wave.

1 Study N=149 (accounts for both Result 1 and Result 2).
2 For additional information on the use of Mechanical Turk samples, see Kriner and Shen 2013.
When discussing the studies, we use the term “identify” merely to denote which category
a subject selected in the partisanship question. Given that subjects were asked to intentionally
make either a good or bad impression on someone, we do not mean to suggest that they changed
their partisan identifications.
INSERT FIGURE 1 ABOUT HERE
Initial results show that the Fake Good/Fake Bad manipulation produces significant
differences in partisan patterns (Figure 1). In the Fake Good condition, nearly 60% more
individuals identify as pure, non-leaning independents than in the Fake Bad condition. These
effects grow once we consider individuals who report that they lean toward one party or the
other. In this case, the percentage of respondents identifying as either leaning or pure
independents nearly doubles, from 26.3% among those instructed to give a “bad” impression to
50% among those instructed to give a “good” impression.4 In sum, these results suggest a social
desirability effect: when asked to respond in a way that makes the most positive impression,
respondents were more likely to identify as “independent.”
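A quick way to gauge whether a Fake Good/Fake Bad gap of this size exceeds sampling noise is a pooled two-proportion z-test. The sketch below is illustrative only: the cell counts are hypothetical values chosen to reproduce the reported shares (26.3% vs. 50%), not our actual cell sizes.

```python
from math import sqrt, erfc

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-tailed p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # normal-approximation two-tailed p

# Hypothetical counts matching the reported shares:
# Fake Good: 19 of 38 select "independent" (50%); Fake Bad: 10 of 38 (26.3%)
z, p = two_prop_ztest(19, 38, 10, 38)
```

With these illustrative counts, z is roughly 2.1 and p falls below 0.05, consistent with the significance level reported in note 4.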
Social Desirability and Partisan Information: Result 2
While this effect suggests social desirability, we also want to consider whether political
context exacerbates it. To do so, we again turn to the Fake Good/Fake Bad approach,
this time introducing an additional component. Specifically, we exposed a different group of
subjects to a news article about partisan disagreement and then randomly assigned them to either
the Fake Good or the Fake Bad group. If the results follow predictions, then social desirability
effects will be exacerbated among those who read the news article. Given that this study ran in
parallel with our first analysis, we can use our initial results as a helpful baseline.5

3 In total, 66% of subjects from the initial wave of the study completed the second wave. Given that we have political preferences from that initial wave, we were able to consider whether any systematic patterns predict second-wave participation; we did not identify any such patterns. N in wave 2 is 140.
4 Group difference is significant at p<0.05, two-tailed.
Figure 2 compares our partisan disagreement group to our original baseline results,
plotting the differences between those who identify as independent in the Fake Good
versus Fake Bad conditions along the x-axis. On the left side of the figure, we show that when
respondents do not read about partisan disagreement, the increase in pure independents among
those instructed to provide the Fake Good as opposed to the Fake Bad response is, as we report
above, 9.2%. The grey bar shows that this difference increases dramatically, from 9.2% to
25.8%, among those respondents who read about partisan disagreement before answering the
partisanship question.6
INSERT FIGURE 2 ABOUT HERE
This effect grows further when we consider subjects who identify as leaning
independents. After exposure to an article about partisan disagreement, nearly 47% more
subjects identify as either pure or leaning independents in the Fake Good condition, compared
with an increase of 23.6% among those who did not read the news article.7
Overall, our results indicate a social desirability component to partisanship – a tendency
that is exacerbated by information about partisan disagreement. As a next step, we consider
whether question wording affects this social desirability bias.
5 Subjects in the study were randomly assigned to either a partisan disagreement or no-information condition, and then within each informational condition to either a Fake Good or Fake Bad group.
6 Difference significant at p<0.05, two-tailed.
7 Difference significant at p<0.001, two-tailed.
Study 2
Our next study relies on subjects recruited via YouGov, which maintains a panel of over
two million Americans who are randomly invited to participate in studies. Using a process of
“sample matching,” YouGov obtains nationally representative samples from this panel (see Rivers 2006
for details). Numerous scholars (Barabas and Jerit 2010, Iyengar, Sood and Lelkes 2012, Huber
et al 2013; Jessee and Malhotra 2013) and surveys such as the Cooperative Congressional
Election Study and the Cooperative Campaign Analysis Project (Jackman and Vavreck 2009)
have relied on YouGov. Research shows that this technique produces samples of similar or
higher quality than traditional telephone surveys (Berrens et al. 2003; Sanders et al. 2007).
Our study was conducted February 18-23, 2013; Response Rate 3 is 41.2%. We show the
demographics of our sample in Table 1.
We first randomly assigned individuals to receive information about partisan
disagreement, partisan unity, or to a non-political control condition (all stimuli in Appendix 1).
We then randomly assigned subjects to a particular type of partisanship question: either the
Michigan/ANES partisanship question with the “Generally speaking…” lead-in, or the Gallup
version of the partisan question, using “as of today…” This produces six experimental groups
(Table 2).
INSERT TABLE 2 ABOUT HERE
Differences in Partisan Identification: Results
We consider our results in two ways. First, we consider the patterns across different types
of information within a particular question wording. Second, we consider patterns within a
particular type of information across question wording.
We first note that the informational manipulation produces different results depending on
question wording. When we rely on the question beginning with “Generally speaking…,” we see
that exposure to different types of information has relatively little substantive effect on the
percentage of individuals who identify as either a pure or leaning independent.
In contrast, when we rely on the question using “as of today…” the percentage of
individuals identifying as either pure or leaning independents substantially grows in the partisan
disagreement condition. When asked for their party identification, the number of individuals
identifying as independent in the partisan disagreement group increases by 10 percentage points
relative to the control condition.
Even more important, however, are comparisons within informational type. In Figure 3,
we present the difference in the percentage of independents between the “as of today” and
“generally speaking” conditions, subtracting the share in the “generally speaking” group from the
share in the “as of today” group. Looking at the control group on the left, the value of -4
indicates that the percent identifying as “independent” is 4 percentage points lower in the “as of
today” group. For those who read the news article about partisan unity, there is a 3.4 percentage
point increase in independents in the “as of today” group. Neither of these differences is
substantively large, nor is either one statistically significant.
INSERT FIGURE 3 ABOUT HERE
In contrast, in the partisan disagreement condition we see a 7.7 percentage point increase
in independents among those asked the “as of today” versus “generally speaking” party
identification question. This difference is both substantively and statistically significant. The data
thus suggest that social desirability bias exists in the measurement of party identification and is,
moreover, exacerbated by question wording – such as “as of today” lead-ins – that emphasizes
short-term considerations.
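The quantities plotted in Figure 3 are simple within-condition differences. As a sketch, the snippet below computes them from hypothetical per-cell shares of independents: the individual shares are invented for illustration, and only the resulting differences are chosen to match the -4, +3.4, and +7.7 point gaps discussed here.

```python
# Hypothetical shares of independents by condition (percent), invented so that
# the "as of today" minus "generally speaking" gaps match the text:
shares = {
    ("control", "as_of_today"): 36.0,
    ("control", "generally_speaking"): 40.0,
    ("unity", "as_of_today"): 41.4,
    ("unity", "generally_speaking"): 38.0,
    ("disagreement", "as_of_today"): 47.7,
    ("disagreement", "generally_speaking"): 40.0,
}

def wording_gap(condition):
    """Figure-3 quantity: 'as of today' share minus 'generally speaking' share."""
    return round(shares[(condition, "as_of_today")]
                 - shares[(condition, "generally_speaking")], 1)
```

Under these invented shares, `wording_gap` returns -4.0 for the control group, 3.4 for the unity group, and 7.7 for the disagreement group.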
Finally, we consider this effect more rigorously by relying on a model that predicts the
likelihood of identifying as either a pure or leaning independent. Here we can also control for
various covariates that might influence our result. Although our conditions are not imbalanced
(and thus do not necessitate this approach), a model provides a helpful check on our findings. In
this model, we interact each condition with question type and control for respondent
demographics, including ideology as measured a year prior.8 We also control for self-monitoring
– or the extent to which a person is conscious of the image they project to the outside world
(Berinsky and Lavine 2012).
Our results (Table 3) show a critical relationship between exposure to partisan
disagreement information and question wording. In our model, the “generally speaking” question
is coded as 1, while the “as of today” question is coded as 0. First, we see that the positive
coefficient on the partisan disagreement condition alone highlights that, when presented with the
“as of today” question, exposure to disagreement generally increases the likelihood of
identifying as independent. In contrast, the negative, significant coefficient on the interaction
between partisan disagreement and question type shows that the “generally speaking” lead-in
significantly decreases the likelihood that an individual will identify as some type of
independent, as compared with “as of today.” We see a similar effect when we consider only
individuals who identify as pure Independents – excluding those who identified as leaning.
INSERT TABLE 3 ABOUT HERE
8 We have these measures of ideology due to the panel structure of YouGov.
As we rely on interactions to consider the joint effects of question wording and
informational exposure, we also calculate the marginal effect of question type. Our results show
that question wording has the strongest effect when individuals are exposed to information about
partisan disagreement. Under exposure to information about partisan unity, a shift from the “as
of today” to the “generally speaking” item decreases the likelihood of identifying as either a
leaning or pure independent by 0.0006 – a decline that is neither substantively important nor
statistically significant.
In contrast, we see that under exposure to partisan disagreement, shifting from the “as of
today” to the “generally speaking” item decreases the likelihood of identifying as either a leaning
or pure independent by 0.12 – an effect that is both statistically significant and
substantively important.9 In sum, our marginal-effects results reinforce our bivariate results: under
exposure to information about partisan disagreement – a condition that exacerbates the social
desirability of independence – question wording is particularly critical.
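Marginal effects of this kind can be recovered directly from the Table 3 logit coefficients. The sketch below does so with all other covariates held at zero for simplicity; our reported estimates hold covariates at their observed values, so the resulting numbers differ slightly.

```python
from math import exp

def inv_logit(x):
    """Convert a logit index to a probability."""
    return 1.0 / (1.0 + exp(-x))

# Table 3 coefficients, leaning-and-pure-independents model.
# Question type: "generally speaking" = 1, "as of today" = 0.
B_CONST = -0.415
B_QTYPE = 0.190
B_UNITY, B_UNITY_X = 0.013, -0.185
B_DISAGREE, B_DISAGREE_X = 0.270, -0.659

def wording_effect(b_cond, b_interact):
    """Change in Pr(independent) from switching 'as of today' to
    'generally speaking' within a condition, covariates held at zero."""
    p_as_of_today = inv_logit(B_CONST + b_cond)
    p_generally = inv_logit(B_CONST + b_cond + B_QTYPE + b_interact)
    return p_generally - p_as_of_today
```

With the disagreement-condition coefficients this yields roughly -0.11, in the neighborhood of the -0.12 we report; with the unity-condition coefficients the effect is essentially zero, mirroring the negligible 0.0006 decline above.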
Conclusion
In this manuscript, we argue and show that measures of partisanship are susceptible to
social desirability, with messages regarding partisan disagreement increasing this effect.
Moreover, we find that this social desirability bias toward identifying as independent is
exacerbated when questions regarding party identification emphasize short-term considerations
over general partisan preferences.
9 Significant at p<0.1, one-tailed; p<0.05, two-tailed.

The likelihood that individuals will encounter information regarding partisan
disagreement before answering questions of party identification is quite strong, given that
questions regarding partisan disagreement often appear in surveys before respondents are asked
for their party identification. For example, in September of 2013, Gallup asked respondents:
“Which of the following statements comes closer to your view about the budget debate between
Barack Obama and the Republicans in Congress -- it is an important battle over principles and
the future direction of government (or) it is mostly an attempt by both sides to gain political
advantage?” The findings we present in this research note suggest that questions of this nature –
when combined with “as of today” question lead-ins – may lead to greater reports of
independents (both leaning and pure) than we might find otherwise.
It is not our intent to make normative statements regarding the superiority of one question
format over another; our findings merely suggest that questions focusing on the short-term will
reflect social desirability biases associated with contemporary norms. When comparing trends in
political identification over time, one must be cognizant both of real shifts that might be
occurring among the American population and of social desirability biases, which we show
can inflate reports of independence among otherwise partisan respondents.
REFERENCES
Abramowitz, Alan I., and Kyle L. Saunders. 1998. ‘‘Ideological Realignment in the U.S.
Electorate.’’ Journal of Politics 60 (3): 634–52.
Abramson, Paul R., and Charles W. Ostrom. 1994. “Question wording and partisanship: change
and continuity in party loyalties during the 1992 election campaign.” Public Opinion Quarterly
58:21–48.
Barabas, Jason and Jennifer Jerit (2010). "Are Survey Experiments Externally Valid?" American
Political Science Review 104(2): 226-242.
Bartels, Larry M. 1993. “Messages Received: The Political Impact of Media Exposure.” The
American Political Science Review 87(2): 267-285.
Bartels, Larry M. 2002. “Beyond the Running Tally: Partisan Bias in Political Perceptions.”
Political Behavior 24(2): 117-150.
Belli, R. F., Traugott, M. W., & Beckmann, M. N. 2001. “What leads to voting overreports?
Contrasts of overreporters to validated votes and admitted nonvoters in the American National
Election Studies.” Journal of Official Statistics 17(4): 479-498.
Berelson, Bernard, Paul F. Lazarsfeld, and William McPhee. 1954. Voting. Chicago: University
of Chicago Press.
Berinsky, Adam J. 2002. “Political Context and the Survey Response: The Dynamics of Racial
Policy Opinion.” Journal of Politics 64: 567–584.
Berinsky, Adam J., and Howard Lavine. 2012. “Self-Monitoring and Political Attitudes.” In
Aldrich, John H., and Kathleen M. McGraw (Eds.), Improving Public Opinion Surveys:
Interdisciplinary Innovation and the American National Election Studies. Princeton University
Press.
Berinsky, Adam J., Gregory A. Huber and Gabriel S. Lenz (2012). "Evaluating Online Labor
Markets for Experimental Research: Amazon.com's Mechanical Turk." Political Analysis 20(3):
351-368.
Berrens, R. P., A. K. Bohara, H. Jenkins-Smith, C. Silva and D. Weimer (2003). "The Advent of
Internet Surveys for Political Research: A Comparison of Telephone and Internet Samples."
Political Analysis 11(1): 1-22.
Borrelli, Stephen A., Brad Lockerbie, and Richard G. Niemi. 1987. “Why the Democrat-Republican Partisanship Gap Varies from Poll to Poll.” Public Opinion Quarterly 51(1): 115–119.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The
American Voter. New York: John Wiley & Sons.
Clarke, Harold D., and Marianne C. Stewart. 1998. “The Decline of Parties in the Minds of
Citizens.” Annual Review of Political Science 1: 357-378.
Doherty, David. 2013 “To Whom Do People Think Representatives Should Respond: Their
District or the Country?” Public Opinion Quarterly. 77 (1): 237-255.
Gerber, Alan, Gregory Huber, David Doherty, Conor Dowling, and Seth Hill. 2013. “Who
Wants to Discuss Vote Choices with Others? Polarization in Preferences for Deliberation” Public
Opinion Quarterly. 77 (2): 474-496.
Goffman, Erving. 1959. The Presentation of Self in Everyday Life. New York, NY: Anchor
Books.
Green, Donald, and Eric Schickler. 1993. “A Multiple Method Approach to the Measurement of
Party Identification.” Public Opinion Quarterly 57: 503-535.
Green, Donald, Bradley Palmquist, and Eric Schickler. 2002. Partisan Hearts and Minds. New
Haven, CT: Yale University Press.
Hallin, D. C. and P. Mancini. 2004. Comparing Media Systems: Three Models of Media and
Politics. Cambridge University Press.
Holbrook, Allyson L., and Jon A. Krosnick. 2010. “Social desirability bias in voter turnout
reports: Tests using the item count technique.” Public Opinion Quarterly 74: 37-67.
Holbrook, Allyson L., Melanie C. Green, and Jon A. Krosnick. 2003. “Telephone vs. Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias.” Public Opinion Quarterly 67: 79–125.
Hopkins, Daniel. 2009. “No More Wilder Effect, Never a Whitman Effect: When and Why Polls
Mislead about Black and Female Candidates.” Journal of Politics 71: 769-781.
Huber, Gregory and Celia Paris. 2013. “Assessing the Programmatic Equivalence Assumption in
Question Wording Experiments: Understanding Why Americans Like Assistance to the Poor
More Than Welfare” Public Opinion Quarterly. 77 (1): 385-397
Iyengar, Shanto. 1991. Is anyone responsible? How television frames political issues. Chicago:
University of Chicago Press.
Iyengar, Shanto, and Donald Kinder. 1987. News that matters: Television and American opinion.
Chicago: University of Chicago Press.
Jackman, Simon and Lynn Vavreck (2009). The 2008 Cooperative Campaign Analysis Project
2.1. YouGov/Polimetrix. Palo Alto: CA, YouGov/Polimetrix.
Jessee, Stephen, and Neil Malhotra. 2013. “Public (Mis)Perceptions of Supreme Court Ideology: A Method for Directly Comparing Citizens and Justices.” Public Opinion Quarterly 77(2): 619–634.
Karp, Jeffrey A., and David Brockington. 2005. “Social Desirability and Response Validity: A
Comparative Analysis of Overreporting Voter Turnout in Five Countries.” Journal of Politics
67(3): 825-840.
Kriner, Douglas and Francis X. Shen. 2012. “How Citizens Respond to Combat Casualties: The
Differential Impact of Local Casualties on Support for the War in Afghanistan” Public Opinion
Quarterly 76(4): 761-770.
Lazarsfeld, Paul F., Bernard Berelson, and Hazel Gaudet. 1948. The People's Choice, 2nd ed.
New York: Columbia University Press.
Lichter, S.R., and R.E. Noyes. 1997. Good Intentions Make Bad News: Why Americans Hate
Campaign Journalism. Thousand Oaks California: Sage.
MacKuen, Michael, Robert Erikson, and James Stimson. 1992. “Peasants or Bankers? The
American Electorate and the US Economy.” American Political Science Review. 86: 597-611.
McQuail, Denis. 1979. “The Influence and Effects of Mass Media.” In Mass Communication and
Society (Eds. J. Curran, M. Gurevitch, and J. Woolacott), pp. 70-93. Sage Publications.
Miller, Warren E., and J. Merrill Shanks. 1996. The New American Voter. Cambridge, MA:
Harvard University Press.
Patterson, Thomas E. 1994. Out of Order. Vintage Books.
Patterson, Thomas E. 2010. We the People. McGraw-Hill.
Postman, N. 1985. Amusing ourselves to death: Public discourse in the age of show business.
New York: Penguin Books.
Presser, Stanley. 1990. "Can Context Changes Reduce Vote Overreporting?" Public Opinion
Quarterly 54:586-93.
Schlenker, Barry R., and Michael F. Weigold. 1990. “Self-consciousness and self-presentation:
Being autonomous versus appearing autonomous.” Journal of Personality and Social
Psychology, 59(4): 820-828.
Smith, Tom W. 1987. “That Which We Call Welfare by Any Other Name Would Smell Sweeter: An Analysis of the Impact of Question Wording on Response Patterns.” Public Opinion Quarterly 51(1): 75-84.
Tourangeau, Roger, and Ting Yan. 2007. “Sensitive Questions in Surveys.” Psychological
Bulletin 133(5): 859-883.
Wagner, M W., D Mitchell, and E Theiss-Morse. 2011. "The Consequences of Political
Vilification." Paper prepared for presentation at the annual meeting of the American Political
Science Association, Seattle, WA, September 1-4.
Wojcieszak, M. E., and D. C. Mutz. 2009. "Online Groups and Political Discourse: Do Online
Discussion Spaces Facilitate exposure to Political Disagreement?" Journal of Communication
59: 40-56.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University
Press.
APPENDIX
[Control Condition]
Every February, Americans wait for Groundhog Phil in the little town of Punxsutawney, Pa.
According to folklore, Phil’s sighting of his own shadow means there will be 6 more weeks of
winter. If Phil does not see his shadow, it means “there will be an early spring.” The official
website of Punxsutawney Phil, perhaps not impartial, claims the Groundhog has issued a correct
forecast 100% of the time.
[Partisan Disagreement Condition]
As President Barack Obama begins his second term, the Democrats and Republicans in
Washington appear to be as divided as ever. Political experts predict that Americans can expect
even more of the partisan bickering that has characterized Washington in recent years. The
profound debate that has raged between the two parties has not been settled in the least. The next
two years may very well bring a continuous cycle of two parties battling it out in Washington.
[Partisan Unity Condition]
As President Barack Obama begins his second term, the Democrats and Republicans in
Washington appear to be more unified than ever. Political experts predict that Americans can
expect a new era of bipartisanship in Washington. The profound debate that has raged between
the two parties appears to be settling. The next two years may very well bring progress towards
two parties cooperating in Washington.
Table 1: Demographic Comparisons

                              Study 1      Study 2    ANES     Pew     US Census
                              (Internet    (YouGov)   2012     2013    2010
                              Sample)
Democrat (measured            41.5%        32.6%      34.6%    30.9%   ---
pre-treatment)
Republican (measured          15.8%        23.32%     27.11%   24.5%   ---
pre-treatment)
% with BA                     20.4%        25.3%      29.1%    26.9%   28.7%
Median age                    27           47.5       50-54*   48.4    ---**
% Male                        63.64%       48.4%      47.9%    48.6%   49.2%

* ANES 2012 restricts actual age; 50-54 is the median group.
** A comparison cannot be made here, as the surveys only include participants over 18.
Table 2: Experimental Groups, Study 2

                                       Article Type
Question Type              Control    Disagreement    Unity
“Generally speaking”       Group 1    Group 2         Group 3
“As of today”              Group 4    Group 5         Group 6
Table 3: Likelihood of Identifying as Independent

                                 Leaning and Pure     Pure
                                 Independents         Independents Only
Question Type (under the         0.190 (0.286)        0.323 (0.377)
control condition)
Unity Condition                  0.013 (0.280)        -0.510 (0.366)
Unity x Question Type            -0.185 (0.397)       0.041 (0.518)
Disagreement Condition           0.270 (0.275)        0.163 (0.360)
Disagreement x Question Type     -0.659 (0.397)*      -0.860 (0.526)*
Ideology                         0.214 (0.065)        0.324 (0.078)
Education                        0.058 (0.058)        -0.052 (0.072)
Race                             -0.038 (0.063)       0.065 (0.089)
Gender                           -0.508 (0.167)**     -0.242 (0.221)
Political Interest               0.127 (0.079)        0.374 (0.098)**
Self-Monitoring                  -0.051 (0.038)       -0.015 (0.068)
Registered Voter                 0.612 (0.260)**      0.430 (0.299)
Constant                         -0.415 (0.696)       -3.093 (0.865)
N                                796                  796
ITEMIZED LIST OF FIGURES
Figure 1. Percentage Identifying as Independent in “Fake Good” and “Fake Bad”
Conditions
Figure 2. Difference in Independents between “Fake Good” and “Fake Bad” Groups
Across Control and Partisan Disagreement Conditions
Figure 3. Difference in Independence between “As of Today” and “Generally Speaking”
Across Experimental Conditions