Survey Practice

Racial Attitudes and Race of Interviewer Item Non-Response
Matt Barreto1, Loren Collingwood2, Chris Parker1, Francisco I. Pedraza3
1 University of Washington
2 University of California, Riverside
3 Texas A&M University
Corresponding Author: Loren Collingwood, [email protected]
Word Count: 2,693 (not including tables, figures, and their references)
Racial Attitudes and Race of Interviewer Item Non-Response
Abstract. We evaluate the relationship between race of interviewer (ROI) and racial attitudes, using original survey data that include responses to the interviewer-posed question, “What is my race?” A large percentage of our respondents answer “don’t know.” Traditional racial
attitude models tend to exclude ROI altogether, whereas alternative racial attitude models
include perceived ROI, but drop “don’t know” respondents. We propose a new modeling
strategy that includes “don’t know” respondents and find that in general this modeling
strategy is preferred because it leads to better model fit and fewer type II errors. We
suggest that researchers control for “don’t know” ROI responses in any racial attitude
analysis.
A good deal of survey research shows that interviewer characteristics can
generate meaningful differences in responses. Studies of interviewer effects have in
common two elements. First, they conclude that, in general, characteristics of
interviewers (e.g., race, gender) are most likely to affect survey questions that are related
to that interviewer characteristic (Hatchett and Schuman 1975; Schaeffer et al. 2010; Sudman and Bradburn 1974). For example, the race of the interviewer (ROI) influences responses to racial attitude items (Davis 1997a; Davis 1997b; Silver 1998; Dawson 2001).1 Second, the preponderance of the ROI literature excludes from the analysis respondents who say they “don’t know” the race of their interviewer (item non-response).
In this research note, we build on the ROI literature by examining whether “don’t know”
responses to an interviewer’s question, “What is my race?” affect White respondents’
answers to racial attitude questions about African-Americans.
Previous work has tended to exclude telephone respondents who answer ROI
queries with “don’t know,” and we argue that this exclusion may lead to omitted-variable
bias when analyzing racial attitude items. We find that White respondents who “don’t
know” the ROI generally express more racially liberal views than respondents who
perceive their interviewers to be either Black or White. Additional analysis reveals that ROI non-response among Whites is associated with being interviewed by a Black caller and with refusal to answer questions about income. In sum, our findings motivate a novel caution for analysts of racial attitudes by suggesting that accounting for “don’t know” responses to perceived ROI queries can offer a minor but nontrivial improvement to statistical models of racial attitudes.
The ROI literature can be strengthened by a focused study of how ROI item non-response affects respondents’ stated racial attitudes. Previous research treats these responses as sincere and generally assumes that item non-response is non-directional.
But can we just delete anyone who says, “don’t know,” to the question, “What is my
race?” This seems a risky strategy, as Brehm (1993) has demonstrated that respondents
and non-respondents are different across many issue and demographic domains. In the
next section we review the ROI literature. We then discuss our data, present results, and
conclude with the study’s implications for minimizing type I and type II errors in
hypothesis testing related to racial attitudes.
Race of Interviewer Effects
Race of interviewer (ROI) effects serve as a useful indicator of interpersonal
tension between racial groups and are more than an artifact of survey research (Davis
1997b). By imagining the survey experience as sampling real-world social interactions,
the ROI literature has developed beyond earlier efforts to match participants and
interviewers by race, a practice rooted in the finding that responses from cases in which
the race of the survey participant matched that of the interviewer are “more frank” than
responses collected from unmatched-race interviews (Hatchett and Schuman 1975).
1. However, see Anderson, Silver, and Abramson (1988a) and Davis and Silver (2003) for discussions of findings where ROI effects are not apparent for every race-related item in a survey. Also, Schuman and Converse (1971) use the idea of “mere presence” to explain how an interviewer’s race can influence the formulation of a response to a survey item, even when that question is not about race, per se.
Many studies document the effects of the actual race of the interviewer (ROI) on a variety of political and social attitudes in telephone interviews (Schuman and Converse 1971; Hatchett and Schuman 1975; Weeks and Moore 1981; Cotter, Cohen and Coulter 1982; Anderson, Silver and Abramson 1988b; Davis 1997a; Davis 1997b; Finkel, Guterbock and Borg 1991). ROI effects are generally interpreted as an effort by members of ‘opposite’ racial groups to relate positively to out-group members (or to avoid projecting a threatening image). The socially desirable strategy is to modify responses to items that trigger in-group/out-group differences based on race, such as racial attitudes, vote choice in contests between an African American and a White candidate, and redistributive policies (Catania et al. 1996; Ford and Norris 1997; Hatchett and Schuman 1975).
Within the ROI literature, Davis (1997a) and Davis and Silver (2003) replicate these analyses using perceived race of interviewer as a variable. Whether or not the perceived ROI matches the actual ROI, these scholars find that expressed racial attitudes vary depending on perceived ROI. Davis and Silver (2003) find that 30% of their White respondents do not answer the question, “Finally, what do you think is my racial background?” In our analyses, based on 2008 data, we find that 37% of Whites say “don’t know” to a similar question. While these studies find a negative association between “don’t know” responses and political sophistication among Black respondents, it remains an open question whether this pattern carries over to White respondents or applies to racial attitudes. The absence of such an analysis is surprising, given that it concerns over one-third of White respondents and that many scholars have found that item non-response can bias survey results (Krosnick et al. 2002; Bradburn and Sudman 1988; Krosnick 1991; Giljam and Granberg 1993). We now turn to filling this gap empirically.
Data and Methods
We use survey data on Washington State registered voters collected in October 2008 with a racially diverse set of interviewers. In full, we completed 872 interviews, including 615 with White, non-Hispanic respondents, who are the focus of this study.2 We use only the sample of White respondents because Whites are 90% of our Washington State sample, and, following Berinsky (2004), we assume that Blacks and Whites are subject to different types of social pressures in the survey interview. We further subset the data to respondents who had either a White or a Black interviewer and who perceived their interviewer to be White or Black or chose the “Don’t Know/Refused” option. This results in a dataset of 357 interviews.

2. We also had an oversample of 197 African American respondents; however, the cell sizes became too small for in-depth analysis.
Table 1 shows the raw distribution of responses across actual and perceived race of interviewer. About 6% of respondents refused to answer and 31% said “don’t know”; grouping these two choices together, fully 37% of our respondents fail to answer the ROI query. Eight percent of respondents said the interviewer was Black, and 56% said White. Among White respondents interviewed by a White interviewer, 67.3% (152/226) correctly indicated that their interviewer was White, less than 1% indicated that their interviewer was African American, and 31.8% said “don’t know” or refused to answer the question. Among Whites interviewed by an African American, 19.8% (26/131) correctly indicated that their interviewer was Black, while 35.1% incorrectly indicated that their interviewer was White and 45% said “don’t know” or refused. Another way to think about this (row-wise) is that 76.7% of White respondents who perceived their interviewer to be White were correct in their perception, whereas 92.8% of Whites who said their interviewer was Black were correct.
Table 1: Distribution of Interviews by Actual and Perceived Race of Interviewer (Among White Respondents)

Perceived Race of Int.      Actual ROI: White   Actual ROI: Black   Total
White                                     152                  46     198
Black                                       2                  26      28
Don't Know/Refused                         72                  59     131
Total                                     226                 131     357
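For readers who want to reproduce the Table 1 tabulations from a respondent-level file, a minimal sketch follows. It is not the authors' code: the data frame and the column names (actual_roi, perceived_roi) are hypothetical stand-ins, and the rows shown are toy values rather than the survey data.

# Sketch: cross-tabulating perceived by actual race of interviewer with pandas.
import pandas as pd

# Hypothetical stand-in for the White-respondent analysis file
df = pd.DataFrame({
    "actual_roi":    ["White", "Black", "White", "Black", "White"],
    "perceived_roi": ["White", "DK/Refused", "DK/Refused", "Black", "White"],
})

# Raw counts with row and column totals, as in Table 1
counts = pd.crosstab(df["perceived_roi"], df["actual_roi"], margins=True)

# Column-wise percentages: e.g., the share of respondents with a White
# interviewer who correctly perceived a White interviewer (152/226 in Table 1)
col_pct = pd.crosstab(df["perceived_roi"], df["actual_roi"], normalize="columns") * 100

# Row-wise percentages: e.g., the share of "perceived White" respondents whose
# interviewer actually was White (152/198 in Table 1)
row_pct = pd.crosstab(df["perceived_roi"], df["actual_roi"], normalize="index") * 100

print(counts, col_pct.round(1), row_pct.round(1), sep="\n\n")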
We conduct two analyses. First, we assess the impact of interviewer effects across three different racial attitude items.3 Second, we model ROI non-response to help explain our initial findings. The dependent variables related to racial attitudes are:

- If Blacks would only try harder, they would be just as well off as Whites.
- Blacks are more violent than Whites.
- Most Blacks who are on welfare programs could get a job if they really tried.

Each variable is coded from 0 to 3, where 0 = strongly disagree and 3 = strongly agree. We anticipate finding significant ROI differences on the racial attitude questions because perceived ROI affects how respondents evaluate questions that are clearly connected to race.

3. The Cronbach's alpha for these three items is 0.69, and a factor analysis generates a single factor, corroborating that the items scale together. However, we present individual items to show how results may differ by item.
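As a point of reference for the reliability figure in footnote 3, the following minimal sketch computes Cronbach's alpha from the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the summed scale). It is illustrative rather than the authors' code; the item column names and toy responses are hypothetical.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha over the columns (items) of a DataFrame."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy responses coded 0-3, one column per racial attitude item
items = pd.DataFrame({
    "try_harder":   [0, 1, 3, 2, 1, 0],
    "more_violent": [0, 2, 3, 2, 0, 1],
    "welfare_job":  [1, 1, 3, 3, 1, 0],
})
print(round(cronbach_alpha(items), 2))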
In addition to our main interest in questions pertaining to the race of the interviewer, we include standard control variables (age, education level, household income, gender, party, and ideology; see Appendix A), all known correlates of racial attitudes. We present a series of three ordered logit regression models for each racial attitude item. We also present a logistic regression model to examine the predictors of ROI item non-response. This model includes the same covariates as the previous models but adds an indicator for an actually Black interviewer,4 as we postulate that the actual race of the interviewer will influence a respondent's willingness to answer the ROI question.
The Findings
The first model is what we call the traditional model, in which no ROI items are included. The second model, the traditional ROI model, includes a dummy indicator for perceived White ROI (perceived Black ROI is the comparison category) but drops respondents who do not answer the ROI query. This is akin to assuming that “don’t know” ROI responses are unrelated to racial attitudes. The final model is the same as the second but retains the “don’t know” respondents, adds a dummy indicator for perceived Black ROI, and treats ROI “don’t know” as the comparison category.
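To make these three specifications concrete, the sketch below shows one way they could be estimated with the ordered logit implementation in statsmodels. It is illustrative rather than the authors' code: the data frame is a synthetic placeholder, the column names are hypothetical, and the AIC comparison simply mirrors the one reported in the tables that follow.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in for the 357-case analysis file; variable names are hypothetical
rng = np.random.default_rng(0)
n = 357
perceived = rng.choice(["white", "black", "dk"], size=n, p=[0.56, 0.08, 0.36])
df = pd.DataFrame({
    "try_harder":  rng.integers(0, 4, n),     # 0-3 attitude item
    "ideology":    rng.integers(0, 7, n),
    "female":      rng.integers(0, 2, n),
    "age":         rng.integers(18, 97, n),
    "education":   rng.integers(0, 3, n),
    "inc_low":     rng.integers(0, 2, n),
    "inc_low_med": rng.integers(0, 2, n),
    "inc_med":     rng.integers(0, 2, n),
    "inc_missing": rng.integers(0, 2, n),
    "democrat":    rng.integers(0, 2, n),
    "republican":  rng.integers(0, 2, n),
})
df["roi_perceived_white"] = (perceived == "white").astype(int)
df["roi_perceived_black"] = (perceived == "black").astype(int)
df["roi_perceived_dk"] = (perceived == "dk").astype(int)

controls = ["ideology", "female", "age", "education", "inc_low", "inc_low_med",
            "inc_med", "inc_missing", "democrat", "republican"]
ordinal = pd.CategoricalDtype(categories=[0, 1, 2, 3], ordered=True)
y_all = df["try_harder"].astype(ordinal)

# Model 1 (traditional): controls only, all respondents
m1 = OrderedModel(y_all, df[controls], distr="logit").fit(method="bfgs", disp=False)

# Model 2 (traditional ROI): drop ROI "don't know" respondents;
# perceived Black is the omitted comparison category
sub = df[df["roi_perceived_dk"] == 0]
m2 = OrderedModel(sub["try_harder"].astype(ordinal),
                  sub[controls + ["roi_perceived_white"]],
                  distr="logit").fit(method="bfgs", disp=False)

# Model 3 (alternative): keep everyone; dummies for perceived White and Black,
# with "don't know" as the omitted comparison category
m3 = OrderedModel(y_all, df[controls + ["roi_perceived_white", "roi_perceived_black"]],
                  distr="logit").fit(method="bfgs", disp=False)

# Models 1 and 3 use the same observations, so their AICs can be compared
print(m1.aic, m3.aic)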
Tables 2, 3, and 4 report ordered logit results for these three models, where the dependent variable is “If Blacks would only try harder, they would be just as well off as Whites.” First, ideology, gender, age, education, low income, and Republican are all statistically significant and in the expected direction. Moving beyond these traditional predictors, two findings are relevant to the ROI discussion. First, the AIC for the ROI “don’t know” model (model 3) is lower than the AIC for the traditional model (model 1), indicating that the former fits slightly better: including the ROI variables improves the statistical model of racial attitudes. Second, although we cannot compare AIC scores for models 2 and 3 because they are estimated on different samples (model 2 drops the “don’t know” respondents), there appears to be no ROI effect in model 2: the coefficient for perceived White is statistically insignificant. Model 3, by contrast, indicates that perceived White is statistically significant, whereas perceived Black is not. This indicates that White respondents who perceived their interviewer to be White give more racially conservative responses relative to their “don’t know” counterparts, who report more liberal responses.5
4. We initially included a dummy indicator for an actually White interviewer, but this variable was not statistically significant, and its exclusion does not change the substantive findings.
5. We ran a version of model 3 in which we treat perceived Black as the comparison category to examine whether perceived White and perceived Black are statistically different from one another. The two are not statistically different for racial attitude items one and three, but they are statistically different for item two (“Blacks are more violent than Whites”).
Table 2. Traditional Model without ROI Variables

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.257        0.084       3.072
Female                      -0.608        0.212      -2.862
Age                          0.018        0.007       2.575
Education                   -0.419        0.142      -2.947
Income <$40K                 0.666        0.357       1.868
Income $40K-$60K             0.049        0.337       0.144
Income $60K-$100K           -0.198        0.311      -0.637
Income Missing               0.444        0.342       1.296
Democrat                    -0.114        0.259      -0.442
Republican                   0.471        0.270       1.744

Intercepts
0|1                          0.710        0.538       1.318
1|2                          2.090        0.549       3.805
2|3                          3.507        0.569       6.166

Model Fit: AIC = 889.539; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): If blacks would only try harder, they would be just as well off as Whites.”
Table 3. Traditional ROI Model Without “Don’t Know”

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.261        0.103       2.528
Female                      -0.716        0.257      -2.785
Age                          0.022        0.009       2.482
Education                   -0.661        0.174      -3.801
Income <$40K                 0.222        0.418       0.531
Income $40K-$60K            -0.100        0.402      -0.249
Income $60K-$100K           -0.408        0.365      -1.119
Income Missing               0.662        0.440       1.507
ROI Perceived White          0.360        0.307       1.171
Democrat                     0.082        0.306       0.267
Republican                   0.705        0.340       2.072

Intercepts
0|1                          0.629        0.668       0.941
1|2                          2.143        0.681       3.148
2|3                          3.648        0.706       5.167

Model Fit: AIC = 620.799; N = 247

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): If blacks would only try harder, they would be just as well off as Whites.”
Table 4. ROI Model With “Don’t Know” Included

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.269        0.085       3.174
Female                      -0.616        0.213      -2.888
Age                          0.018        0.007       2.507
Education                   -0.407        0.143      -2.851
Income <$40K                 0.674        0.359       1.877
Income $40K-$60K             0.129        0.340       0.379
Income $60K-$100K           -0.181        0.313      -0.578
Income Missing               0.618        0.350       1.764
ROI Perceived Black          0.223        0.381       0.585
ROI Perceived White          0.593        0.223       2.658
Democrat                    -0.148        0.260      -0.569
Republican                   0.442        0.273       1.620

Intercepts
0|1                          1.126        0.571       1.974
1|2                          2.522        0.584       4.323
2|3                          3.957        0.604       6.547

Model Fit: AIC = 886.228; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): If blacks would only try harder, they would be just as well off as Whites.”
The next racial attitude item is “Blacks are more violent than Whites.” Using the same approach as the previous analysis, Tables 5-7 report the results, which are similar to the overall findings presented in Tables 2-4. The AIC is lower for the alternative ROI model (model 3) than for the traditional model (model 1). However, unlike for the first item, the perceived White variable is statistically significant in the traditional ROI model, indicating that respondents who perceive their interviewer to be White answer more conservatively than those who perceive their interviewer to be Black. Model 3 reports results similar to Table 4: perceived White is once again significant, but perceived Black is not. We also ran this model with perceived Black as the comparison group (as in model 2) and again find a significant coefficient for perceived White, with a larger coefficient and a higher t-value than in the traditional ROI model (see Appendix B). Again, model 3 appears to be the preferred model.
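The comparison-category swap described above (and reported in Appendix B) amounts to dropping a different perceived-ROI dummy from the design matrix. A minimal, self-contained sketch, with a hypothetical perceived_roi column, is:

import pandas as pd

perceived = pd.Series(["White", "Black", "DK/Refused", "White", "DK/Refused"],
                      name="perceived_roi")
dummies = pd.get_dummies(perceived, prefix="roi")

# Model 3 in the text: "don't know" omitted, keep the White and Black dummies
x_dk_reference = dummies[["roi_White", "roi_Black"]]

# Appendix B re-estimation: perceived Black omitted, keep the White and DK dummies
x_black_reference = dummies[["roi_White", "roi_DK/Refused"]]

print(x_dk_reference, x_black_reference, sep="\n\n")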
Table 5. Traditional Model without ROI Variables

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.229        0.086       2.661
Female                      -0.579        0.226      -2.563
Age                          0.031        0.008       4.040
Education                   -0.267        0.146      -1.833
Income <$40K                 0.209        0.381       0.548
Income $40K-$60K            -0.185        0.356      -0.520
Income $60K-$100K           -0.049        0.328      -0.150
Income Missing              -0.061        0.369      -0.166
Democrat                    -0.084        0.272      -0.310
Republican                  -0.149        0.282      -0.529

Intercepts
0|1                          2.009        0.579       3.472
1|2                          3.580        0.601       5.960
2|3                          4.882        0.636       7.679

Model Fit: AIC = 767.401; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Blacks are more violent than Whites.”
Table 6. Traditional ROI Model Without “Don’t Know”

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.209        0.104       2.007
Female                      -0.684        0.271      -2.523
Age                          0.031        0.010       3.269
Education                   -0.327        0.176      -1.855
Income <$40K                 0.113        0.453       0.249
Income $40K-$60K             0.003        0.425       0.008
Income $60K-$100K            0.138        0.389       0.354
Income Missing              -0.059        0.483      -0.123
ROI Perceived White          0.657        0.347       1.893
Democrat                     0.044        0.317       0.138
Republican                   0.105        0.356       0.294

Intercepts
0|1                          2.485        0.742       3.349
1|2                          4.010        0.770       5.208
2|3                          5.233        0.805       6.500

Model Fit: AIC = 552.968; N = 247

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Blacks are more violent than Whites.”
Table 7. ROI Model With “Don’t Know” Included

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.242        0.087       2.778
Female                      -0.601        0.228      -2.634
Age                          0.034        0.008       4.260
Education                   -0.251        0.147      -1.709
Income <$40K                 0.147        0.386       0.382
Income $40K-$60K            -0.137        0.358      -0.383
Income $60K-$100K           -0.027        0.331      -0.082
Income Missing               0.090        0.377       0.240
ROI Perceived Black          0.685        0.426       1.608
ROI Perceived White          0.729        0.238       3.063
Democrat                    -0.129        0.273      -0.470
Republican                  -0.166        0.285      -0.581

Intercepts
0|1                          2.702        0.630       4.292
1|2                          4.304        0.655       6.569
2|3                          5.631        0.692       8.139

Model Fit: AIC = 761.369; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Blacks are more violent than Whites.”
The final question we model is “Most Blacks who are on welfare programs could get a job if they really tried.” With respect to model fit, this item proves the exception: the AIC indicates that the perceived ROI indicators do not improve statistical fit. However, compared to the traditional ROI model (model 2), the alternative ROI specification indicates a statistically significant relationship between ROI and racial attitudes, essentially the same pattern as in our first set of models. In terms of adequately specifying the link between perceived ROI and racial attitudes, the traditional ROI model fails to capture the ROI dynamics revealed by the full specification of model 3, leading to a type II error. Thus, our efforts to account for perceived ROI add to our understanding of how respondents modify their answers to racial attitude questions as a function of interviewer characteristics, and they minimize type II error.
Table 8. Traditional Model without ROI Variables

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.182        0.081       2.263
Female                      -0.408        0.208      -1.963
Age                          0.007        0.007       1.057
Education                   -0.769        0.148      -5.205
Income <$40K                -0.339        0.362      -0.938
Income $40K-$60K            -0.360        0.329      -1.095
Income $60K-$100K           -0.165        0.301      -0.547
Income Missing              -0.130        0.341      -0.382
Democrat                    -0.252        0.253      -0.995
Republican                   0.608        0.269       2.262

Intercepts
0|1                         -1.913        0.539      -3.550
1|2                         -0.386        0.531      -0.726
2|3                          1.249        0.535       2.332

Model Fit: AIC = 931.825; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Most blacks who are on welfare programs could get a job if they really tried.”
Table 9. Traditional ROI Model Without “Don’t Know”

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.136
Female                      -0.429
Age                          0.018
Education                   -0.868
Income <$40K                -0.399
Income $40K-$60K            -0.759
Income $60K-$100K           -0.100
Income Missing              -0.033
ROI Perceived White         -0.023
Democrat                    -0.156
Republican                   1.048

Intercepts
0|1                         -1.733        0.679      -2.552
1|2                         -0.090        0.670      -0.134
2|3                          1.657        0.678       2.444

Model Fit: AIC = 636.365; N = 247

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Most blacks who are on welfare programs could get a job if they really tried.”
Table 10. ROI Model With “Don’t Know” Included

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.189        0.081       2.339
Female                      -0.405        0.208      -1.944
Age                          0.007        0.007       1.040
Education                   -0.757        0.148      -5.114
Income <$40K                -0.348        0.362      -0.959
Income $40K-$60K            -0.325        0.331      -0.983
Income $60K-$100K           -0.146        0.301      -0.484
Income Missing              -0.010        0.346      -0.028
ROI Perceived Black          0.205        0.384       0.534
ROI Perceived White          0.409        0.215       1.904
Democrat                    -0.273        0.253      -1.081
Republican

Intercepts
0|1                         -1.624        0.566      -2.867
1|2                         -0.084        0.562      -0.150
2|3                          1.560        0.568       2.747

Model Fit: AIC = 932.172; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Most blacks who are on welfare programs could get a job if they really tried.”
Finally, we present logistic regression results in Table 11, which model respondents’ propensity to give an ROI item non-response. Two significant predictors emerge. First, respondents with a Black interviewer are much more likely to give a non-response than their counterparts with a White interviewer. Second, there is marginal evidence that respondents who fail to report their annual income are more likely to give an ROI item non-response (p = .10). Taken together, these results suggest that ROI item non-response is not random and is in fact driven by characteristics of both the respondent (personal caution, as measured by refusal to report income) and the interviewer (race).
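A minimal sketch of the Table 11 specification follows: a logit model of ROI item non-response (1 = don't know/refused) on the same covariates plus an indicator for an actually Black interviewer. It is not the authors' code; the data frame is a synthetic placeholder and the column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 357
df = pd.DataFrame({
    "roi_dk":             rng.integers(0, 2, n),   # 1 = don't know / refused ROI
    "ideology":           rng.integers(0, 7, n),
    "female":             rng.integers(0, 2, n),
    "age":                rng.integers(18, 97, n),
    "education":          rng.integers(0, 3, n),
    "inc_low":            rng.integers(0, 2, n),
    "inc_low_med":        rng.integers(0, 2, n),
    "inc_med":            rng.integers(0, 2, n),
    "inc_missing":        rng.integers(0, 2, n),
    "int_actually_black": rng.integers(0, 2, n),
    "democrat":           rng.integers(0, 2, n),
    "republican":         rng.integers(0, 2, n),
})

covariates = ["ideology", "female", "age", "education", "inc_low", "inc_low_med",
              "inc_med", "inc_missing", "int_actually_black", "democrat", "republican"]

X = sm.add_constant(df[covariates])            # intercept, as in Table 11
res = sm.Logit(df["roi_dk"], X).fit(disp=False)
print(res.summary())
print("AIC:", res.aic)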
Table 11. ROI Don't Know Model

                         Coefficient   Std. Error   T-Value
Intercept                   -1.953        0.662      -2.950
Ideology (Lib - Cons)        0.021        0.095       0.216
Female                      -0.028        0.249      -0.111
Age                          0.007        0.008       0.884
Education                    0.164        0.167       0.979
Income <$40K                -0.145        0.451      -0.322
Income $40K-$60K             0.377        0.401       0.942
Income $60K-$100K            0.166        0.378       0.440
Income Missing               0.653        0.402       1.625
Int. Actually Black          0.652        0.240       2.710
Democrat                    -0.043        0.312      -0.138
Republican                  -0.058        0.318      -0.184

Model Fit: AIC = 447.97; N = 357

“What is my race?” 0 = not “Don’t Know/Refused”; 1 = “Don’t Know/Refused”
Discussion
Scholars of public opinion have long known that social desirability, unit and item non-response bias, and interviewer effects can influence survey data. This issue of non-random measurement error is serious, and many approaches have been introduced to address it (see Schuman and Converse 1971; Anderson, Silver, and Abramson 1988a, 1988b; Bollen 1989). Much work examines whether the race of the telephone interviewer leads respondents to give socially desirable responses. Davis (1997a, 1997b) finds that race of interviewer effects exist with respect to the racial attitudes of Blacks. Anderson, Silver, and Abramson (1988a) also find that race of interviewer effects can go beyond racial attitude questions to reported voting, political interest, and actual voting among African Americans. But no one has satisfactorily answered whether ROI “don’t know” responses affect racial attitude items.
We find that 37% of our respondents fail to identify the race of their interviewer, indicating widespread uncertainty or hesitancy among respondents on this sensitive question. In general, when modeling racial attitudes, including ROI “don’t know” respondents tends to improve model fit, although this may vary with the content of the racial attitude item. It is important to note that racial attitude items differ: some elicit strong ROI effects while others may not. Thus, researchers should check for ROI effects (including “don’t know” responses) when they analyze racial attitudes.
Another finding, specific to our data, is that perceived ROI has no statistical relationship to racial attitudes for two of the three items we measured under the traditional ROI model, where “don’t know” respondents are dropped. This leads to type II errors and ultimately biases the analysis of racial attitude items. Indeed, ROI becomes statistically related to the racial attitude items only when ROI “don’t know” respondents are included. Finally, we modeled ROI item non-response and found that it is not random but is driven by characteristics of both the respondent (personal caution, as measured by refusal to report income) and the interviewer (race). In the final analysis, it appears that the presence of a Black interviewer may lead already cautious or private respondents to hedge their answers on racial attitude items, thereby enhancing social desirability bias. This problem is largely corrected, we submit, by including ROI “don’t know” responses when modeling racial attitudes.
Appendix A: Variable Coding
Dependent variables:
(A) If blacks would only try harder, they would be just as well off as Whites.
(B) Blacks are more violent than Whites.
(C) Most blacks who are on welfare programs could get a job if they really tried.
Each variable is coded from 0 to 3, where 0 = strongly disagree and 3 = strongly agree.

Independent variables:
ROI Perceived Black: 0 = R did not perceive I to be Black, 1 = R perceived I to be Black
ROI Perceived White: 0 = R did not perceive I to be White, 1 = R perceived I to be White
ROI Perceived DK: 0 = R perceives I to be White or Black, 1 = R answers DK (OMITTED CATEGORY)

Independent variables [controls]:
Ideology: 0 = Very liberal; 1 = Somewhat liberal; 2 = Lean liberal; 3 = Moderate; 4 = Lean conservative; 5 = Somewhat conservative; 6 = Very conservative; Don't Know/Refused coded as 3
Democrat: 0 = Not Democrat, 1 = Democrat
Republican: 0 = Not Republican, 1 = Republican
Independent/Other: 0 = Not Independent/Other, 1 = Independent/Other (OMITTED CATEGORY)
Female: Dummy variable, 1 = Female
Age: Continuous, ranges from 18 to 96
Education: 0 = Less than high school; 1 = Some college; 2 = College grad/Postgrad
Income Low: Income dummy, 1 = Less than $40,000
Income Low-Med: Income dummy, 1 = $40,001 - $60,000
Income Med: Income dummy, 1 = $60,001 - $100,000
Income High: Income dummy, 1 = Over $100,000 (OMITTED CATEGORY)
Income Missing: Income dummy, 1 = Respondent did not disclose income
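For illustration, the sketch below shows one way the Appendix A coding could be applied to a raw survey file. It is not taken from the original study; the raw column names (party, income_bracket, birth_year, perceived_roi) and the example rows are hypothetical.

import pandas as pd

raw = pd.DataFrame({
    "party":          ["Democrat", "Republican", "Independent"],
    "income_bracket": ["Less than $20,000", "$60,000 to less than $80,000", "Refused"],
    "birth_year":     [1950, 1972, 1988],
    "perceived_roi":  ["White", "Don't Know/Refused", "Black"],
})

coded = pd.DataFrame(index=raw.index)
coded["democrat"] = (raw["party"] == "Democrat").astype(int)
coded["republican"] = (raw["party"] == "Republican").astype(int)   # Independent/Other is the omitted party category
coded["age"] = 2008 - raw["birth_year"]                            # survey fielded in 2008

low_brackets = ["Less than $20,000", "$20,000 to less than $40,000"]
coded["inc_low"] = raw["income_bracket"].isin(low_brackets).astype(int)   # Income < $40K
coded["inc_missing"] = (raw["income_bracket"] == "Refused").astype(int)

coded["roi_perceived_white"] = (raw["perceived_roi"] == "White").astype(int)
coded["roi_perceived_black"] = (raw["perceived_roi"] == "Black").astype(int)
# "Don't Know/Refused" is the omitted perceived-ROI category
print(coded)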
Appendix B
Table 12. ROI Model With “Don’t Know” Included (Perceived Black Comparison Category)

                         Coefficient   Std. Error   T-Value
Ideology (Lib - Cons)        0.249        0.087       2.855
Female                      -0.602        0.228      -2.644
Age                          0.031        0.008       4.029
Education                   -0.251        0.147      -1.713
Income <$40K                 0.185        0.385       0.479
Income $40K-$60K            -0.127        0.357      -0.355
Income $60K-$100K           -0.032        0.330      -0.097
Income Missing               0.078        0.375       0.207
ROI Perceived DK             0.198        0.367       0.539
ROI Perceived White          0.745        0.347       2.145
Democrat                    -0.147        0.272      -0.538
Republican                  -0.204        0.286      -0.716

Intercepts
0|1                          2.593        0.654       3.967
1|2                          4.189        0.678       6.181
2|3                          5.508        0.712       7.741

Model Fit: AIC = 763.592; N = 357

“Please tell me whether you strongly agree (3), somewhat agree (2), somewhat disagree (1), or strongly disagree (0): Blacks are more violent than Whites.”
Appendix C
The sampling frame is all Washington State registered voters prior to the 2008 general election. A sample of registered voters was drawn from the Washington State voter file via stratified random sampling. The survey was conducted between October 20 and November 4, 2008, with a response rate of 19.3% (AAPOR RR4 definition).
Dependent Variables Question Wording
The next statements are about life in America today. As I read each one, please tell me
whether you strongly agree, somewhat agree, somewhat disagree, or strongly disagree.
If blacks would only try harder, they would be just as well off as whites.
Blacks are more violent than whites.
Most blacks who are on welfare programs could get a job if they really tried.
Independent Variables Question Wording
And finally, what is my race?
(DON'T ASK) Gender
What is the highest level of education you completed? Just stop me when I read the
correct category -- Grades 1-8, some high school, high school graduate, some college or
technical school, college graduate, or post-graduate.
Generally speaking, do you think of yourself as a Democrat, a Republican, an
independent, or what?
When it comes to politics, do you usually think of yourself as a Liberal, a Conservative, a
Moderate, or haven't you thought much about this?
In what year were you born?
What was your total combined household income in 2007, before taxes? This question is
completely confidential and just used to help classify the responses. Just stop me when I
read the correct category. Less than $20,000; $20,000 to less than $40,000; $40,000 to
less than $60,000; $60,000 to less than $80,000; $80,000 to less than $100,000; $100,000
to less than $150,000; More than $150,000.
References
Alvarez, R.M. and Brehm, J. (2002). Hard choices, easy answers: Values, information,
and American public opinion. Princeton, NJ: Princeton University Press.
Alvarez, R.M. and Wilson, C. (1994). Uncertainty and political perceptions. Journal of
Politics, 56(3): 671-688.
Anderson, B.A., Silver, B.D., & Abramson, P. R. (1988a). The effects of race of the
interviewer on measures of electoral participation by blacks in SRC national
election studies. Public Opinion Quarterly, 52(1):53-83.
Anderson, B.A., Silver, B.D., & Abramson, P. R. (1988b). The effects of race of the
interviewer on race-related attitudes of black respondents in SRC/CPS national
election studies. Public Opinion Quarterly. 52(3), 289-324
Bollen, Ken. (1989). Structural equations with latent variables. New York: John Wiley
and Sons.
Bradburn, N., Sudman, S. and Associates. (1979). Improving interview method and questionnaire design: Response effects to threatening questions in survey research. San Francisco: Jossey-Bass.
Bradburn, N., Sudman, S. (1988). Polls and surveys: understanding what they tell us. San
Francisco: Jossey-Bass.
Brehm, J. (1993). The phantom respondents: Opinion surveys and political
representation. Ann Arbor: University of Michigan Press.
Callegaro, M., F. De Keulenaer, J.A. Krosnick, and R.P. Daves. (2005). Interviewer
effects in a RDD telephone pre-election poll in Minneapolis 2001: An analysis of
the effects of interviewer race and gender. American Association of Public
Opinion Research – Section on Survey Research Methods. 3815-3820.
Catania, J.A., Binson, D., Canchola, J., Pollack, L.M., Hauck, W. & Coates, T.J. (1996).
Effects of interviewer gender, interviewer choice, and item wording on
responses to questions concerning sexual behavior. Public Opinion Quarterly.
60(3), 345-375.
Cotter, P.R., Cohen, J., & Coulter, P.B. (1982). Race of interviewer effects in telephone
interviews. Public Opinion Quarterly. 46(2), 278-284.
Davis, D. (1997a). The direction of race of interviewer effects among African-Americans: Donning the black mask. American Journal of Political Science. 309-322.
Davis, D. (1997b). Nonrandom measurement error and race of interviewer effects among
African Americans. Public Opinion Quarterly. 183-207.
Davis, D.W. and Silver, Brian D. (2003). Stereotype threat and race of interviewer effects
in a survey on political knowledge. American Journal of Political Science. 47(1)
33-45.
Dawson, M.C. (2001). Black visions: The roots of contemporary African-American
Political Ideologies. Chicago: The University of Chicago Press.
Finkel, S.E., Guterbock, T.M., & Borg, M.J. (1991). Race-of-interviewer effects in a
preelection poll in Virginia. Public Opinion Quarterly. 55(3), 313-330.
Ford, K. and Norris, A. (1997). Effects of Interviewer Age on Reporting of
Sexual and Reproductive Behavior of Hispanic and African American Youth.
Hispanic Journal of Behavioral Sciences. 19(3): 369-376.
Giljam, M. and Granberg, D. (1993). Should we take don’t know for an answer? Public
Opinion Quarterly 57: 348-57.
Hatchett, S. and Schuman, H. (1975). White respondents and race-of-interviewer effects.
Public Opinion Quarterly. 39(4), 523-527.
Jacoby, B. (1982). Unfolding the party identification scale: Improving the measurement
of an important concept. Political Methodology. 8(2): 33-59.
Kaiser, H.F. (1960). The application of electronic computers to factor analysis.
Educational and psychological measurement. 20: 141-151.
Krosnick, J.A. (1991). Response strategies for coping with the cognitive demands of
attitude measures in surveys. Applied Cognitive Psychology. 5:213-36.
Krosnick, J., Holbrook, A.L. Berent, M.K., Carson, R.T., Hanemann, W.M, Kopp, R.J.,
Mitchell, R.C., Presser, S., Ruud, P.A., Smith, V.K., Moodey, W.R., Green, M.C.,
Conaway, M. (2002). The impact of “no opinion” response options on data
quality: non-attitude reduction or an invitation to satisfice? Public Opinion
Quarterly. 66:371-403.
Krysan, M. and M. P. Couper. (2003). Race in the Live and the Virtual Interview:
Racial Deference, Social Desirability, and Activation Effects in Attitude Surveys.
Social Psychology Quarterly. 66(4): 364-383.
Labov, W. (1972). Language in the inner city: Studies in the Black English vernacular.
Philadelphia: University of Pennsylvania Press.
Luelsdorf, P. A. (1975). A segmental phonology of Black English. The Hague: Mouton.
Marquis, K. H. and Cannell, C.F. (1969). A study of interviewer-respondent interaction in
the urban employment survey. Survey Research Center. The University of
Michigan. 1969.
Pauker, K, and Ambady N. (2009). Multiracial faces: How categorization affects memory
at the boundaries of race. Journal of Social Issues. 65(1), 69-86.
Purnell, T., Idsardi, W., and Baugh, J. (1999). Perceptual and phonetic experiments on
American English dialect identification. Journal of Language and Social
Psychology. 18(1): 10-30.
Reese, S. D., Danielson, W.A., Shoemaker, P., Chang, T., and Hsu, H. (1986). Ethnicity-of-interviewer effects among Mexican-Americans and Anglos. Public Opinion
Quarterly. 50: 563-72.
Rubin, D. B. (1987). Multiple Imputation for Nonresponse in Surveys. Hoboken, NJ: John
Wiley and Sons, Inc.
Schaeffer, N. C., J. Dykema and D. W. Maynard. (2010). Interviewers and Interviewing.
In P. V. Marsden and J.D. Wright (Eds) Handbook of Survey Research. 2nd ed.
Bingley, UK: Emerald Group Publishing
Schuman, H. and Converse, J.M. (1971). The effects of black and White interviewers on
black responses in 1968. Public Opinion Quarterly. 35(1), 44-68.
Schuman, H. and Kalton, G. (1985) Survey methods. Chapter 12 in G. Lindzey & E.
Aronson (Eds.) Handbook of social psychology (3rd edition) (Vol. I) (pp. 635-698)
New York, NY: Random House.
Sudman, S. and Bradburn, N.M. (1974). Response effects in surveys. Chicago: Aldine.
Thurston, L.L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.
Tourangeau, R., Rips, L.J., and Rasinski, K.A. (2000). The psychology of survey
response. Cambridge: Cambridge University Press.
Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and
biases. Science. 185: 1124-1131.
Weeks, M.F. and Moore, R. (1981). Ethnicity-of-interviewer effects on ethnic
respondents. Public Opinion Quarterly. 45(2), 245-249.
Weisberg, H.F. and Fiorina, M.P. (1980). Candidate preference under uncertainty: An
expanded view of rational voting. In John C. Pierce and John L. Sullivan, eds.,
The electorate reconsidered. Beverly Hills, California: Sage: 237-256.