Tale of Two Questionnaires:
Good People Thwarted by Bad Questionnaire Design
FAQ from SSI's March 2016 webinar on new research into the impact of questionnaire design errors, which pitted a 'good' questionnaire design against a 'bad' one.
Q: What's the base size of this survey? Date? Sample? Was it one questionnaire or a series?
A: For this webinar, we collected 6,374 responses across three countries, split 50:50 between the good and the bad survey. The study was fielded March 12-14 in Australia, the United Kingdom and the United States.

Q: What is your stance on bolding and/or underlining specifics in questions?
A: Bolding and/or underlining are good ways to emphasize key points, but always remember that questions are read at great speed; bolding is not always as bold as you might think. SHOUTING is a good way to emphasize that you are suddenly NOT talking in the positive anymore.

Q: If you are using a pilot test to discover question problems, is a debrief with each respondent critical?
A: If you use real respondents to check your questionnaire for comprehension and completeness, then you might want to interview them personally (maybe by phone?) to get feedback; you don't need to do too many. Otherwise, you have to adjust your survey to allow space for feedback and then analyze that feedback, which may create more questions than it answers. The alternative is to test internally (assuming you have a wide range of people to get opinions from).

Q: Do you also design and check for accessibility in the questions? For example, color contrast? Do you check the reading level of the questions?
A: Not specifically, but we like the thought of doing a readability check (a rough sketch of one follows this set of questions). For the colors and the like, we rely on standard templates; we're not sure if these have ever been assessed.

Q: People will not drop out of the survey: is this an effect caused or increased by incentives? Have you validated this assumption in surveys where incentives are not offered?
A: Our research shows that even when presented with a longer survey, the abandon rate does not increase much. Click here to read the white paper.
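
To make the readability check mentioned above concrete, here is a minimal Python sketch that scores a question's wording with the Flesch-Kincaid grade-level formula. The vowel-group syllable counter is a crude stand-in for a real one, and the sample question is ours, not from the webinar.

    import re

    def count_syllables(word):
        # Crude heuristic: one syllable per run of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        # Flesch-Kincaid grade level:
        # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    question = "Imagine you visit a dermatologist in winter. What would the report say?"
    print(f"Approximate reading grade: {fk_grade(question):.1f}")

A question scoring far above your audience's typical reading grade is a candidate for simpler wording.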
Q: How do you get people engaged to answer an "open end" question? For example, if you ask a question like "Describe your skin in the winter" and they just answer: "dry."
A: Gamification techniques work well here. You need to think about the impact of rules. For example, word the question as: "In no more than 50 words, please describe your skin condition in winter." Or write in scenarios, like: "Imagine you visit a dermatologist in winter. They write a 50-word report on your skin condition. What would it say?" People like to "play by the rules," and scenarios help with their imagination.

Q: Is there a limit to the number of answers for one question?
A: Not technically, but practically you would struggle to find a question with so many different answers to it!

Q: What is multi-code?
A: A question where more than one answer can be selected from a list.

Q: What do you mean by "too high" and "too low"?
A: We were referring to the results of the multi-code question versus the yes/no question.

Q: Regarding scales, is there a difference between asking degree of positive, as you did for "likely," and a scale with 3 positive and 3 negative points (likely to unlikely)?
A: Only some constructs are bipolar in nature (i.e., have an "un-" side to them). What, for example, is the difference between "slightly unlikely" and "slightly likely"? In both cases, you have some (small) chance of doing the thing. "Likely" is a unipolar construct running from "definitely will not do it"/"not at all likely to do it" through to "definitely will do it."

Q: What about the order of agree/disagree: should agree be on the left, or should disagree be on the left?
A: The textbook tells us that there are no differences in data outcomes whichever way the scale is presented, but that agree-on-the-left is answered faster than agree-on-the-right because respondents expect that order. The effect is hard to find in practice, however. You are probably safe to show both directions to balance out any potential bias, but the direction must be kept consistent for each respondent (see the sketch after this set of questions).

Q: Is it ever advisable to add a "don't know" as well as a "care not to answer" category for scales, or does that seem to be overkill?
A: Please read our POV on Don't Know Response Option Best Practices.

Q: Was there any research done on 5-point versus 7-point scales?
A: Lots, all in academia. Here is a link to a discussion and a set of references.
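
To illustrate the "consistent per respondent" rule above, here is a minimal sketch of one way a survey tool might pin scale direction to a respondent. The hash-parity scheme and the respondent ID are our illustrative assumptions, not a description of SSI's platform.

    import hashlib

    AGREE_SCALE = [
        "Agree strongly", "Agree slightly", "Neither agree nor disagree",
        "Disagree slightly", "Disagree strongly",
    ]

    def scale_for(respondent_id):
        # Hash the respondent ID so the same person always sees the same
        # direction, while directions split roughly 50:50 across the sample.
        first_byte = hashlib.sha256(respondent_id.encode()).digest()[0]
        return AGREE_SCALE if first_byte % 2 == 0 else AGREE_SCALE[::-1]

    # Every agree/disagree question shown to respondent "r-1017" (a made-up
    # ID) renders in the same direction throughout the survey.
    print(scale_for("r-1017"))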
Q: If you are trying to measure likelihood to recommend, what do you suggest? Is it preferable to use a 10-point scale with two anchors?
A: We prefer fully labelled scales rather than numbers, since numbers are subject to cultural bias. We therefore struggle with 10-point scales, as there aren't 10 different words to describe "recommend." That said, if you are going to use a numerical scale, you must anchor it.
Q: In general, what is your POV on a middle ground where respondents are asked to give their best estimate or best guess?
A: Maybe get two measures: the estimate and the level of "sureness." Then take the distribution of the "very sure" people and exclude from the data anyone outside 2 SD of that mean. Doing so means you get maximum data with maximized surety (a sketch of this trim follows).
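
As a worked illustration of that 2 SD trim, here is a minimal Python sketch; the (estimate, sureness) values are made up for the example.

    from statistics import mean, stdev

    # Hypothetical (estimate, sureness) pairs from respondents.
    responses = [
        (120, "very sure"), (95, "very sure"), (110, "very sure"),
        (105, "very sure"), (400, "not sure"), (980, "fairly sure"),
    ]

    # Build the reference distribution from the "very sure" respondents only.
    anchor = [est for est, sure in responses if sure == "very sure"]
    mu, sd = mean(anchor), stdev(anchor)

    # Keep any estimate within 2 SD of the anchor mean, whatever its sureness.
    kept = [est for est, _ in responses if abs(est - mu) <= 2 * sd]
    print(f"anchor mean = {mu:.1f}, SD = {sd:.1f}, kept = {kept}")

Here the two implausible estimates (400 and 980) fall outside the band defined by the confident respondents and are dropped.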
Q: Please define "construct specific" more clearly.
A: "Construct specific" means that the answers are in terms of the same "thing" (construct) that is being asked about. For example, if you want to ask "How happy are you?" the answers are naturally "very happy," "not at all happy," etc. An alternative is to ask, "Would you agree or disagree that you are very happy?" Then the answers run from "agree strongly" to "disagree strongly," and the question arises: what does it actually mean, in terms of your happiness, when you tell me "I disagree slightly that I am very happy"?
Q: Is there a good, user-friendly book for how to write questionnaires? Please suggest one.
A: For a straightforward and comprehensive guide: Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research (Market Research in Practice) by Ian Brace. That book is a little light on online survey considerations; if you want to get very academic while learning from a master of self-completion surveys, check out Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method by Don A. Dillman and Jolene D. Smyth.
About SSI…
SSI is the premier global provider of data solutions and technology for consumer and business-to-business survey
research, reaching respondents in 100+ countries via Internet, telephone, mobile/wireless and mixed-access
offerings. SSI staff operates from 30 offices in 20 countries, offering sample, data collection, CATI, questionnaire
design consultation, programming and hosting, online custom reporting and data processing. SSI’s 3,600
employees serve more than 2,500 clients worldwide. Visit SSI at www.surveysampling.com.
Offices worldwide | [email protected] | surveysampling.com