
Usability Testing of Web Surveys: Key issues in the
design of web questionnaires
Keywords: Usability Testing; Web Survey; Integrated System for Data Collection;
Questionnaire Design; Respondent Burden; LFS
1. INTRODUCTION
General population surveys in official statistics are undergoing major changes. The need
for fast and cost-efficient data collection, as well as declining response rates, is
challenging traditional modes of data collection (CATI, CAPI, etc.). To meet these
demands, more and more National Statistical Institutes (NSIs) are implementing web
surveys (CAWI). Although many NSIs already use CAWI for business surveys, its use
for general population surveys is rather new.
As a mode of data collection in official statistics, CAWI has the potential to be more
efficient and cost-saving than CAPI and CATI. Moreover, CAWI gives respondents
greater flexibility in terms of when and where they answer the questionnaire, which can
lead to increased compliance and higher response rates.
However, CAWI also has its downsides. Respondents answering a self-administered web
questionnaire may experience a higher respondent burden. While completing a web
survey, respondents not only face the cognitive demands of understanding and answering
the questions; they must also perform the task on their own devices (computer, tablet,
mobile phone), which constitutes an additional cognitive burden. Hence, it is up to the
NSIs to develop data collection tools with a focus on usability in order to decrease
respondents’ cognitive burden. Usability testing is regarded as the gold standard for
evaluating user-friendliness.
In our planned contribution, we want to give insight into the usability testing of an
in-house data collection system at Statistics Austria and to offer suggestions for
improving the usability of web questionnaires at NSIs.
2. METHODS
Statistics Austria has implemented an in-house integrated system for data collection
(STATsurv). STATsurv was developed on the basis of general principles of questionnaire
design. Usability testing was used to evaluate the web questionnaires from the
respondents’ perspective; future design and usability choices should build on its results.
One aim of this study was to gather real-life data rather than to observe a standardized
(and therefore unrealistic) situation. For this purpose, we conducted the usability testing
at respondents’ homes, and respondents were asked to use their own devices and their
preferred Internet browsers. Another aim was to ensure a rather heterogeneous sample
with regard to educational level, age, and computer experience in order to represent the
wide variety of future respondents. In total, 18 persons took part in the usability testing.
We used different qualitative and quantitative methods to test the user-friendliness of the
web questionnaire. At the beginning of the usability test, respondents received an
advance letter (aviso) that introduced them to the topic and asked them to start the web
questionnaire on their own (finding the homepage, logging in). After respondents had
logged in, the web questionnaire started. Respondents had to answer questions from the
Labour Force Survey (LFS), presented with different question types (radio buttons,
lookup, dropdown, etc.). After finishing the web questionnaire, respondents completed a
standardized online debriefing, and additional cognitive interviews were held. While
respondents answered the web questionnaire, their reactions were videotaped, and the
cognitive interviews were recorded.
3. RESULTS
In this section we focus on the key results of the usability testing and derive practical
suggestions for improving the user-friendliness of web questionnaires.
3.1. Finding the webpage and login
The importance of these steps is obvious: respondents who are not able to find the
webpage or to log in cannot complete the questionnaire and hence contribute to higher
non-response rates.
The web address was presented within the invitation letter (e.g. www.statistik.gv.at/XY),
and respondents were asked to open the webpage in order to log in. Four of the 18
respondents had problems identifying the web address correctly (mean age = 52 years).
Although we used Arial 11 pt., we now recommend using larger characters.
A further five persons did not type the web address into the address bar of their browser;
instead, they “googled” it (mean age = 56 years, rather low educational level). This
returned many results (such as onlinequestionnaire.org). Not all of these respondents
were able to distinguish the presented addresses from the one they were searching for,
and so they tried to log in at other questionnaire providers. Accordingly, we recommend
supplementing the invitation letter with a login description [1]. Moreover, if there are no
security concerns, respondents should be sent an invitation email containing a hyperlink
and their login details.
50% of the respondents had problems with the login process. Logging in required a login
name (e.g. [email protected]) and an 8-character password including
special characters. Most of the respondents had problems entering the password because
the entered characters were masked (***). We recommend providing an option to make
the password visible so that respondents can check it (especially if special characters are
used and upper and lower case have to be distinguished).
The usability test further showed that it is not easy to read the login details and type them
at the same time. We recommend supplementing the invitation letter with a card showing
the login details.
3.2. Dropdown
Respondents had to indicate their main activity (MAINSTAT) using a dropdown box.
They could either scroll to see all response options or type the first letter to jump to the
desired entry.
None of the respondents typed a letter to jump to the preferred entry. Although
respondents would have had to scroll down to see all response options, 43.8% of them
did not scroll. Instead, respondents tended to choose the best-“fitting” visible option, an
indication of a measurement error known as the primacy effect.
In the online debriefing we presented the same question with both a dropdown box and
radio buttons and asked respondents which question type they preferred; 52.9% preferred
the radio buttons. Comments included: “I love to see all possible answers at once” and
“it is easier”. Based on these results, we recommend using radio buttons rather than
dropdown boxes, especially if fewer than 15 response options are presented.
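This rule of thumb can be sketched as a simple widget-selection function. The function
name and the threshold of 15 are illustrative only; they reflect our recommendation, not a
fixed standard or any STATsurv code:

```typescript
// Hypothetical sketch: choose the question widget by option count.
// Radio buttons show all options at once; dropdowns hide them behind a click.
function preferredWidget(optionCount: number): "radio" | "dropdown" {
  // Fewer than 15 options: radio buttons keep every answer visible.
  return optionCount < 15 ? "radio" : "dropdown";
}

console.log(preferredWidget(8));  // "radio"
console.log(preferredWidget(40)); // "dropdown"
```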
3.3. Lookup
Respondents had to indicate their job title using the International Standard Classification
of Occupations (ISCO). After three letters were typed, a lookup list was presented. As the
respondent types the job title, the list of possibilities narrows until the respondent picks
the suitable entry or continues typing.
Most of the respondents had no problems using the lookup list. However, 26% of the
respondents did not make use of it: they typed their whole job title without choosing one
of the presented alternatives. This behaviour is likely to trigger error messages and
should therefore be prevented. During the cognitive interviews, 60% of respondents even
indicated that they were surprised when the lookup list appeared.
We recommend giving brief and clear instructions when presenting lookup lists. These
instructions should be part of the question, such as “Please type in the first letters of your
job title. Then select the job title from the list that appears” [2]. Moreover, the closed
question should be complemented by an open text box for those who are unsuccessful
with the lookup list.
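The narrowing behaviour described above can be illustrated with a minimal sketch. The
function name and the sample titles are hypothetical, not STATsurv’s implementation;
the sketch only assumes what the text states, namely that the list opens after three typed
letters and shrinks as typing continues:

```typescript
// Illustrative lookup narrowing: no suggestions until three characters have
// been typed; thereafter, case-insensitive substring matching filters the list.
function narrowLookup(entries: string[], typed: string): string[] {
  if (typed.length < 3) return [];          // list appears only after 3 letters
  const query = typed.toLowerCase();
  return entries.filter(e => e.toLowerCase().includes(query));
}

const jobTitles = ["Statistician", "Statistical assistant", "Station manager"];
console.log(narrowLookup(jobTitles, "st"));     // [] (too short, no list yet)
console.log(narrowLookup(jobTitles, "statis")); // ["Statistician", "Statistical assistant"]
```

Typing the full job title without selecting an entry bypasses this filter entirely, which is
why the open text box fallback mentioned above is needed.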
3.4. Household grid
A simplified grid format for the LFS household variables was tested. Respondents were
asked to indicate the full name, optionally an academic title, the date of birth, and the sex
of each household member (see Figure 1). Respondents had to click the blue “+” button
to add another household member.
Figure 1. Household grid tested in the web questionnaire
The date of birth was rather problematic. Respondents were asked to enter the date in the
format DD.MM.YYYY. However, some respondents did not type a leading 0 for days or
months between 1 and 9, and some wrote “86” to indicate the year of birth. Based on
these results, we recommend presenting the date of birth as three boxes (one each for
day, month, and year), sized accordingly (the largest box, for the year, holding 4 digits)
[3]. Moreover, the boxes should be labelled with symbols to the left or right (DD, MM,
YYYY).
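The three-box recommendation can be sketched as a normalization step behind the
inputs. This is an illustrative sketch under our own assumptions (function name and
policy choices are ours, not STATsurv code): single-digit days and months are
zero-padded automatically, and ambiguous two-digit years are rejected rather than
guessed:

```typescript
// Hypothetical normalization for three date-of-birth boxes (DD, MM, YYYY).
// Returns the canonical DD.MM.YYYY string, or null if the input is invalid.
function normalizeDob(day: string, month: string, year: string): string | null {
  const d = Number(day);
  const m = Number(month);
  if (!Number.isInteger(d) || d < 1 || d > 31) return null;
  if (!Number.isInteger(m) || m < 1 || m > 12) return null;
  if (!/^\d{4}$/.test(year)) return null;     // "86" is ambiguous: reject it
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${pad(d)}.${pad(m)}.${year}`;
}

console.log(normalizeDob("7", "3", "1986")); // "07.03.1986"
console.log(normalizeDob("7", "3", "86"));   // null (two-digit year rejected)
```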
Some respondents overlooked indicating the sex of household members. During the
cognitive interviews, respondents mentioned that the dropdown box used here was not
optimal, as the question was presented differently from the other questions (white boxes
vs. a grey box). Hence, we recommend presenting a white box or using radio buttons.
17% of the respondents did not enter all household members. We recommend first asking
respondents for the number of household members and then automatically presenting one
line per household member in the grid, while still offering the possibility to add further
lines. Moreover, we recommend implementing a check of whether the number of
household members matches the number of lines in the household grid. The presented
symbols (“+” to add a line, “x” to clear a line) and especially their placement should also
be carefully considered.
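The recommended consistency check can be sketched as follows. The data shape and
names are illustrative assumptions, not part of STATsurv: the declared number of
household members is compared with the number of filled grid lines before the
respondent may continue:

```typescript
// Hypothetical household grid line, mirroring the tested fields.
interface GridLine {
  fullName: string;
  dateOfBirth: string;
  sex: string;
}

// Consistency check: does the declared member count match the filled lines?
function householdCountMatches(declared: number, lines: GridLine[]): boolean {
  const filled = lines.filter(l => l.fullName.trim() !== "");
  return filled.length === declared;
}

const grid: GridLine[] = [
  { fullName: "Anna Muster", dateOfBirth: "07.03.1986", sex: "female" },
  { fullName: "",            dateOfBirth: "",           sex: "" },
];
console.log(householdCountMatches(2, grid)); // false: one line is still empty
console.log(householdCountMatches(1, grid)); // true
```

A failing check would prompt the respondent to add the missing members or correct the
declared count, rather than silently accepting an incomplete grid.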
3.5. Instructions and explanations
Instructions and explanations are very important in CAWI, as they replace the
interviewer. However, their importance is often overestimated, since respondents rarely
make use of them.
Only 22% (N = 4) of the respondents made use of the explanations while completing the
web questionnaire. The reasons were as follows: one of them had a problem with a
question; another just wanted to take a brief look and did not need any help; the two
remaining had technical problems. We asked the other respondents why they did not
make use of the explanations. Although most of them knew where to get help (83%),
they said that they did not need any, even though it was observable that they had issues
with some questions.
Taking this into account, it is obvious that important parts of a question should not be
conveyed through instructions or explanations. Key information needed to fully
understand the question’s intent should be placed directly in the question text.
4. CONCLUSIONS
The examples described above illustrate how challenging the implementation of web
surveys can be. Making web questionnaires more user-friendly, and thus reducing
respondent burden, requires careful thought. The design of questions makes a difference
and helps respondents to provide well-formed answers.
Moreover, we think that the implementation of an in-house system for CAWI needs to be
evaluated on the basis of usability testing. In line with this argument, we want to
emphasize the importance of conducting usability tests in “real-life” situations and not
on the premises of the national statistical institute. Otherwise, many observations would
be missed under standardized circumstances.
Usability testing is an invaluable component of the implementation of an in-house
integrated system for data collection.
[1] Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method. New
York: Wiley.
[2] Couper, M. P., & Zhang, C. (2016). Helping respondents provide good answers in
web surveys. Survey Research Methods, 10(1), 49-64.
[3] Christian, L. M., Dillman, D. A., & Smyth, J. D. (2007). Helping respondents get it
right the first time: The influence of words, symbols, and graphics in web surveys.
Public Opinion Quarterly, 71(1), 113-125.