The Advisory Panel on Online Public Opinion Survey Quality
Final Report
June 2008
Published by Public Works and Government Services Canada
June 2008
For more information, please contact
[email protected]
or at 613-995-9837.
This document is also available in French under the title
Le Comité consultatif sur la qualité des sondages en ligne sur l’opinion publique
– Rapport final juin 2008
Catalogue Number: P103-6/2008E-PDF
ISBN: 978-1-100-10766-0
© Her Majesty the Queen in Right of Canada, represented by
the Minister of Public Works and Government Services Canada, 2008
This publication may be reproduced for personal or internal use only
without permission provided the source is fully acknowledged. However,
multiple copy reproduction of this publication in whole or in part for
purposes of resale or redistribution requires the prior written permission
from the Minister of Public Works and Government Services Canada,
Ottawa, Ontario K1A 0S5 or [email protected].
The Advisory Panel on Online Public Opinion Survey Quality
Final Report, June 2008
POR No 263-07
Contract No: EP363-070027/001/CY
Awarded on November 11, 2007
Prepared for:
Public Works and Government Services Canada
June 4, 2008
The executive summary of this report is available in French.
Table of Contents

Executive Summary

Standards and Guidelines for Pre-field Planning, Preparation and Documentation
  Statement of Work
  Proposal Documentation
  Questionnaire Design
  Survey Accessibility
  Pretesting

Standards and Guidelines for Sampling
  General Standard for Sampling Procedures
  Sampling Standard for Probability Surveys
  Nonprobability Surveys
  Guidance on Setting Sample Sizes for Nonprobability Surveys
  Guidance on Currently Appropriate – and Inappropriate – Uses of Nonprobability Surveys for GC Public Opinion Research
  Multi-Mode Surveys
  Attempted Census Surveys

Standards and Guidelines for Data Collection
  Required Notifications to Survey Respondents
  Use of E-mail for Identifying or Contacting Potential Survey Respondents
  Access Panels
  Standards for Access Panel Management
  Standards and Guidelines with Respect to Use of Multiple Panels in the Execution of a Survey
  Incentives/Honoraria
  Fieldwork Monitoring
  Monitoring of Online Survey Fieldwork
  Detecting and Dealing with Satisficing
  Attempted Recontacts
  Validation of Respondents

Standards and Guidelines for Success Rate
  Calculation of “Success Rate”
  Nonresponse Bias Analyses
  Qualified Break-Offs
  Response/Success Rate Targets

Standards and Guidelines for Data Management and Processing
  Coding
  Data Editing/Imputation

Standards and Guidelines for Data Analysis, Reporting and Survey Documentation
  Data Analysis/Reporting
  Inferences and Comparisons
  Nonprobability Samples
  Back-up, Retention & Security of Data
  Survey Documentation
Executive Summary
Introduction
The Department of Public Works and Government
Services Canada (PWGSC) required a panel composed
of practitioners, methodologists and scholars with a
high degree of expertise in the field from the private
sector, the academic sector, Statistics Canada and
Government of Canada departments and agencies
that conduct public opinion research to advise on
standards and guidelines for public opinion surveys
conducted online.
PWGSC’s specific objectives are:
- To use this knowledge to establish requirements for Government of Canada public opinion online survey data quality under the next wave of contracting tools (Standing Offer) planned for 2008
- To provide Government of Canada departments and agencies commissioning online survey research with standard contract requirements that each could choose to incorporate into contracts with public opinion research suppliers
- To provide Government of Canada departments and agencies undertaking internal online survey research with specific benchmark levels of quality indicators

Per the Statement of Work, the scope of the Panel’s enquiry was the following:
- Limited to quantitative surveys, both probability-based and non-probability based; qualitative research is specifically excluded
- Online surveys of the public, business and other populations
- Online surveys conducted by the Government of Canada
- Issues related to the distinction between probability and nonprobability-based surveys and the acceptability of nonprobability-based surveys; this specifically includes issues related to statistical inference
- Issues related to the reporting of results
- Issues related to the assessment of data quality and the measures of success and response to online surveys
- Guidelines related to accessibility and literacy-related issues
- Specific standards and guidelines for statements of work, proposal documentation, questionnaire design and pre-testing as they relate to online research

The work of the Advisory Panel on Online Public Opinion Survey Quality builds on the work of the 2006-2007 Advisory Panel on Telephone Public Opinion Survey Quality. The Online Advisory Panel focused on areas where standards and guidelines specific to online surveys are required; for areas where standards and guidelines apply equally to telephone and online surveys, those recommended by the Telephone Advisory Panel are restated in this report.
The role of the Panel was also to reach consensus
where possible, although this was not an essential
outcome of the work of the Panel.
Method

The Panel consisted of nine members representing the Government of Canada, the market research industry and the fields of social science and business research in the academic community, and was chaired by a representative of PWGSC.

Government of Canada
- Cathy Ladds, Senior Communications Strategist, Research and Analysis, Treasury Board of Canada Secretariat
- Normand Laframboise, Senior Communications Advisor, Research and Advertising, Industry Canada
- Line Patry, Manager, Public Opinion Research, Industry Canada
- Darryl Somers, Manager, Channel Intelligence and Innovation, Web Channel Office, Service Canada
- Jacqueline (Jackey) Mayda, Director, Special Surveys Division, Statistics Canada

Market Research Industry
- Doug Church, Marketing Research and Intelligence Association (MRIA), Phase 5 Consulting Group
- Cam Davis, Marketing Research and Intelligence Association (MRIA), Social Data Research

Academic Community
- Scott Bennett, Faculty of Public Affairs, Carleton University
- Sylvain Sénécal, Department of Marketing, HEC Montréal

Sage Research acted as facilitator and prepared the report on the Panel’s deliberations. The Panel met twice (one conference call and one in-person meeting) and participated in a series of three online discussion boards. Other methods including telephone calls and e-mails were used to consult with members of the Panel, as appropriate.

The Panel’s work took place between December 2007 and March 2008.

Report Overview

The report summarizes the recommendations of the Panel, expressed as standards or guidelines:

Standards
Practices that should be REQUIREMENTS for all online studies conducted by the Government of Canada.

Guidelines
Practices that are RECOMMENDED, but would not be requirements; that is, known good practices or criteria that serve as a checklist to ensure quality research but are not necessarily applied to every study.

While it was not the mandate of the Panel to reach consensus, the Panel did do so on most aspects of data quality standards and guidelines.

The standards and guidelines are organized under six main sections:
- Pre-field Planning, Preparation and Documentation
- Sampling
- Data Collection
- Success Rate
- Data Management and Processing
- Data Analysis/Reporting and Survey Documentation
Standards and Guidelines for Pre-Field Planning, Preparation and Documentation

This section details the standards and guidelines for pre-field components of an online survey project:
- Statement of Work
- Proposal Documentation
- Questionnaire Design
- Survey Accessibility
- Pretesting

Statement of Work

Most of the standards and guidelines for the Statement of Work (SOW) apply to both online and telephone surveys. Accordingly, most are taken from the report by the Advisory Panel on Telephone Public Opinion Survey Quality (Telephone report, for short) (www.pwgsc.gc.ca/por or www.tpsgc.gc.ca/rop) with some minor modifications to adapt these to online surveys specifically.

As well, the following commentary from the Telephone report is pertinent:
- The Telephone Advisory Panel endorsed a principle stated by the European Society of Opinion and Market Research (ESOMAR) on the role of the Statement of Work (SOW) in the research process: “The more relevant the background information the client can give the researcher, the greater the chances are that the project will be carried out effectively and efficiently.”
- The SOW is an important document as an internal Government of Canada (GC) tool, because it is:
  - A guide to the overall research process for the department
  - A central document stating the needs of the department
STANDARDS FOR THE STATEMENT OF WORK

A Statement of Work must be a written plan that provides the research supplier with the following information:

Background
- To provide context for the research, describe events/decisions that led to why research is required/being considered.
- Include information/available resources to help the contractor better understand the subject matter of the survey (e.g., past research, web sites).

Objectives, research questions
- Include, in the information requirements, the broad research questions that the study needs to answer. This will help in the development of the survey questionnaire, the data analysis and the report outline.
- If relevant, prioritize the information required to ensure data quality in the event of budgetary or scheduling constraints.

Purpose, how the research will be used
- Provide information on the types of decisions or actions that are to be based on the findings, i.e., (a) what activities it will support; (b) how; (c) who will use the information.
- Include any internal or external commitments regarding scheduling/timelines that may rely on the research findings (e.g., reporting requirements, events).

Target Population
- Wherever necessary and possible, indicate:
  - The demographic, behavioural and/or attitudinal characteristics of the target population for the survey
  - Whether or not Internet non-users are part of the target population
  - Information or estimates available on the size/incidence of these groups

Sample Size Assumptions
- To help the supplier generate a reasonable sample size assumption for costing purposes, at least one of the following indicators must be included:
  - Sample size required
  - Level of precision required, if applicable
  - Study budget

Sample Considerations
- Provide any relevant information on the sampling frame, e.g., the availability of lists.
- Indicate the expected sampling method – i.e., probability, nonprobability, or attempted census.
- Indicate any requirements to be taken into consideration in finalizing the total sample size and structure/composition of the sample, e.g., regional requirements, demographic groups, population segments (those aware vs. those not aware; users vs. non-users, etc.).

Data Collection Method
- If relevant, ask for input on other data collection approaches.

Deliverables
- List major project milestones with anticipated timelines.
- At minimum, details of reporting should reference all requirements identified by the Public Opinion Research Directorate (PORD).

GUIDELINES FOR THE STATEMENT OF WORK

Other useful information that may be included in a Statement of Work includes the following:

Data Analysis
- Identify any need for special analyses, e.g., segmentation.
Proposal Documentation

Many of the standards for Proposal Documentation found in the Telephone report apply equally to online surveys, although some modifications specific to online surveys were necessary.

As general context for the standards on Proposal Documentation, the following commentary from the Telephone report is pertinent:
- There is a clear delineation between the SOW and the Research Proposal:
  - The SOW is what the GC needs to know, from whom and when it needs this information
  - The Research Proposal is what the research firm will do to meet the needs of the GC and how this will be done
  Therefore, there is much more detail required from research firms in the Proposal Documentation than is required from the GC in the SOW.
- There is a need to find a balance between all the information required by the GC as a response to a SOW, and ensuring all data quality issues are also covered, but without overburdening either the research supplier or the GC.
- There is a need for consistency in Proposal Documentation to make it easier to assess/confirm that the research firm has provided all the categories of information and the detail required in each proposal.
In proposal documentation, the different ways
the Government of Canada contracts public opinion
research also must be considered. Some contracts
for online surveys will be issued to firms using a
Standing Offer, while others may be awarded through
competition on MERX or as sole source contracts
(e.g., syndicated studies, omnibus surveys). To get on
a Standing Offer, firms are required to go through a
rigorous competitive bidding process. Firms selected
through such a process will have already committed
to certain practices which are also required elements
in a proposal. For example, there may be various
quality control procedures required in proposal
documentation to which firms on a Standing Offer
will have already committed as their standard
practices. In these cases, it is suggested the research
firms not be required to describe these again in each
proposal they submit.
The approach used in the Telephone report is
maintained here – that is, an asterisk has been placed
next to items that might already have been addressed
by firms in their Standing Offer submissions and
which they would not be required to address again in
each proposal submission against a call-up. Firms on a
Standing Offer would be required to address only the
non-asterisked items.
Firms awarded online survey contracts who are
not on a Standing Offer would be required to address
all required elements in their proposals.
STANDARDS FOR PROPOSAL DOCUMENTATION

The Research Proposal must be a written document that uses the following headings and provides the following information, at a minimum. Note that an asterisk (*) identifies the areas that apply only to proposals from firms not awarded PWGSC’s Quantitative Standing Offer.

A: Introduction

Purpose
- Describe the firm’s understanding of the problem/issues to be investigated and how the GC will use this information.

Research Objectives
- Detail the information needs/research questions the research will address.
B: Technical Specifications of the Research

Overview
- Provide a brief statement summarizing:
  - Data collection method, including rationale for proposed methodology
  - Total sample size
  - Target population

Sample/Sampling Details
- Provide details related to target population:
  - The definition of the target population in terms of its specific characteristics and geographic scope, including the assumed incidence of the population and any key sub-groups
  - Whether or not Internet non-users are part of the target population
  - The total sample size and the sample sizes of any key sub-groups
- Describe the sample frame, including:
  - The sample source
  - Sampling procedures, including what sampling method will be used – i.e., probability, nonprobability, attempted census
  - Any known sampling limitations and how these might affect the findings
- Explain respondent selection procedures.
- Indicate the number of recontact attempts and explain recontact attempt procedures.
- Define respondent eligibility/screening criteria, including any quota controls.

Response/Success Rate and Error Rate
- State the target response/success rate or the target response/success rate range for the total sample for online and multi-mode surveys and, if relevant, for key sub-groups.
- For probability samples, state the level of precision, including the margin of error and confidence interval for the total sample and any key sub-groups.
- For nonprobability samples, state whether or not nonresponse bias analyses are planned. If they are planned, the general nature of the analyses should be described. If they are not planned, a rationale must be stated.
- Indicate any other potential source of error based on the study design that might affect the accuracy of the data.
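The technical specifications above require probability-sample proposals to state a margin of error and confidence interval. As an illustration only (not part of the Panel's standards), the conventional margin-of-error formula for a proportion can be sketched in Python; the function name is hypothetical, and the 1.96 critical value is the standard choice for a 95% confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for an estimated proportion.

    n: completed sample size
    p: assumed proportion (0.5 is the conservative worst case)
    z: critical value (1.96 corresponds to a 95% confidence level)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A completed sample of 1,000 gives roughly +/- 3.1 percentage points,
# 19 times out of 20 (95% confidence level).
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

Note that this formula applies only to probability samples; as the report stresses elsewhere, margins of error cannot properly be quoted for nonprobability samples.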
Description of Data Collection
- State the method of data collection.
- *For Access Panels, a description of the following must be provided, at minimum:
  - Panel size
  - Panel recruitment
  - Project management
  - Panel monitoring
  - Panel maintenance
  - Privacy/Data protection
  Note: When multiple panels are to be used in the execution of the survey, this must be disclosed and standards for use of multiple panels followed.
- Provide details on any incentives/honoraria, including rationale.
- Describe how language requirements will be addressed.
- For nonprobability samples:
  - Provide the rationale for choosing a nonprobability sample
  - Describe the steps that will be taken to maximize the representativeness of the nonprobability sample
- *Describe quality control procedures related to data collection, including at minimum:
  - Detecting and dealing with satisficing; as a guideline it is recommended the cost impact of any measures taken to detect and deal with satisficing be described
  - Fieldwork validation methods and procedures
- *Describe how:
  - The rights of respondents will be respected, including if relevant the rights of children, youth and vulnerable respondents
  - Respondent anonymity and confidentiality will be protected
- Describe any accessibility provisions in the research design to facilitate participation by respondents who are visually or physically disabled and who may be using adaptive technologies.
- For multi-mode surveys, provide a rationale for using a multi-mode method rather than a single-mode method.

Description of Data Processing/Data Management
- Describe any weighting required.
- *Describe quality control procedures related to data processing/data management, including at minimum:
  - Coding/coding training
  - Data editing
  - Data tabulation
  - File preparation/electronic data delivery

Data Analysis/Reporting
- Describe how the data will be analyzed related to the objectives/research questions, including any special analyses (e.g., segmentation).
- Provide an outline of the sections of the report.
Questionnaire Design
- Provide either an outline of the survey questionnaire or list the topics that will be covered in the questionnaire, including specifying the number of open-ends.
- Provide an estimate of the length of the questionnaire. If the survey is estimated to require more than 20 minutes to complete, state the rationale for the length.
- Describe how the questionnaire will be pre-tested, including:
  - The objectives of the pre-test
  - The method for the pre-test
  - The total number of pre-test questionnaires to be completed in total and by key sub-groups (e.g., language, age, gender)
  - How the results of the pre-test will be documented and communicated to the GC
  Note: A rationale must be provided if:
  - No pre-test is to be conducted
  - Fewer or more than 30 pre-test questionnaires are to be completed

Deliverables
- List all deliverables including their coverage, scope, format, means of delivery and number of copies, including at minimum:
  - Questionnaire(s), including pre-test, if relevant
  - Data tabulation/processing
  - The report format(s), including the number of copies, language of report
  - The nature, location and number of presentations, including the language of presentations

Project Schedule
- Provide a detailed workplan with dates and identify responsibilities.
C: Project Cost

Project Cost
- Cost information must be presented in the format designated by PWGSC.
Questionnaire Design

The starting point for the Panel’s deliberations was the section in the Telephone report on Questionnaire Design and the standards and guidelines that had been developed for telephone surveys. The Panel was asked to consider:
- If any changes were required to the general standards and guidelines for online surveys
- The appropriateness of the guideline on questionnaire length (a) for online surveys as opposed to telephone surveys, and (b) in light of new guidelines about questionnaire length issued by the Marketing Research and Intelligence Association (MRIA) in their Code of Conduct and Good Practice, December 2007
- The need for a standard on questionnaire approval for online surveys

The Online Advisory Panel agreed that, with the exception of the guideline on survey length, the standards and guidelines for questionnaire design for telephone surveys apply equally to online surveys, the general principle being that only very broad standards and guidelines are required.

With regard to survey length, the Panel supported maintaining the designation of 20 minutes as a “reasonable” length for online surveys, but suggested adding language to flag that shorter online surveys may be preferable. A few Panelists suggested referring to the possibility of doing longer online surveys, albeit with the understanding that this should be more the exception than the rule.

There was consensus by the Panel to add a standard on questionnaire approval for online surveys. This would serve as a reminder to GC research buyers of the importance of approving both the wording/content of the survey and its online appearance and functionality.

There was general agreement by the Panel on the following standards and guidelines for Questionnaire Design.

STANDARDS FOR QUESTIONNAIRE DESIGN
- Survey questionnaires must be designed:
  a) To collect only the information essential to the objectives of the study, and
  b) To minimize the burden placed on respondents while maximizing data quality
- The following are required elements of all Government of Canada online survey questionnaires:
  a) Inform respondents of (i) the subject and purpose of the study and (ii) the expected length of the interview
  b) Identify the research firm and either the Government of Canada or the department/agency sponsoring the survey
  c) Inform respondents that their participation in the study is voluntary and the information provided will be administered according to the requirements of the Privacy Act
  d) Inform respondents briefly of their rights under the Access to Information Act, most importantly the right to access a copy of the report and their responses
- Unless the client provides the translation, firms are required to translate the questionnaire into the other official language (unless interviewing is to be unilingual), and where required into other languages. All translations must be in written form.
- Government of Canada approval of an online survey questionnaire must include approval of the appearance and functionality of the questionnaire in its online form – i.e., as it would be experienced online by respondents.
GUIDELINES FOR QUESTIONNAIRE DESIGN

The following strategies may be used to achieve the standards:
- The questionnaire is of reasonable length, i.e., requiring 20 minutes or less to complete. Shorter surveys are preferred over longer surveys. Longer surveys may be acceptable in some circumstances, depending on such factors as the target group, the subject, the possibility of respondents completing the questionnaire in parts, or where permission has been obtained in advance. However, the risk posed by an overly long questionnaire is that it may well result in significant nonresponse or drop-offs, which in turn can adversely affect data quality. The rationale for surveys longer than 20 minutes should be discussed in the Research Proposal.
- The introduction to the survey and the respondent screening section are well-designed and as short as possible in order to maximize the likelihood people will agree to complete the questionnaire.
- Questions are clearly written and use language appropriate to the target group.
- The questionnaire is designed for clear and smooth transition from question to question and from topic to topic.
- Methods to reduce item non-response are adopted (e.g., answer options match question wording; “other,” “don’t know” and “refused” categories are included, as appropriate).

Survey Accessibility

The Treasury Board of Canada Secretariat has issued accessibility standards for GC websites and these standards will impact online surveys hosted on GC websites. Depending on how the mandate and scope of the standards are interpreted, they may also impact GC surveys hosted on third-party websites and possibly syndicated online surveys purchased by the GC.

Accessibility is a very important matter for GC online surveys. However, the Advisory Panel did not include representatives from the group within Treasury Board of Canada Secretariat that enforces GC accessibility standards, nor did the Panel have access to legal advice pertaining to accessibility requirements. Therefore, the Advisory Panel did not feel it could unilaterally attempt to interpret how the Government of Canada’s Common Look and Feel requirements should be applied to online surveys, whether for surveys hosted on a GC website or for surveys hosted by other parties.

The Panel did agree to make the following recommendations to the Public Opinion Research Directorate (PORD), and also agreed to add a standard to Proposal Documentation (Description of Data Collection) requiring online survey proposals to address any survey accessibility provisions.

Recommendations
- Recommendation to PORD re clarification: The Advisory Panel recommends that PORD explore with the relevant program and legal authorities within Treasury Board of Canada Secretariat both what are best practices with respect to online survey accessibility, and what are minimum acceptable practices.
- Recommendation re Standing Offer requirements: The Advisory Panel recommends that, in the upcoming Request for Standing Offers, bidders providing online survey research discuss in their proposals how they will work with respondents who are visually or physically disabled and who may be using adaptive technologies online. Bidders must demonstrate how these individuals can be included in the research.
With regard to these recommendations, there were two additional comments made by some members of the Panel:
- There should be provision for consultation with the research industry to better understand the types of software currently in use by third parties (and in which some companies have heavily invested) before setting minimum requirements.
- Once minimum acceptable standards are established, there will be a need for PORD to implement a plan to communicate these to potential online survey providers.
Pre-testing

There was consensus by the Panel to adopt the standards and guidelines for pre-testing that had been recommended for GC telephone public opinion surveys, with the following modifications:
- Related to the role of pre-testing for online survey questionnaires, language has been added to reflect that there are multiple aspects of an online survey that should be considered in a pre-test, including both content and appearance/functionality.
- Related to stating a minimum number of pre-test questionnaires to be completed, the Panel generally supported specifying a number of pre-test questionnaires that must be completed. However, there was some debate about what the minimum number should be, particularly in those instances when the sample is limited or when major revisions may be required to various aspects of a survey. Language has been added to the standard below stating a minimum target number for pre-testing which notes that other pre-test numbers may be justifiable. There was discussion among the Panel on the potential value of a step-wise approach when pre-testing an online questionnaire – that is, start by completing a subset of pre-tests, then modify the questionnaire as appropriate based on the preliminary results, then complete another subset of pre-tests using the modified questionnaire. The specification of a target minimum number does not preclude a step-wise pre-test process. The only requirement is that the total target number of pre-tests be completed.
- Related to pre-test documentation, the Panel felt the summary of pre-test results needs to include documentation of specific aspects of the pre-tests (e.g., both observational and factual data). Specific reporting requirements have been added to the standard below.
STANDARDS FOR PRE-TESTING
- In-field pre-testing of all components that may influence data quality and respondent behaviour is required for a new online survey questionnaire or a substantially revised questionnaire used in a previous survey. For online surveys, this includes both the content/wording of the questionnaire and the online appearance/functionality of the survey.
- A periodic review of questionnaires used in ongoing or longitudinal surveys is required.
- A minimum of 30 pre-tests are to be completed in total, 15 in English and 15 in French. When fewer or more than 30 pre-tests are to be completed, this must be justified in the Research Proposal.
- The result(s) of the pre-test(s) must be documented, i.e., at minimum:
  - A description of the pre-test approach and the number of pre-tests completed
  - A summary of results including:
    - Observations on how respondents answered the questions
    - Occurrence and description of drop-offs
    - Questionnaire completion time
    - Responses to any special pre-test questions (e.g., respondents’ comments on the survey questionnaire/experience)
  - A record of the decisions/changes made as a result of the pre-test findings
- For syndicated online studies, research firms are required (a) to demonstrate that the survey questionnaire has been pre-tested, and (b) to provide details on the pre-test approach and number of pre-tests completed.

GUIDELINES FOR PRE-TESTING
- Pre-tests should not be included in the final dataset unless (a) there were no changes to the questionnaire, and (b) the pre-test was implemented in the exact same manner as in the final survey design.
- Cognitive pre-testing (using qualitative methods) should be considered prior to field testing for new survey questionnaires or where there are revisions to wording or content of existing questionnaires, and particularly for complex surveys, highly influential surveys or surveys that are planned as ongoing or longitudinal. The main uses of cognitive pre-testing are:
  - To provide insight into how respondents react to a questionnaire:
    - Their understanding of the wording of questions and the flow of the questionnaire
    - Their ability to respond to questions accurately
    - Their thought processes as they answer the questions
  - To identify the impact of changes to an existing questionnaire (e.g., a tracking survey)
- For complex studies, highly influential surveys or surveys that are planned to be ongoing or longitudinal, a more complete in-field test of other components of a survey, not just the survey questionnaire, may be desirable. This may be a pilot test that, on a small scale, duplicates the final survey design including such elements as data capture, analysis of results, etc.
- If there is a need to pre-test the questionnaire on criteria other than language, at least 4 pre-tests should be completed with each sub-group.
- Whenever possible, schedule and budget permitting, omnibus survey questions should at least be pre-tested in-field. Whenever a pre-test has been conducted, the details of the pre-test should be documented, including the number of pre-tests completed.
Standards and Guidelines for Pre-Field Planning, Preparation and Documentation
The Advisory Panel on Online Public Opinion Survey Quality
Standards and Guidelines
For Sampling

The starting point for the Panel was a section in
the Telephone report titled Development of Sampling
Frames and Sampling.

The following topics were considered by the Panel:

• Modification of a general "sampling procedures"
  standard applicable to all sampling methodologies

• Modification of the sampling standard for
  probability surveys

• The circumstances under which a website visitor
  intercept sample qualifies as a probability
  sample

• Nonprobability surveys: There is no online
  equivalent of RDD (Random Digit Dialing)
  telephone sampling methodology for drawing
  probability samples from the general public, and
  as a result nonprobability surveys may be more
  prevalent in the online survey environment.
  Several sampling topics were considered with
  respect to nonprobability surveys:
  - Modification of a standard pertaining to
    nonprobability survey sampling that was in
    the Telephone report
  - A standard for justification of use of
    nonprobability surveys
  - A standard for maximizing representativeness
    of nonprobability survey samples
  - Statistical treatment: reporting of a margin
    of sampling error, and statistical
    significance tests of differences
  - Whether or not the Advisory Panel should
    provide guidance on setting sample size
  - Whether or not the Advisory Panel should
    provide guidance on appropriate/acceptable
    uses of nonprobability surveys

• Attempted census surveys: The Panel considered
  whether or not attempted census surveys should be
  broken out as a separate sampling methodology for
  purposes of specifying standards and guidelines.
  Key to this decision was whether or not the
  statistical treatment of data from attempted
  census surveys is different from that appropriate
  for probability surveys.
General Standard for
Sampling Procedures

In the Telephone report, the following standard is
stated in connection with the heading "Sampling
Procedures": All research firms must clearly state
the target group (universe) definition for the
research study and then clearly state the method
used to obtain a representative cross-section
sample of this target group.

The Online Panel recommended a revised version of
this standard which adds the following
requirements: (1) there must be an explicit
indication of whether or not Internet non-users are
part of the target population definition for a
survey, and (2) the sampling method must be stated
– i.e., probability, attempted census, or
nonprobability.

STANDARDS
GENERAL SAMPLING PROCEDURES

All research firms must:

• Clearly state the target group (universe)
  definition for the research study; in the case of
  online surveys this includes explicit
  identification of whether or not Internet
  non-users are part of the target group definition

• Clearly state the method(s) used to obtain a
  sample of this target group, including whether
  the method was a probability survey, a
  nonprobability survey, or an attempted census

Sampling Standard for
Probability Surveys

In the Telephone report, sampling standards are
given for random probability sampling. The Advisory
Panel recommended the adoption of these standards
for online probability surveys, with wording
changes to reflect online rather than telephone
methodology.

STANDARDS
PROBABILITY SURVEYS

• The list or sample source must be clearly stated,
  including any of its limitations/exclusions in
  representing the universe for the target sample
  and the potential for bias.

• A full description of the sample design and
  selection procedures will be stated, including:
  - Sample stratification variables (if any)
  - Any multi-stage sampling steps taken
  - At each sampling stage, the method of
    attaining a random selection shall be
    explained, and any subsets of the universe
    that have been excluded or underrepresented
    shall be stated (e.g., Internet non-users).
    Note: Whenever possible, an estimate of the
    percentage of the universe that has been
    excluded or underrepresented should be
    provided.
  - The number of attempted recontacts and the
    procedure for attempted recontact should
    be stated
  - Respondent eligibility/screening criteria
    will be defined, including any oversampling
    requirements (e.g., region, gender)
• Assuming that proper probability sampling
  procedures have been followed, the sampling error
  should then be stated based upon a given sample
  size at a given confidence level, but research
  firms must take care to:
  - Ensure that clients know that sampling error
    based upon a subset of the total sample will
    not be the same as that based on the total
    sample
  - Where possible, express sampling error in
    terms relevant to the specific nature of the
    most important or typical variables in a
    survey

• State that there are many potential non-sampling
  sources of error and include reference to other
  possible sources of error in the study in order
  to not give a misleading impression of overall
  accuracy and precision.
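The familiar arithmetic behind both cautions on sampling error can be sketched as follows. This is a minimal illustration for a proportion from a simple random sample; the worst-case p = 0.5 and the sample sizes are hypothetical, and the function name is ours, not from the report:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion p,
    estimated from a simple random sample of size n
    (z = 1.96 corresponds to the 95% confidence level)."""
    return z * math.sqrt(p * (1 - p) / n)

# Total sample of 1,000 completes, worst case p = 0.5:
total = margin_of_error(1000)   # ~0.031, i.e. +/- 3.1 percentage points
# A regional subset of 150 respondents has a much wider margin:
subset = margin_of_error(150)   # ~0.080, i.e. +/- 8.0 percentage points
print(round(total * 100, 1), round(subset * 100, 1))  # 3.1 8.0
```

The subset calculation makes the first caution concrete: quoting only the total-sample margin would overstate the precision of any subgroup finding.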
Website Visitor Intercept Samples

The Advisory Panel was asked to comment on the
circumstances under which a website visitor
intercept sample qualifies as a probability sample.
There was a consensus among Panel members on both
when a website visitor sample qualifies as a
probability sample, and on some guidelines for
conducting website visitor intercept surveys.

A website visitor intercept sample qualifies as a
probability sample if both of the following
conditions are met:

• Over the time period the fieldwork is conducted,
  the number of visitors to the website can be
  estimated, and survey invitations are given to a
  random sample of these visitors.

• The population is defined as visitors to the
  website over the time period during which the
  fieldwork was conducted.

A website visitor intercept sample instead
qualifies as an attempted census if, in the first
condition, survey invitations are given to all
visitors during the fieldwork period rather than to
a random sample of visitors.

The latter point (defining the population by the
fieldwork time period) is an important one to keep
in mind when planning a website visitor intercept
survey, and indicates the need to give careful
consideration to defining the time period for the
survey. For example, it may be desirable to have an
extended fieldwork period in order to broaden the
target population.

For website visitor intercept studies, survey
results cannot be used to generalize to populations
other than the one for which the sample was
designed: while the intercept sampling process
itself may be consistent with probability sampling,
the definition of the target population directly
impacts whether, for analysis and reporting
purposes, the sample is to be treated as a
probability sample or as a nonprobability sample.
For example:

• If the survey fieldwork was done over a one-month
  period, but the survey report defines the target
  population as "past-year visitors to the
  website", then the sample would have to be
  treated as a nonprobability sample.

• If the survey intercepts visitors at one
  particular website, but the survey report defines
  the target population as "visitors to government
  websites", then the sample would have to be
  treated as a nonprobability sample.
GUIDELINES
FOR CONDUCTING WEBSITE
VISITOR INTERCEPT SURVEYS

The following are recommended best practices when
conducting a website visitor intercept survey:

• Examine website visitor statistics to determine
  common entry points to the site. Simply placing
  the invitation redirect on the home page may not
  be enough to get a good sample of visitors to
  the website.

• Use an appropriate methodology to maximize
  accessibility, e.g., a page redirect method.

• Take steps to minimize the likelihood that a
  visitor will be invited multiple times to take
  the survey.
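One way to implement both the random selection of visitors and the protection against repeat invitations is to make the invitation decision a deterministic function of a stable visitor identifier. This is a sketch under our own assumptions (the 10% invitation rate and cookie-based identifier are hypothetical, not from the report):

```python
import hashlib

# Hypothetical invitation rate: invite roughly 1 in 10 visitors.
INVITE_RATE = 0.10

def should_invite(visitor_id: str) -> bool:
    """Deterministic random selection of visitors for a survey invitation.

    A stable visitor identifier (e.g., a first-party cookie value) is
    hashed into [0, 1); the visitor is invited if the value falls below
    the sampling rate. Because the decision is a pure function of the
    identifier, a returning visitor always gets the same decision, so
    nobody is invited a second time on a later page view or visit.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    u = int(digest[:15], 16) / 16 ** 15  # pseudo-uniform value in [0, 1)
    return u < INVITE_RATE

# The same visitor always maps to the same decision:
assert should_invite("cookie-abc123") == should_invite("cookie-abc123")
```

In practice the invitation would also be suppressed once shown and answered or declined; this sketch covers only the selection and deduplication logic.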
Nonprobability Surveys
Overview
A significant challenge for doing online surveys of the
public is generating a sample from which actionable
and statistically sound results can be obtained. Notably,
there is no online equivalent of the RDD (Random
Digit Dialing) telephone sampling methodology for
drawing probability samples of the public.
Access panels operated by research suppliers as
well as those developed and operated by departments/
agencies within the Government of Canada are
significant from a public opinion research (POR)
perspective, because they can potentially be used
to conduct online surveys of the public. However,
these panels are often considered to be based on
nonprobabilistic sampling, and there are statistical
limitations that result from using a nonprobability
sample: accuracy is problematic, no margin of
sampling error can be reported, and often no
significance testing of differences among subgroups
can be reported.
Considerable work is being done by the research
industry to overcome these statistical limitations,
and there are promising developments and results
– e.g., prediction of U.S. voting outcomes (note: this
is cited as an example because this is an area where
the industry has published accuracy data). The
accuracy achieved in published results is impressive,
particularly in that it is predicting an outcome for a
population that includes Internet non-users. However,
further methodological advancements and empirical
validation are needed before nonprobability surveys
can be used with the same confidence as probability
surveys in terms of accuracy and precision in
describing a target population.
At the present time, the results of nonprobability
surveys should be used with caution:

• Where the stakes are high in terms of impact on
  key policy, program or budget decisions, use of a
  probability sample in the research design is to
  be strongly preferred. Nonprobability surveys are
  good for exploratory research, to help in
  building understanding of the range and types of
  public opinion on a topic, and for experimental
  designs to compare the impact of different
  stimuli (e.g., different ad concepts, different
  web designs, etc.).

• The standards and guidelines recommended below
  for nonprobability sampling:
  - Formalize the cautions in using nonprobability
    samples, in terms of requiring consideration
    of certain issues and disclosure of these
    considerations (e.g., the Justification
    standard, Sampling standard, Statistical
    Treatment standard, and Assessment of
    Representativeness guideline)
  - Encourage attention to maximizing the
    potential accuracy of the results (the
    Maximizing Representativeness standard and
    Assessment of Representativeness guideline)
The Advisory Panel recommends that the GC monitor
methodological developments in online
nonprobability surveys, and actively participate in
the evolution of this survey methodology by
conducting methodological research using its
existing body of POR surveys. There are grounds for
optimism that the scope of appropriate uses for
nonprobability surveys, provided certain
methodological conditions are met, will expand in
the future.
There was extensive consideration of various topics
associated with nonprobability surveys by the
Online Advisory Panel. The topics considered can be
grouped under two headings:

Standards and guidelines:

• General sampling standard for
  nonprobability surveys

• Justification of use of nonprobability surveys

• Maximizing representativeness of
  nonprobability surveys

• Statistical treatment: margin of sampling error;
  statistical significance tests of differences,
  including reporting of differences
  among subgroups

• AAPOR (American Association for Public Opinion
  Research) statement on why margin of sampling
  error should not be reported

"Guidance":

• Guidance on setting sample sizes

• Guidance on appropriate/acceptable uses of
  nonprobability surveys

Note: All of the above items are addressed in this
section. The recommendations which affect other
sections of the report – e.g., Proposal
Documentation and Survey Documentation – have also
been incorporated into those other sections.

The Panel recommends the following standards and
guidelines related to nonprobability surveys.

STANDARDS
FOR NONPROBABILITY SURVEYS

Justification of Use of Nonprobability Surveys

• When a choice is made to use a nonprobability
  sample, that choice must be justified, in both
  the research proposal and the research report.
  The justification should take into account the
  statistical limitations in reporting on data from
  a nonprobability sample, and the limitations in
  generalizing the results to the target
  population.

Sampling for Nonprobability Samples

• As for probability sampling, the list or sample
  source must be stated, including its limitations
  in representing the universe for the target
  sample.

• A full description of the regional, demographic
  or other classification variable controls used
  for balancing the sample to attempt to achieve
  representativeness should be provided.

• The precise quota control targets and screening
  criteria should also be stated, including the
  source of such targets (e.g., census data or
  other data source).

• Deviations from target achievement should be
  shown in the report (i.e., actual versus target).

Maximizing Representativeness of
Nonprobability Samples

• To the extent survey results will be used to make
  statements about a population, steps must be
  taken to maximize the representativeness of the
  sample with respect to the target population, and
  these steps must be documented in the research
  proposal and in the survey report. (In this
  context, the word "representativeness" is being
  used broadly.) These steps could include, for
  example, a choice of sampling method that gives
  greater control over the characteristics and
  composition of the sample (e.g., access panel vs.
  "river sampling"), use of demographic and other
  characteristics in constructing the sample, and
  use of weighting schemes.

• The survey report must discuss both the likely
  level of success in achieving a representative
  sample with respect to the key survey topic
  variables, and the limitations or uncertainties
  with respect to the level of representativeness
  achieved.
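The weighting schemes mentioned under the "maximizing representativeness" standard can be as simple as cell weighting against external targets. The sketch below uses a single balancing variable and made-up population shares; all names and figures are hypothetical illustrations, not prescriptions from the report:

```python
# Minimal cell-weighting sketch: weight each respondent so the
# weighted sample matches hypothetical population targets
# (e.g., census shares for a region variable).
sample = [  # one record per respondent
    {"region": "East"}, {"region": "East"}, {"region": "East"},
    {"region": "West"},
]
population_share = {"East": 0.5, "West": 0.5}  # hypothetical targets

n = len(sample)
sample_share = {g: sum(r["region"] == g for r in sample) / n
                for g in population_share}
# Weight = target share / achieved share, per balancing cell:
weights = [population_share[r["region"]] / sample_share[r["region"]]
           for r in sample]

# Weighted region shares now match the targets:
east_w = sum(w for r, w in zip(sample, weights) if r["region"] == "East")
assert abs(east_w / sum(weights) - 0.5) < 1e-9
```

Real balancing typically uses several variables at once (e.g., region by age by gender), but the target-over-achieved logic per cell is the same.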
Statistical Treatment of Nonprobability Samples

• There can be no statements made about margins of
  sampling error on population estimates when
  nonprobability samples are used.

• The survey report must contain a statement on why
  no margin of sampling error is reported, based on
  the following template: "Respondents for this
  survey were selected from among those who have
  [volunteered to participate/registered to
  participate in (department/agency) online
  surveys]. [If weighting was done, state the
  following sentence on weighting:] The data have
  been weighted to reflect the demographic
  composition of (target population). Because the
  sample is based on those who initially
  self-selected for participation [in the panel],
  no estimates of sampling error can be
  calculated." This statement must be prominently
  placed in the descriptions of the methodology in
  the survey report.

• For nonprobability surveys it is not appropriate
  to use statistical significance tests or other
  formal inferential procedures for comparing
  subgroup results or for making population
  inferences about any type of statistic. The
  survey report cannot contain any statements about
  subgroup differences or other findings which
  imply statistical testing (e.g., the report
  cannot state that a difference is "significant").

  Nevertheless, it is permissible to use
  descriptive statistics, including descriptive
  differences, appropriate to the types of
  variables and relations involved in the analyses.
  Any use of such descriptive statistics should
  clearly indicate that they are not formally
  generalizable to any group other than the sample
  studied, and there cannot be any formal
  statistical inferences about how the descriptive
  statistics for the sample represent any larger
  population.

  The exception to the rule against reporting
  statistical significance tests of differences is
  nonprobability surveys that employ an
  experimental design in which respondents are
  randomly assigned to different cells. In this
  case, it is appropriate to use and report on
  statistical significance tests to compare results
  from different cells in the design.

GUIDELINES
FOR NONPROBABILITY SURVEYS

Assessment of Representativeness of
Nonprobability Samples
1) Evidence on how well the obtained sample in a
   nonprobability survey matches the target
   population on known parameters should be
   presented where possible. For this purpose, use
   high-quality data sources such as Statistics
   Canada data or well-designed probability surveys
   done in the past.

2) Contingent on resources and on survey
   importance, consider the following:
   - Proactively building into different surveys
     common questions that could be used on an
     ongoing basis to compare results obtained
     using different survey methodologies – e.g.,
     the results for a common question could be
     compared when it is asked in a telephone
     probability survey of the target group versus
     in an online nonprobability survey of the
     target group.
   - Using a multi-mode method for a survey
     project in order to, for example, allow
     comparison of the results of a probability
     survey component with the results of a
     nonprobability survey component, or allow
     exploration of questionnaire mode effects in
     order to assess whether one mode might elicit
     more realistic, honest, or elaborated
     responses than another mode.
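The comparison against known parameters described in guideline 1) might look like the following sketch. The benchmark shares are hypothetical stand-ins for census figures, and the variable and group labels are ours:

```python
# Compare the achieved sample distribution to a known benchmark on a
# parameter such as age group (figures are hypothetical, standing in
# for e.g. Statistics Canada census data).
benchmark = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}
achieved  = {"18-34": 0.41, "35-54": 0.35, "55+": 0.24}

for group, target in benchmark.items():
    gap = achieved[group] - target
    print(f"{group}: achieved {achieved[group]:.0%}, "
          f"benchmark {target:.0%}, gap {gap:+.0%}")
```

A table of gaps like this, reported alongside the survey results, gives readers some perspective on where the obtained sample over- or under-represents the target population.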
Statistical Treatment of Nonprobability Surveys

• Consider using other means for putting
  descriptive statistics in context, for example:
  - If similar studies have been done in the past,
    it may be useful to comment on how statistical
    values obtained in the study compare with
    those from the earlier studies.
  - For statistics such as correlations, refer to
    guides on what are considered to be low,
    medium or high values of descriptive
    correlational statistics.

The following are additional notes on the Panel's
discussion with regard to the above standards
and guidelines.

"Justification" Standard

The intent behind the "justification" standard is
to ensure that the statistical limitations
associated with nonprobability surveys are taken
into account in planning and reporting on such
surveys.

That said, as noted in the introduction to this
section, considerable work is being done by the
research industry to overcome these statistical
limitations, and there are promising developments
and results. It may be that solutions to the
statistical issues will be found in the future.

Standard for "Maximizing Representativeness"
and Guidelines for "Assessment of
Representativeness"

The word "representativeness" can be interpreted in
a variety of ways, and there was some discussion of
whether or not this term should be more tightly
defined. However, the Panel decided the term should
be used broadly for now, with the understanding
that, as online survey methodologies and experience
develop over time, the meaning of "maximizing
representativeness" could perhaps be tightened up
in the future.

With regard to the "Assessment of
Representativeness" guidelines:

• The first guideline was suggested by the Panel in
  the context of general agreement that no margin
  of sampling error can be reported for
  nonprobability surveys. As one Panelist stated,
  there could be demographic comparisons to census
  data, or comparisons to results of similar
  studies with similar dependent variables. This
  could provide some perspective on the degree of
  "error" in population estimates.

• The second guideline was suggested by the Panel
  both with respect to assessing representativeness
  for particular surveys and with respect to
  developing a broader framework to explore issues
  with online and other survey methodologies by
  means of methodological research that makes use
  of existing POR studies.

With regard to the latter aspect of the
recommended guideline:

• One suggestion was that surveys include
  attitudinal/evaluative/value variables in order
  to allow exploration over time of how the
  non-demographic component of online survey
  coverage might be changing. It was also noted,
  however, that getting agreement on these
  variables could be challenging, and might be more
  easily accomplished at the level of particular
  departments or agencies.

• Some Panelists were particularly supportive of
  the use of multi-mode methods. While multi-mode
  designs can potentially add to study cost, it was
  suggested multi-mode methods can be useful not
  only for assessing the representativeness of a
  particular survey, but also as a means of
  creating data sets that could allow exploration
  of how online survey results and coverage evolve
  over time relative to other methodologies
  (telephone in particular). The latter could be
  helpful in the future when it may be appropriate
  to revise standards and guidelines for
  online surveys.

"Statistical Treatment" Standard and Guidelines

With regard to the first two standards pertaining
to margin of sampling error:

• The Advisory Panel supports the MRIA position
  that research companies must "refrain from making
  statements about margin of error on population
  estimates when probability samples are not used."

• The disclosure statement pertaining to not
  reporting margin of sampling error is modeled
  after one proposed by AAPOR.

With regard to the standard on use of statistical
significance tests to determine whether differences
among subgroups exist, most members of the Panel
felt that it is not appropriate to report
statistical significance tests when using
nonprobability sampling.

One Panelist, however, had a different view on
reporting the results of statistical significance
tests. They felt that statistical significance
tests of subgroup differences are acceptable
provided it is stated that the results must be
interpreted with caution, since the differences may
not be representative of the population (i.e., the
significance test results may have low external
validity). This Panelist referenced the frequent
use of nonprobability samples in scientific
research in the social sciences (e.g., convenience
samples of students, consumers, etc.), noting that
in this research statistical significance tests of
subgroup differences are often reported. The
Panelist felt it reasonable that GC POR follow
these common practices in social science research.
The Panelist also felt that if a strong case can be
made for the representativeness of the
nonprobability sample, this lends additional
credence to reporting statistical differences
among subgroups.
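For the one case where the recommended standards do permit significance testing (respondents randomly assigned to experimental cells, e.g., two ad concepts), a standard two-proportion z-test is one way the comparison might be run. The cell counts below are hypothetical:

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Pooled two-proportion z-statistic for comparing two randomly
    assigned experimental cells (e.g., favourable reactions to two
    different ad concepts)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical cells: concept A, 120/300 favourable; concept B, 90/300.
z = two_proportion_z(120, 300, 90, 300)
# |z| > 1.96 indicates a difference significant at the 5% level.
print(round(z, 2))  # 2.57
```

Note that, per the standard, such a test compares the randomized cells to each other; it does not license population inferences from the nonprobability sample.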
“Guidance” on
Setting Sample Sizes for
Nonprobability Surveys
Panelists were asked what, if any, guidance should
be provided with respect to setting sample sizes for
nonprobability surveys, given that margin of sampling
error does not apply to such samples for purposes of
estimating population parameters. The issue is that
margin of error provides a metric for assessing possible
sample sizes, and without this metric other criteria
must be used to make decisions about sample size.
The Panel recommended the following guideline.
GUIDELINES
FOR SETTING SAMPLE SIZE FOR
NONPROBABILITY SURVEYS

Because nonprobability samples cannot be used for
population inferences, the number of cases has no
effect on the precision of the population estimates
generated. Nonetheless, there are factors to
consider when setting the sample size for a
nonprobability survey, including:

• Description of sample data: The sample size
  should take into account the complexity of the
  descriptive analyses that will be reported.
  For example:
  - Consider not only the total sample, but also
    the number and incidence levels of the
    subgroups within the total sample for which
    descriptive statistics will be reported.
  - For multivariate descriptive analyses, the
    sample size should be sufficient to support
    these types of analyses.

• Maximizing sample representativeness: As part of
  adhering to the "maximizing representativeness"
  standard for nonprobability samples, one needs to
  take into account the number and incidence levels
  of the various subgroups judged important for
  purposes of making a credible claim of apparent
  representativeness.
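The subgroup considerations above reduce to simple arithmetic: work backwards from the smallest subgroup to be reported on. In this sketch, the 15% incidence and the minimum of 100 cases per subgroup are hypothetical analyst judgments, not figures from this report:

```python
import math

def total_needed(subgroup_incidence: float, min_subgroup_n: int) -> int:
    """Total completes needed so that a subgroup with the given
    incidence (its expected share of the sample) yields at least
    min_subgroup_n cases. The minimum subgroup size is a judgment
    call about stable descriptive statistics, not a margin-of-error
    calculation, since no sampling error applies to nonprobability
    samples."""
    return math.ceil(min_subgroup_n / subgroup_incidence)

# Hypothetical plan: report on a subgroup expected to be 15% of the
# sample, requiring at least 100 cases for descriptive analysis.
print(total_needed(0.15, 100))  # 667
```

Repeating the calculation for each reporting subgroup and taking the maximum gives a defensible total sample size to carry into the research proposal.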
"Guidance" on Currently Appropriate – and
Inappropriate – Uses of Nonprobability Surveys
for GC Public Opinion Research

The members of the Panel fell into two camps with
respect to providing guidance on appropriate or
inappropriate uses of nonprobability samples for
POR research:

• Several Panelists essentially felt that no
  additional guidance should be stated, on the
  grounds that the various standards and guidelines
  the Panel is already recommending with respect to
  the use of nonprobability samples are sufficient.
  For reference, these standards and guidelines
  covered the following areas:

  Standards:
  - Maximizing representativeness
  - Sampling
  - Statistical treatment
  - Justification of use of nonprobability surveys

  Guidelines:
  - Assessment of representativeness
  - Statistical treatment

  Working with these standards and guidelines, it
  would be up to the researchers on a particular
  project to draw conclusions on whether and how to
  use nonprobability sampling for that project.

• Several Panelists felt the Panel should at least
  state examples of the more appropriate sorts of
  uses for nonprobability surveys in a POR context,
  even if these are not stated as formal
  guidelines.

There was more agreement among the Panel with
respect to the following points:

• While there are promising developments with
  respect to the accuracy that can be achieved
  using nonprobability samples, there is not yet
  sufficient empirical (or theoretical) validation
  of either the accuracy or precision of population
  estimates to justify using nonprobability samples
  interchangeably with probability samples.

  For example, there was some discussion of the
  ability to use nonprobability surveys to predict
  voting results in the U.S. The accuracy achieved
  in published results is impressive, particularly
  in that it predicts an outcome for a population
  that includes Internet non-users. However:

  - The published examples focus on success in
    predicting the total voting outcome, but
    questions remain on the ability to predict
    outcomes for specific subgroups. This is
    essentially a question about the likely
    accuracy of "multivariate" analyses. Such
    analyses are often important in POR surveys –
    e.g., to understand how results vary as a
    function of region, gender, age, etc.

  - One cannot assume that the ability to predict
    voting behaviour means there would be equal
    success in predicting the other types of
    dependent variables important in POR – e.g.,
    awareness, satisfaction, preference, perceived
    importance, frequency of use, etc.

  - Because of the commercial importance of
    successful prediction outcomes, there is
    reason to be concerned about a "publication
    bias" – i.e., unsuccessful predictions may not
    be publicized to the same extent as successful
    predictions.

  - It is not always clear what sampling,
    weighting and methodological steps were
    required in order to achieve a successful
    prediction outcome – and indeed sometimes this
    information is withheld in order to protect
    proprietary information. This makes it
    difficult to know what steps to take in a new
    survey on a new topic to achieve a similar
    level of accuracy in prediction.

• There was agreement among the Panel that
  nonprobability samples should be used with
  caution, even though there was no consensus on
  how specifically to characterize when it is
  appropriate to use nonprobability samples. Among
  those who did attempt to characterize
  appropriate/inappropriate uses of nonprobability
  surveys, suggestions included:

  - Exploratory research
  - Theory/perspective-building research
  - Use nonprobability surveys in a manner similar
    to focus groups/qualitative research – e.g.,
    to get ideas for what public opinions may be,
    but not to put much emphasis on the specific
    quantitative values obtained
  - Use nonprobability surveys to determine a
    direction (e.g., in policy or program design),
    but not to try to precisely estimate
    magnitudes/levels
  - Get a quick read on something before
    validating it using a probability survey
  - Experimental design, where the focus is on
    determining the existence of differences in
    response to some stimulus
  - Nonprobability surveys should not be used to
    make major program design or costing decisions
    unless no other alternative is available, and
    all possible steps are taken to put the
    results into some formal framework for
    assessing accuracy in representing the
    relevant population

  In this regard, a few Panelists were concerned
  that the emphasis being put on the statistical
  limitations of nonprobability samples might be
  perceived by some as implying that nonprobability
  methodologies will forever be relegated to
  peripheral status in POR. They felt it important
  to emphasize that work is being done on how to
  generate output representative of a population
  starting from nonprobability samples, and that
  this work has already started to deliver
  promising results. Also relevant here are the
  difficulties that telephone probability samples
  face (e.g., declining/low response rates,
  coverage issues posed by cell phone usage), and
  the need for a balanced perspective when judging
  what survey methodology will, in practical terms,
  deliver the most accurate and precise results for
  a given project.

• There was agreement among the Panel that the GC
  should continue to monitor methodological
  developments pertaining to the accuracy and
  precision of population estimates using
  nonprobability samples. This is a dynamic field,
  and it appears progress is being made. It may
  well be appropriate at some point in the
  not-too-distant future to broaden the range of
  POR studies where it would be acceptable to use
  nonprobability sampling methodologies that meet
  proven design criteria.
There were suggestions that the GC should use
existing POR surveys to conduct methodological
research with the goal of aiding in the
development of best practices in the use and
interpretation of different survey methodologies
– particularly including (but not limited to)
nonprobability online surveys and telephone
probability surveys.
Initiatives might include, for example,
the following types of activities:
- Common benchmarking measures could
be used to compare different survey
methodologies, and to monitor trends in
both demographic and non-demographic
coverage of online methodologies relative
to other methodologies.
- Post-hoc analyses could be done of the
statistical properties of nonprobability
survey data, in order to explore the accuracy
and precision of estimates. These could
include, for example, resampling techniques
(bootstrap, jackknife, etc.), and sensitivity
tests showing how predictions from a
nonprobability sample change as a result of
changes in sample size, weight factors, etc.
- Multi-mode research designs could be used
in order to facilitate direct comparisons of
different methodologies.
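As a concrete illustration of the resampling techniques mentioned above, the following sketch (Python standard library only, with hypothetical response data) computes a percentile-bootstrap interval for a proportion estimated from a nonprobability sample:

```python
import random
import statistics

def bootstrap_ci(sample, n_resamples=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the mean of `sample`."""
    rng = random.Random(seed)
    n = len(sample)
    # Resample with replacement, compute the mean of each resample, and sort
    means = sorted(
        statistics.fmean(rng.choices(sample, k=n)) for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical nonprobability-sample responses (1 = agree, 0 = disagree)
responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1]
low, high = bootstrap_ci(responses)
print(f"estimated agreement: {statistics.fmean(responses):.2f}, "
      f"bootstrap 95% interval: [{low:.2f}, {high:.2f}]")
```

Note that an interval of this kind reflects only the variability of the resampling process; it does not correct for the selection bias that distinguishes a nonprobability sample from a probability sample.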
Multi-mode Surveys
Multi-mode surveys are ones where different
methods of questionnaire administration are used.
They will often involve a combination of online and
telephone methods, although there are certainly other
possibilities as well (e.g., in-person, mail, fax).
Multi-mode surveys might be done for any of
several reasons:
- Multi-mode surveys can be a way of incorporating online questionnaire administration into a probability sample. For example, when doing a telephone RDD probability sample of the general public, one could provide respondents a choice of telephone or online questionnaire completion.
- Multi-mode methods may be useful for increasing survey response rate, if for whatever reason some respondents are more reachable through one mode than another.
- Multi-mode surveys can be valuable for exploring the strengths, weaknesses, and comparability of different modes of questionnaire administration.
- Multi-mode surveys may be helpful in accommodating different accessibility requirements, or different respondent preferences.
- Multi-mode surveys might in some circumstances reduce total survey cost by shifting some of the interviews from a higher cost method (e.g., telephone) to a lower cost method (e.g., online).

A challenge posed by multi-mode methods is the possibility of “mode effects” on responses. Notably, the online (visual, self-administered) and telephone (auditory, interviewer-administered) modes have some quite different characteristics in terms of how the respondent experiences the survey – and these can potentially lead to answering questions differently. The overall purpose of the standards below is to ensure consideration of potential mode effects in the research results.

STANDARDS
FOR MULTI-MODE SURVEYS

When a survey is conducted using multiple modes of questionnaire administration:
- The reasons for using a multi-mode rather than a single-mode method must be stated, both in the research proposal and the survey report.
- When the plan is to combine data collected via different modes in the data analyses, then steps must be taken to ensure as much comparability as possible across the different survey modes in terms of question wording and presentation of response options.
- Steps must be taken to ensure avoidance of duplicate respondents in different modes. The steps taken, and the results, must be documented.
- The survey report must discuss whether there are any data quality issues arising from combining data collected via different modes. This could include, for example, discussion of possible impacts of mode on key survey variables, the impact of any differences in response rate by mode, and nonresponse bias analyses by mode.
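The duplicate-respondent standard above requires steps to avoid duplicates across modes but leaves the method open. One possible approach, sketched here with hypothetical records and field names, is to match completes on a normalized, hashed contact identifier so that raw personal information need not be retained for the check:

```python
import hashlib

def contact_key(email=None, phone=None):
    """Normalize a contact identifier and hash it, so completes can be
    matched across modes without storing raw personal information."""
    if email:
        raw = email.strip().lower()
    elif phone:
        # Keep only digits, and only the last 10 (area code + number)
        raw = "".join(ch for ch in phone if ch.isdigit())[-10:]
    else:
        raise ValueError("need an email or phone number")
    return hashlib.sha256(raw.encode()).hexdigest()

def deduplicate(completes):
    """Keep the first complete per respondent; flag later duplicates."""
    seen, kept, duplicates = set(), [], []
    for rec in completes:
        key = contact_key(rec.get("email"), rec.get("phone"))
        if key in seen:
            duplicates.append(rec)
        else:
            seen.add(key)
            kept.append(rec)
    return kept, duplicates

completes = [
    {"mode": "online", "email": "Jane.Doe@example.com"},
    {"mode": "phone", "phone": "(613) 555-0100"},
    {"mode": "phone", "email": "jane.doe@example.com "},  # same person, second mode
]
kept, dupes = deduplicate(completes)
print(len(kept), "kept,", len(dupes), "flagged as duplicates")  # 2 kept, 1 flagged
```

Documenting the counts of flagged duplicates by mode, as the standard requires, follows directly from the `duplicates` list.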
Standards and Guidelines for Pre-Field Planning, Preparation and Documentation
Attempted Census Surveys
In a census survey, an attempt is made to collect data
from every member of a population. For example, an
organization might want to do a survey of all of its
employees. In this case, the population is “all of the
organization’s employees”, and this would qualify as
an attempted census survey if all employees are invited
to participate in the survey.
Because all members of the population are invited
to participate in the survey, rather than a randomly
selected sample, there is no margin of sampling error.
However, there are two other sampling-related sources
of error that must be considered:
- Coverage error due to discrepancies between the sample source and the population. Using the example above: perhaps the list of employee addresses is not completely up to date, and some new employees are missing from the sample source (under-coverage); or perhaps some of the e-mail addresses in the sample source are for non-employees such as contract workers (over-coverage).
- Nonresponse error: ideally, every member of the population will complete the survey questionnaire. However, this is unlikely to occur, resulting in the possibility of nonresponse error.

Because margin of sampling error does not apply to a census survey, statistical tests for differences among subgroups that rely on estimated sampling error cannot be used.
The Panel recommends the following standards:
STANDARDS
FOR ATTEMPTED CENSUS SURVEYS
- The list or sample source must be clearly stated, including any of its limitations/exclusions in representing the universe for the target sample and the potential for bias.
  Note: Whenever possible, an estimate of the percentage of the universe that has been excluded or underrepresented should be provided.
- The number of attempted recontacts and procedure for attempted recontact should be stated.
- Do not state a margin of sampling error, as this does not apply to attempted census surveys.
Standards and Guidelines
For Data Collection

This section details the standards and guidelines related to the following aspects of in-field procedures:
- Required Notification to Survey Respondents
- Use of E-Mail for Identifying or Contacting Potential Survey Respondents
- Access Panels: Access Panel Management; Use of Multiple Panels in the Execution of a Survey
- Incentives/Honoraria
- Fieldwork Monitoring: Monitoring of Online Survey Fieldwork; Detecting and Dealing with Satisficing
- Attempted Recontacts
- Validation of Respondents

Required Notification to Potential Survey Respondents

The Panel considered the following MRIA standards specific to conducting research using the Internet, as detailed in the MRIA’s Code of Conduct and Good Practice:
- Respondent cooperation must be a voluntary and informed choice
- Researcher’s identity and list sources must be disclosed
- Respondent’s anonymity must be protected
- Privacy
- Interviewing children and young people

There was a consensus to adopt the MRIA position as is with regard to the standards for the first three topics. These sections have been reproduced in the section titled, Responsibilities of Research Firms to the Public.

The following are comments on modifications made to the two remaining topics.
Privacy

The Advisory Panel agreed with adopting the MRIA privacy standard, supplemented with the sample privacy statement included in the Appendix of the MRIA’s Code of Conduct.

In addition to commenting on the MRIA privacy standard, the Panel was asked whether or not the MRIA’s privacy statements should be supplemented with the privacy-related guidance in the ESOMAR document Conducting Market and Opinion Research Using the Internet.

The MRIA standards do not list any required elements for privacy statements. However, ESOMAR does list both “standard elements for all privacy statements” and “three major variants” corresponding to three different sampling methods.

The Panel agreed that:
- It was unnecessary to reproduce the “standard elements” for all privacy statements, on the grounds that the MRIA sample privacy statement already conforms with these “standard elements.”
- The “three major variants” should be included to provide guidance on how to apply the MRIA standards to different sampling methodologies, since some specific aspects of the policy will vary by the survey method being used. For reference, these major variants are:

Variant #1: “Surveys where the respondent has, or is in the process of, voluntarily joining a panel for market research purposes”

Variant #2: “Surveys where the research agency has been given or has acquired a list of e-mail addresses in order to send invitations to participate in a survey”

Variant #3: “Intercept surveys where the respondent is selected as a 1 in n sample of visitors to a website”

Interviewing Children and Young People

Overall, there was agreement to adopt the MRIA standards related to interviewing children and young people. Several Panel members noted that it is more difficult in an online environment to identify who one is actually surveying (e.g., on a telephone survey, there may be auditory cues as to the age of the person). Additional language was added to the standards to acknowledge this difficulty.

STANDARDS
RESPONSIBILITIES OF RESEARCH FIRMS TO THE PUBLIC

1) Respondent cooperation must be a voluntary and informed choice

Voluntary Participation
- Survey Respondents’ co-operation must at all times be voluntary. Personal information must not be sought from, or about, Respondents without their prior knowledge and agreement.

Misleading and Deceptive Statements
In obtaining the necessary agreement from
Respondents, the Researcher must not mislead
them about the nature of the research or the
uses which will be made of the findings. In
particular, the Researcher must avoid deceptive
statements that would be harmful to or irritate
the Respondent.
Use of Survey Information
- Survey introductions or a survey description to which a link has been provided must assure Respondents that data will be collected only for research purposes. Any other purpose, such as rectifying a specific customer complaint, must have the proven express consent of the respondent. Researchers must not under any circumstances use personal information for direct marketing or other sales approaches to the respondent.
- Commercial researchers must not use respondents contacted in the course of conducting GC online surveys to build their own access panels.

Duration of the Online Survey
- For surveys completed on-line, respondents should be informed, at the beginning of the survey, about the length of time the questionnaire is likely to take to complete under normal circumstances.

E-mail Invitations to Respond
- Researchers should reduce any inconvenience or irritation their e-mail invitations might cause the recipient by clearly stating the invitation’s purpose in the first sentence and keeping the total message as brief as possible.

2) Researcher’s identity and list sources must be disclosed

Disclosure of the Identity of the Researcher
- Respondents must be told the identity of the Researcher carrying out the project and given contact information so that they can, without difficulty, re-contact the Researcher should they wish to do so.

Disclosure of Client
- For customer database surveys (i.e., surveys based on client-provided lists), the identity of the Client must be revealed.

Disclosure of List Sources
- Where lists are used for sample selection, the source of the list must be disclosed. Researchers should ensure that lists are permission-based for research purposes and that the data are current.

3) Respondent’s anonymity must be protected

Protection of Respondent Anonymity and Use of Information
- The anonymity of Respondents in consumer research must always be preserved unless they have given their informed and express consent to the contrary. If these Respondents have given permission for data to be passed on in a form which allows them to be personally identified, the Researcher must ensure that the information will be used for research purposes only, OR, if requested, to rectify a customer complaint. Such personally identified information must not be used for subsequent non-research purposes such as direct marketing, list-building, credit rating, fund-raising or other marketing activities relating to those individuals.

Providing Information about Research Agency/Sponsor
- Respondents must be given the opportunity to find out more about the research agency or sponsor carrying out the study, by giving them the name of the organization together with contact information (postal address, telephone number, agency’s website or e-mail address) or a registration number and the MRIA’s toll-free telephone number for any research registered in the MRIA’s Research Registration System. A corresponding hyperlink is recommended for this purpose.

Links to Privacy and Cookie Policies
- Any links to data protection, privacy policy or cookie policy statements should be given at the start of the questionnaire.
4) Privacy

Disclosure of Privacy Policies
- Canadian organizations that collect personal information are required by law to have a privacy policy. Marketing Research and Intelligence Association members carrying out research on the Internet should post their privacy policy on their website, with a Privacy hyperlink from every page of the website. The order and wording of the published privacy statement is a matter for each member to decide according to its specific circumstances.
- The MRIA Privacy Protection Handbook includes a sample corporate privacy policy. An example of the MRIA privacy statement for Internet research, and the variants depending on the sampling methodology, follows at the end of this section.

Respondent’s E-mail Address is Personal Information
- A Respondent’s e-mail address is personal information and must be protected in the same way as other identifiers.

Disclosure of the Use of Cookies, Log Files or Software
- Researchers must have a readily accessible policy statement concerning the use of cookies, log files and, if applicable, software. This statement may be either included in their privacy policy or it may appear in a separate document. Software must not be installed on Respondents’ computers without their knowledge or consent. In addition, Respondents must be able to remove the Researcher’s software easily from their machines (e.g., for Windows users, the software must appear in the Add/Remove Programs folder in their Control Panel).

Deletion of Respondent’s Record
- Respondents are entitled to ask that part or all of the record of their interview be destroyed or deleted, and the Researcher should conform to any such request where reasonable.

5) Interviewing Children and Young People

General
- Children may be familiar with using the Internet, but research has found them to be naïve and trusting, happily disclosing information about themselves or their households without realizing the implications of doing so. Parent groups, consumer groups and legislators are particularly concerned about potential exploitation of children on the Internet, and it is for this reason that guidelines place greater burdens on Researchers than would be the case in adult research. While validating respondent identity and age can be a challenge in online research, it is important that researchers make every effort to do so.
- A “child” is to be defined as “under the age of 13” and a “young person” as “aged 13-17.”

Observation of Laws and National Codes
- Researchers must observe all relevant laws and national codes specifically relating to children and young people, although it is recognized that the identification of children and young people is not possible with certainty on the Internet at this time.

Conformance to Industry Guidelines
- Researchers must use their best endeavours to ensure that they conform to the requirements of this guideline, for example by introducing special contacting procedures to secure the permission of a parent, legal guardian, or other responsible adult before carrying out an interview with children under 13. Where necessary, Researchers should consult the MRIA for advice.

Adult Consent
- Permission of a responsible adult must be obtained before interviewing children under the age of 13 years.

Consent
- Researchers must ensure that the principle of consent is met, so if Internet research is conducted, special measures must be taken to ensure verifiable and explicit consent.

Process for Obtaining Consent: Online Panels or Other Approved Lists
In cases where interviews with children of adult online panelists or children of other online list members are desired, the following measures must be implemented. The e-mail invitation to the adult panelist or list member must contain the following:
a) A notice stipulating that the online survey is intended for the child within the household
b) Name and contact details of the agency/agencies
c) The nature of the data to be collected from the child
d) An explanation of how the data will be used

Process for Obtaining Consent: Recruiting Children from Websites
In cases where children are being recruited from websites, the following measures must be implemented:
1) For websites aimed at children, a notice to children, informing them of the requirement for adult consent, must be shown at the beginning of the survey. This notice should be clear and prominent and must include an explanation of the subject matter and nature of the research and details of the agency undertaking it, with contact information. To obtain consent, the notice must request the adult’s contact information (e.g., e-mail address). It must also refer to the fact that consent will be verified.
2) Questionnaires on websites aimed at children must require a child to give their age before any other information is requested. If the age given is less than 13 years, the child must be excluded from giving further information until the appropriate consent has been obtained.
3) For websites aimed at adults, a notice to parent or guardian, seeking their consent for their child to be asked to participate in the research, must be posted on the website. This notice must include:
   a) A heading explaining that this is a notice for parents
   b) Name and contact details of the agency/agencies and the name of the Client (if the Client agrees)
   c) The nature of the data to be collected from the child
   d) An explanation of how the data will be used
   e) A description of the procedure for giving and verifying consent
   f) A request for a parent’s contact e-mail address, address or phone number for verification of consent

Parent Contact Details
- It is permissible to ask children to provide contact details for their parents in order for consent to be sought, as long as this purpose is made clear in the request for information.

Acceptable Forms of Consent for Classic Research
- Where personal information collected from children will only be used for classic research purposes and no personal data will be passed on for any other purpose, a return e-mail from a parent or guardian giving their consent is acceptable, as long as additional steps are taken to ensure that the consent actually came from a parent – for example, following up with an e-mail, letter or phone call.
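The age-screening requirement described above (ask for age first, and block under-13 respondents until verifiable parental consent exists) amounts to a simple gate at the start of the questionnaire. A minimal sketch, with hypothetical step names:

```python
def screen_age(age, parental_consent_verified=False):
    """Return the next survey step for a child-oriented questionnaire.
    Ages under 13 are blocked until verified parental consent exists."""
    if age < 13 and not parental_consent_verified:
        return "request_parental_consent"   # collect adult contact info only
    return "continue_survey"

assert screen_age(15) == "continue_survey"
assert screen_age(10) == "request_parental_consent"
assert screen_age(10, parental_consent_verified=True) == "continue_survey"
```

The gate also respects the exemption noted below: collecting the age itself, and a parent's contact details for the consent request, does not require prior consent.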
Situations When Parental Consent is NOT Required

Prior parental consent will not be required to:
1) Collect a child’s or parent’s e-mail address solely to provide notice of data collection and request consent.
2) Collect a child’s age for screening and exclusion purposes. If this screening leads to the decision that a child does qualify for interview, parental consent must then be sought to continue with the interview.

E-Mails to Children
- E-mail communications must not be addressed to children without verifiable and explicit prior consent.

Types of Information Collected
- Personal information relating to other people (for example, parents) must not be collected from children.

Sensitive Questions
- Asking questions on topics generally regarded as sensitive should be avoided wherever possible and in any case handled with extreme care.

Policies Must be Understandable
- All data protection, privacy policy, consent and other notices must be capable of being understood by children.
Sample Privacy Policy Statement
and Three Major Variants
MRIA Sample Privacy Policy Statement
NameOfCompany would like to thank you for
taking part in this Market Research survey about
GeneralDescriptionOfTheSurvey. We are not trying to
sell or promote anything. We are interested only in
your opinions. The answers you give us will be treated
as Confidential unless you have given your consent to
the contrary. In the relatively few instances where
we ask you for permission to pass data on in a form
which allows you to be personally identified, we will
ensure that the information will be used only for the
purposes stated.
We will not send you unsolicited mail or pass on
your e-mail addresses to others for this purpose. As
with all forms of marketing and opinion research,
your co-operation is voluntary at all times. No
personal information is sought from or about you,
without your prior knowledge and agreement. You are
entitled at any stage of the interview, or subsequently,
to ask that part or all of the record of your interview
be destroyed or deleted. Wherever reasonable and
practical we will carry out such a request.
We use cookies and other similar devices sparingly
and only for quality control, validation and to prevent
bothersome repeat surveying. You can configure your
Browser to notify you when cookies are being placed
on your computer. You can also delete cookies by
adjusting your browser settings.
We automatically capture information about your
browser type for the sole purpose of delivering an
interview best suited to your software.
Our web site has security measures in place
to protect against the loss, misuse, and alteration of the
information under our control. Only certain
employees have access to the information you provide
us with. They have access only for data analysis and
quality control purposes.
You can contact us at [email protected]
to discuss any problems with this survey. You can find
out more about us at www.ourwebsite.com. We are
members of the MARKETING RESEARCH AND
INTELLIGENCE ASSOCIATION and follow their
code of conduct for market research.
Variant #1: Panels
Surveys where the respondent has, or is in the process of, voluntarily joining a panel for market research purposes
- The sign-up process – describe the registration process.
- The panel database – describe information that is stored for panel management, control and sample selection.
- Frequency of contact – give some statement of how often or for how long.
- Password identity system – if it is used, describe how it works and the security it offers.
- Opt-in and opt-out policies for communications other than surveys, such as panel maintenance or reward schemes. State what communications will be sent, which are optional, and clarify any potential communications for third parties.
- Reward – explain any reward scheme and if this forms the basis for a contract.

Variant #2: List of E-Mail Addresses
Surveys where the research agency has been given or has acquired a list of e-mail addresses in order to send invitations to participate in a survey
- Source of information – clear statement of where the e-mail address came from, or that this will be included in the information given in the survey itself. Also, if a list has been provided, state that the list provider has verified to the research agency that the individuals listed have a reasonable expectation that they will receive e-mail contact.
- Spamming – will not knowingly send e-mail to people who have not consented to helping in research. May include a mechanism for removing your name from future surveys or notifying the provider of the e-mail list.
- Password identity system – if it is used, describe how it works and the security it offers.
- Stop-and-start interview process – if this is possible, explain how, and any information stored to allow it.

Variant #3: Intercept Surveys
Intercept surveys where the respondent is selected as a 1 in n sample of visitors to a web site
- Explain intercept technique – random selection.
- Password identity system – if it is used, describe how it works and the security it offers.
- Stop-and-start interview process – if this is possible, explain how, and any information stored to allow it.
- Invisible processing – describe any invisible processing used to make the intercept or re-direct respondents to the survey.
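The “1 in n sample of visitors” selection that defines an intercept survey is typically implemented as a systematic sample with a random start. A minimal sketch (the class and parameter names are illustrative, not from any particular product):

```python
import random

class InterceptSampler:
    """Invite every nth site visitor, with a random start so the first
    invitee is not always visitor #1 (systematic 1-in-n selection)."""
    def __init__(self, n, seed=None):
        self.n = n
        # Random offset in [0, n) chooses which visitor in each block is invited
        self.offset = random.Random(seed).randrange(n)
        self.count = 0

    def should_invite(self):
        invite = self.count % self.n == self.offset
        self.count += 1
        return invite

sampler = InterceptSampler(n=50, seed=42)
invited = sum(sampler.should_invite() for _ in range(5000))
print(invited)  # exactly 5000 / 50 = 100 invitations
```

The random start matters: a fixed start of zero would systematically over-represent whatever kind of visitor tends to arrive at block boundaries (for example, right after a marketing e-mail goes out).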
Use of E-mail for Identifying or Contacting Potential Survey Respondents
The standards recommended by the Advisory Panel
are closely modeled on the MRIA standards regarding
the use of e-mail for identifying or contacting potential
survey respondents.
The MRIA standards use the terminology
of “consumers” and “business-to-business”. The
following are examples of how these terms are to be
interpreted in a public opinion research context:
- “Business-to-business research”: This includes surveys of businesses, but also could include other types of organizations – e.g., NGOs (non-governmental organizations), other government organizations, and so forth. This research would also include surveys of professionals on subjects relevant to their profession. Examples of professionals who might be surveyed include self-employed entrepreneurs, small office/home office workers (SOHOs), scientists, doctors or nurses, and so forth.
- “Consumers”: This includes the general public, as well as employees in an employee survey.
There were several additions made to the
standards as follows:
Permission requirements in the case of
Government of Canada lists: Some Panelists
recommended it be made clear that, in the case
of GC lists provided to third-party research
providers, the appropriate permission for
such usage must exist. Their concerns were that
(a) this may not always have been the case in the
past, and (b) it is important moving forward for
departments/agencies to consider seeking such
permission as they gather client information, in
order to enable them to use the information for
research purposes at a later point in time.
Standard 1.1.2 was added to address
these concerns.
Under what circumstances “collecting e-mail addresses from public domains” qualifies as “subterfuge”, and when it does not: In the MRIA wording of Standard 3.1, the linkage between “subterfuge” and “collecting e-mail addresses from public domains” triggered some concerns that this language could be overly restrictive. A sentence was added to Standard 3.1 to define the circumstances in which it would be acceptable to collect e-mail addresses from public domains.

Meaning of the term “agent”: In Standard 4.1 on Data Collection and Recruitment Techniques, the MRIA language referred to “agents” without any accompanying definition. Some Panelists suggested adding clarification of the term. Terminology from a similar standard suggested by IMRO (Interactive Marketing Research Organization) – namely, “The use of Spambots, Spiders, Sniffers or other ‘agents’ that collect personal information without the respondents’ explicit awareness” – was added to address these concerns. Note: Spambots, etc. refer to automated methods or programs which collect e-mail addresses or files from the Internet.

STANDARDS
FOR USE OF E-MAIL
1.1 Unsolicited E-mail
- Researchers must not use unsolicited e-mail to invite consumers to participate in research. Researchers must verify that consumers contacted for research by e-mail have a reasonable expectation that they will receive e-mail contact for research, irrespective of the source of the list (i.e., Client, list owner, etc.). Such agreement can be assumed when all the following conditions exist:
1) A substantive pre-existing relationship exists between the individuals contacted and the research organization, the Client, or the list owners contracting the research (the latter being so identified).
2) Individuals have a reasonable expectation, based on the pre-existing relationship, that they may be contacted for research. In the case of lists provided by the Government of Canada to third-party marketing research providers, there must be legislative authority to provide information for this purpose, or those on the list must have previously given permission for their information to be used for this purpose.
3) Individuals are offered the choice to be removed from future e-mail contact in each invitation.
4) The invitation list excludes all individuals who have previously taken the appropriate and timely steps to request the list owner to remove them.
2.1 Business-to-Business Research
- Unsolicited survey invitation e-mails may be sent to business-to-business research Respondents provided that Researchers comply with points 3 and 4 in clause 1.1 above, as well as the anti-spam policies of their Internet service providers and e-mail service providers.
3.1 Collection of E-mail Addresses
- Research organizations are prohibited from using any subterfuge in obtaining e-mail addresses of potential respondents, such as collecting e-mail addresses from public domains, using technologies or techniques to collect e-mail addresses without individuals’ awareness, and collecting e-mail addresses under the guise of some other activity. An exception to the above is that it is acceptable to collect e-mail addresses from public domains for business-to-business research relevant to the individuals’ professional interests.
4.1 Data Collection & Recruitment Techniques
- Researchers must not make use of surreptitious, misleading or unsolicited data collection or recruitment techniques – including using spambots, spiders, sniffers or other ‘agents’ that collect personal information without the Respondent’s explicit awareness, spamming, scamming or baiting Respondents.
5.1 Misleading E-mail Return Addresses
- Research organizations are prohibited from using false or misleading return e-mail addresses, including spoofing the “From” label of e-mail messages, when recruiting Respondents over the Internet.
6.1 Opt-out
- A Respondent must be able to refuse participation in the survey via a suitable option, and to refuse further contact by e-mail in connection with the survey.
Access Panels

This section details the recommendations of the Advisory Panel related to access panels, including:
- Standards for access panel management
- Standards and guidelines with respect to use of multiple panels in the execution of a survey
- Guidelines pertaining to category exclusion when using access panel samples
Standards for Access
Panel Management
The Advisory Panel recommended adopting the
ESOMAR standards for access panel management
with only relatively minor modifications.
The Panel also recommended these standards
be applied both to commercial access panels and to
any access panels established and operated by the
Government of Canada.
Notably, some of the ESOMAR requirements
are of the form “have a policy”, but ESOMAR does
not state what the policy should be. For reference,
the following are areas where ESOMAR used
this approach:
- Definition of an active panel member (Section 1.2)
- Frequency of selection for participation in surveys (3.3)
- Maximum number of research projects in which a panel member can participate (3.4)
- Validation checks that a panel member did indeed answer a survey (4.4)
- Frequency of updating panel member background information (5.2)
- How long active panelists are allowed to remain on a panel before being removed (5.3)
Panelists supported the ESOMAR “have a policy”
approach because of the flexibility this gives in terms
of permitting different approaches to dealing with
the various issues related to panel management. This
resulted in two issues that needed to be addressed:
- Some Panel members were concerned that the use of the word "policy" in a GC context could be misinterpreted. To address this, the word "rule" has been substituted for "policy."
- In the case of commercial access panels, suppliers' "rules" should be subject to review (e.g., through the RFSO process), in order that the GC can apply quality assessments to the rules.
The Panel agreed to the following recommendation:
The Advisory Panel recommends these access panel
management standards with the understanding that,
for commercial access panels, the standards which
specify a requirement to have a rule but do not specify
the content of the rule will be subject to quality
assessment in a competitive bidding process.
On the matter of applying category exclusion to
samples based on access panels, the Advisory Panel
recommended two guidelines: one that encourages
consideration of applying category exclusion, and one
that suggests at least a 6-month exclusionary period
for most types of surveys (see Standard 4.1 for the
wording of the guidelines).
Note that the possibility of applying category
exclusion is supported in the recommended standards.
Specifically, Standard 4.1 requires the following:
Panel owners should keep detailed records for each panel member of:
- The research projects or surveys for which they have been sampled
- The nature of the panelist's response to each project or survey
With records of this sort on each panelist, the
ability exists to exclude panelists from a survey who
have recently been sampled for or completed another
survey on a similar topic.
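For illustration, a category-exclusion filter built on such per-panelist records might be sketched as follows. This is a hypothetical sketch: the record layout, topic labels, and the 6-month (183-day) window are assumptions for the example, not prescribed by the standards.

```python
from datetime import date, timedelta

def eligible_for_survey(panelists, topic, today, exclusion_days=183):
    """Return IDs of panelists eligible for a new survey on `topic`,
    excluding anyone sampled for a similar topic within the window."""
    cutoff = today - timedelta(days=exclusion_days)
    eligible = []
    for p in panelists:
        # Each history entry records the survey topic and sampling date.
        recent_same_topic = any(
            h["topic"] == topic and h["sampled_on"] >= cutoff
            for h in p["history"]
        )
        if not recent_same_topic:
            eligible.append(p["id"])
    return eligible

panel = [
    {"id": 1, "history": [{"topic": "health", "sampled_on": date(2008, 1, 10)}]},
    {"id": 2, "history": [{"topic": "transport", "sampled_on": date(2008, 3, 5)}]},
]
# Drawing a health-survey sample on 2008-04-01 excludes panelist 1.
print(eligible_for_survey(panel, "health", date(2008, 4, 1)))  # [2]
```

The same records also support the longitudinal exception noted later: shortening `exclusion_days` for trend studies is a one-parameter change.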
ESOMAR provides the following commentary as
to why category exclusion should be a consideration:
Response rates and data quality may suffer if
panelists are offered repeated opportunities to
complete interviews on the same topic. Panelists may
show signs of conditioning (repeated interviewing
influencing subsequent opinions). These concerns can
be mitigated by ensuring the panelists are provided
an appropriate number of invitations and including a
mix of topics.
Providing a mix of different subject matter opportunities, and/or creating topical sub-panels so that panelists are explicitly recruited as subject matter experts, can help response rates and prevent either overuse or serial disqualifications.
STANDARDS
FOR INTERNET ACCESS PANEL
MANAGEMENT
1.0 Panel Size
1.1
The size of the panel should be stated honestly
and be based on the number of individuals who
have personally joined the panel. Even though the
panel owner might have data on other household
members in panelists’ homes, the panel size
should not be calculated to include additional
household members who have not actively joined
the panel.
The claimed size of the panel should be based
on active panel members.
1.2
Panel owners should have a clear and published definition of an active panel member. The size of the active panel will normally be lower than the total number of panelists. The following definition is recommended, but the final definition rests with the panel owner:
An individual panel member whose set of background variables is complete and up to date (see point 3 below) and who in the preceding 12 months has either:
a) Joined the panel following procedures set out in the Panel Recruitment section below
b) Co-operated by completing an on-line questionnaire (including replying to a screener)
c) Indicated to the panel owner that they wish to remain a member of the panel
2.0 Panel Recruitment
2.1
Panel members must be told that they are a member of a panel and be asked to voluntarily and actively indicate that they wish to be on the panel. A double opt-in recruitment process is recommended, particularly where respondents are recruited on-line. This procedure requires the respondent to initiate an approach to the panel owner; the panel owner replies confirming the panel details and double-checks that the respondent is who they seem to be and that they do wish to join. The respondent then replies to complete the double opt-in and joins the panel.
There may be circumstances when the panel owner already has e-mail addresses for potential panelists, where a simplified opt-in process is acceptable. This would start with an e-mail from the panel owner, followed by the panel member replying or visiting a web site to enrol.
The panel owner should retain documentary proof (either hard copy or electronic) of each panel member's agreement to join the panel.
2.2
The panel owner should retain documentary proof of how each panel member was recruited – from what type of source their name and e-mail address were obtained – including, where relevant, the web site from which they were invited to join the panel. In particular, respondents who have been actively recruited through a traditional sampling approach and invited to join the panel should be identified. An overall analysis of the type of recruitment source for the active panel, or for any sample drawn from it, should be available to potential buyers. Panel owners may protect commercially sensitive information about the exact sources used.
2.3
The panel owner should have documented procedures for checking that new panel members are not already panel members, and thereby avoid duplication in the panel.
2.4
On recruitment, all panel members should provide a set of basic descriptive information about themselves in order that the representativeness of the panel can be assessed and that targeted or stratified samples can be drawn.
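The duplicate check required during recruitment could be as simple as canonicalizing e-mail addresses before enrolment. A minimal sketch (illustrative only; a production system would typically also compare names, postal addresses and other identifiers):

```python
def normalize_email(email):
    """Canonicalize an e-mail address for duplicate comparison."""
    return email.strip().lower()

def enrol(panel_emails, new_email):
    """Add a recruit only if not already a panel member.

    Returns True if the recruit was enrolled, False if rejected
    as a duplicate of an existing member."""
    key = normalize_email(new_email)
    if key in panel_emails:
        return False  # already a member: reject to avoid duplication
    panel_emails.add(key)
    return True

members = {"jane@example.com"}
print(enrol(members, "Jane@Example.com "))   # False: duplicate
print(enrol(members, "pierre@example.com"))  # True: new member
```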
2.5
There is no prescribed mandatory minimum set of background variables that should be recorded about each active panel member, and the nature of appropriate background variables can differ for different types of panels – e.g., for a general public panel versus a business panel. However, depending on the panel target group, the following variables can have valuable roles in strategies to avoid duplication or clarify individual identity, stratification of samples for research projects, development of sample weights, and conducting analyses of potential nonresponse bias:
- Sex
- Language
- For a business panel: occupation or position, type of business, business size
- Level of education
- Household size
- Region
- Location (preferably including the first 3 postal code digits)
- Age (date of birth)
- Presence of children in household
- Working status
- Weight of Internet usage (hours per week)
- Type of Internet access (dial-up vs. high-speed, as this can make a difference in ability or willingness to do online surveys)
- Use of adaptive technologies
Wherever possible, the coding of background variables should be compatible with coding systems used by Statistics Canada.
2.6
All panel members must be given a clear and unambiguous guarantee that the access panel is used solely for the purpose of conducting market research surveys (i.e., there will be no attempt to sell products or approach panel members for telemarketing or any other form of marketing activity).
3.0 Project Management
3.1
Panel owners should have a published list of background variables for which data are available from all panel members. They should also have a clearly defined list of data about panelists that can be used in the definition of a sample to be selected from the panel. This list should include both background variables provided by all panel members and items of panelist history, such as recency of selection for a previous project and co-operation history.
3.2
Panel owners should provide to clients a clear and honest description of the nature of their panel – the population it covers – and be transparent about partnership arrangements with other panel owners.
3.3
Panel owners should have a published rule about how frequently they select individual panel members to participate in surveys.
3.4
Panel owners should have a clearly stated rule about the maximum number of research projects, or the maximum time commitment, for which a panel member will be selected to participate in any given period of time.
3.5
Panel owners should maintain a profile of each panelist that can be used to identify specific panelists from the entire panel who should or should not be asked to participate as part of a specific project sample.
3.6
Panel owners should have a clearly defined rule on how they reward panelists. The research buyer should be informed of the reward method to be used on their project.
3.7
Panel owners should provide a comprehensive response analysis at the end of each survey. This should also include a copy of the solicitation e-mail sent to panel members and the full wording of any screening or introductory questions put to panelists before the main survey started.
The following content is recommended for inclusion in a project technical summary:
- Original invite text(s)
- Date(s) of invites, and date(s) of reminder(s)
- Date of closing fieldwork (days in field)
- Panel used (proprietary or third party, and amounts)
- Response based on the total number of invites (% or full numbers) per sample drawn (country, questionnaire):
  - % of questionnaires opened
  - % of questionnaires completed (including screen-outs)
  - % in target group (based on quotas)
  - % validated (the rest is cleaned out, if applicable)
- A short description of how the response and the project relate to the standard criteria – is it less or more than usual? – and any peculiarities with the survey
3.8
Panel owners should have documented procedures to ensure that a panel member can answer a survey for which they have been selected only once.
4.0 Panel Monitoring
4.1
Panel owners should keep detailed records for each panel member of:
- The research projects or surveys for which they have been sampled
- The nature of the panelist's response to each project or survey
The records should be stored in such a way that it is easy to determine:
- When a panelist was last selected for a survey
- When a panelist last co-operated with a survey
- The number of surveys the panelist has completed in any given period of time
Guideline 1: Consider excluding panel members who have recently participated in a survey on the same subject (a practice called "category exclusion").
Guideline 2: When category exclusion is applied, consider using an exclusionary period of at least 6 months. Note that 6 months may not be appropriate in all circumstances – e.g., in longitudinal panel surveys where clients may wish to obtain trend data on a particular topic on a more frequent basis.
4.2
Panel owners should regularly calculate, and be able to make available to potential clients, key data about their panel, including:
- Average number of projects selected for, per panelist per period
- Maximum number of projects selected for, per panelist per period
- Average number of completed questionnaires per panelist per period
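The key figures called for in 4.2 can be derived directly from the per-panelist records in 4.1. A minimal sketch (the record layout and sequential panelist IDs are assumptions for the example):

```python
def panel_key_data(selections, completions, n_panelists):
    """Compute per-period panel metrics from selection/completion counts.

    `selections` and `completions` map panelist ID -> count for the
    period; panelists with no activity simply count as zero."""
    per_panelist = [selections.get(p, 0) for p in range(1, n_panelists + 1)]
    completed = [completions.get(p, 0) for p in range(1, n_panelists + 1)]
    return {
        "avg_projects_selected": sum(per_panelist) / n_panelists,
        "max_projects_selected": max(per_panelist),
        "avg_completes": sum(completed) / n_panelists,
    }

# Four panelists; only two were selected this period.
stats = panel_key_data({1: 4, 2: 2}, {1: 3, 2: 2}, n_panelists=4)
print(stats)  # avg selected 1.5, max selected 4, avg completes 1.25
```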
4.3
Where panel owners adopt an electronic storage system that retains all responses given by a respondent (across many surveys), all data collected exclusively on behalf of a client must be treated as confidential and may not be used in the future on behalf of a second client, either in the selection of samples or in the analysis of data.
4.4
Panel owners should have a clear and published rule about validation checks. They should maintain records of any checks they carry out to validate that the panel member did indeed answer a survey.
5.0 Panel Maintenance
5.1
Panel owners should regularly remove non-active members from the panel. Each panel member's record of participation should be reviewed regularly, and the panel owner should have clearly defined rules for when to remove panelists as non-active based on their cooperation history in the preceding period. Panel members who appear to be inactive because they have not been selected for a survey since the last review of their status should be contacted in order to confirm their willingness to continue as panel members.
5.2
Panel owners should have a clearly defined rule on how frequently panel members will be asked to update their background information. This rule should also define whether or not changes in circumstances discovered during survey projects will be recorded in the data record.
5.3
Panel owners should have a clearly defined rule on how long they will allow an active panelist to remain on the panel before they are removed and replaced by new panel members.
6.0 Privacy/Data Protection
6.1
The panel must be managed in accordance with local data protection laws and, if legally required, should be registered with the appropriate authority. To ensure coherent, cost-effective management of public opinion research through the Government of Canada, institutions must ensure that the principles of fair information practices embodied in Sections 4 to 8 of the Privacy Act, as well as in the Personal Information Protection and Electronic Documents Act, are respected in any public opinion research.
6.2
Panel members must, on their request, be informed which personal data relating to them are stored. Any personal data that panel members indicate are incorrect or obsolete must be corrected or deleted.
6.3
Panel members must be given a simple and effective method for leaving the panel whenever they choose to. Panel members who have stated that they wish to leave the panel must not be selected for any further surveys and must be removed from the panel as soon as practicable. Further, their e-mail addresses cannot be traded, sold or given away to another research supplier.
Standards and Guidelines
With Respect to Use of Multiple Panels
in the Execution of a Survey
It is sometimes the case that a panel provider will supplement the sample from the primary panel with samples from other panels. These other panels might be specialty panels operated by the same panel company, or panels operated by other companies. This use of multiple panels can occur, for example, when particularly large sample sizes are required, and/or when the target population includes lower incidence groups.
The issue for the GC is ensuring data quality is maintained when other panels are used which have no direct contractual arrangement with the GC. For example, the GC might issue a Standing Offer contract to Panel Company A based on its panel management quality standards, but a survey conducted by Panel Company A might also include samples from Panel Companies B and C, who are not on the Standing Offer.
One possible solution to the issues raised by the use of multiple panels is simply to forbid the practice. However, the Advisory Panel did not recommend this, but rather acknowledged that in some circumstances it can be necessary and useful to use multiple panels in order to facilitate completion of a project.
The Panel agreed that:
- The identity of other panels must be disclosed before they are used.
- All panels used must conform to the Government of Canada standards with regard to access panel management, and it is the responsibility of the lead panel provider to ensure this if other panels are to be used. If secondary panels do not fully comply with GC standards, then areas of noncompliance must be disclosed and discussed prior to the use of the panels.
The Advisory Panel recommended the following standards and guideline.
STANDARDS
USE OF MULTIPLE PANELS
- If it is determined that multiple access panels will be used for an online survey, it is still the responsibility of the lead panel provider to ensure that the standards and guidelines for Sampling Procedures and for Response/Success Rate are adhered to for the survey as a whole. This requires the lead panel provider to have sufficient information and control over the other panel samples to comply with these standards.
- The identity of the other panels must be disclosed and agreed to prior to the use of these other panels.
- Any other panels used must conform to MRIA requirements regarding use of unsolicited e-mail.
- Any other panels used must conform to the Government of Canada standards with respect to panel management. If a panel does not fully comply with these standards, the areas of noncompliance must be disclosed prior to use of the panel, and agreement obtained to use the panel.
- The distribution of the initial and completed samples across the different panels must be described.
- A description must be given of the steps taken to avoid duplication in the sample of individuals who are members of more than one panel.
- Panel source must be part of the data record for each respondent.
- Both prior to fieldwork, including in the research proposal, and in the survey report, there must be discussion of any data quality issues that might arise because of the use of multiple panels for a survey.
GUIDELINE
USE OF MULTIPLE PANELS
- It is recommended that selected data gathered during survey administration be analyzed in order to identify the nature of any "panel effects" that might exist. Such data could include, for example, response/success rate, and results for selected survey variables.
Incentives/Honoraria
The starting point for the Online Advisory Panel was the section in the Telephone report on Incentives/Honoraria. The Panel generally agreed with these existing guidelines, but a few modifications were also recommended to:
- Address the need for transparency about incentives at several stages in the research process. This included adding a guideline related to post-survey transparency that names of contest winners be posted, and that appropriate records be kept in the event an audit is requested.
- Add a standard and reword the guidelines to reflect that incentives are often expected by respondents to online surveys, particularly for those surveys conducted by research companies.
STANDARD
FOR INCENTIVES/HONORARIA
- The details of any incentives/honoraria to be used for an online survey must be provided in both the proposal and survey documentation, including:
  - The type of incentive/honoraria (e.g., monetary, non-monetary)
  - The nature of the incentive – e.g., prizes, points, donations, direct payments, etc.
  - The estimated dollar value of the incentives to be disbursed
GUIDELINES
FOR INCENTIVES/HONORARIA
- Monetary incentives should be used only when there are strong reasons to believe they would substantially improve the response rate or the quality of responses. Note that in online surveys, particularly those conducted by research companies, respondents have often come to expect that an incentive will be offered.
- The decision to use respondent incentives (monetary or non-monetary) or honoraria to gain respondent cooperation should carefully weigh the potential for nonresponse bias in the study against the potential that the use of incentives/honoraria can affect the sample composition (i.e., who agrees to participate in the survey) and/or the possibility that responses to some questions in the survey may be influenced.
- The use of respondent incentives (monetary or non-monetary) or honoraria, and the magnitude of incentives or honoraria, may be considered as a strategy to improve response rates under one or a combination of these circumstances:
  - When using sample sources that have pre-existing commitments to potential respondents with regard to incentives/honoraria (e.g., commercial access panels)
  - When response burden is high or exceptional effort is required on the part of the respondent (e.g., when interviews exceed 20 minutes in length, respondents are required to do some preparatory work for the online survey, the study is complex)
  - When the target population is low incidence (e.g., 5% or less of the population) or the population size is very limited
  - When the population is made up of hard-to-reach target groups (e.g., physicians, CEOs, recent immigrants/newcomers)
  - When it can be demonstrated that:
    a) There will be a cost saving – e.g., incentives/honoraria are less expensive than large numbers of re-contacts
    b) The use of incentives/honoraria is required to meet the study schedule
  Note: Under no circumstances are employees of the Government of Canada to receive monetary incentives or honoraria for participating in GC-sponsored research in which GC employees are a specified target group.
- Consider the use of non-monetary incentives wherever possible and appropriate. These can include colouring books for children's surveys, or a copy of the survey findings (e.g., the executive summary, survey highlights) for special-audience research. However, the type of incentive selected must in no way influence potential answers to survey questions.
- Monetary incentives/honoraria in the form of "cash" disbursements (either directly to the respondent or, for example, to a charity of their choice), gift certificates and entries in prize draws can be considered. The amount of the cash disbursement and gift certificates should be kept as low as possible, without compromising the effectiveness of the incentive/honorarium.
- The use of incentives (monetary or non-monetary) or honoraria for a survey will also require decisions and documentation as to:
  - When incentives/honoraria will be provided, whether at initial contact or post-survey
  - To whom incentives/honoraria will be given, whether all contacts (whether or not they complete the survey) or only those who participate in the survey
  - Whether incentives/honoraria will be offered only at later stages of the field process rather than for all interviews; note, however, that great caution must be used in offering different incentive amounts to different respondents
  - How the incentives/honoraria will be paid out/distributed by the research firm or the Government of Canada, and the associated cost for this disbursement (e.g., professional time and direct expenses)
- When prizes or lotteries are used, the pertinent
provincial and federal legal requirements must
be met. Where consistent with these legal
requirements, the name of the winners should
be posted on the website of the responsible
organization. It is recommended, out of concern
for the privacy of these individuals, that they
be identified only by their first name and the
first letter of their last name. Records of how
prizes or lottery amounts were awarded and of
all disbursements must be maintained by the
third party provider or government department
responsible for the survey. These records should
be available for audit to confirm that prizes
or lottery amounts were awarded and
distributed appropriately.
Fieldwork Monitoring
The Panel was asked to consider whether or not standards or guidelines are required related to:
- Monitoring online surveys once the fieldwork is underway, in order to ensure:
  - The survey questionnaire is being administered as intended
  - Interview data are being recorded appropriately
- Detecting and dealing with satisficing, that is:
  - Flatlining, straightlining or replicated answer patterns
  - Rushed answers
  - Illogical and unreasoned answers
Monitoring of
Online Survey Fieldwork
There was general consensus by the Panel that monitoring during the fieldwork is important to detect and address any issues while the survey is in the field, rather than once the fieldwork is completed.
The Panel recommended the following standard and guidelines:
STANDARD
FOR MONITORING OF
ONLINE SURVEY FIELDWORK
- Each online survey must be closely monitored throughout the fieldwork to ensure that responses are valid, that the survey is administered consistently throughout the data collection period, and that the responses are being recorded accurately.
GUIDELINES
FOR MONITORING OF
ONLINE SURVEY FIELDWORK
- Monitor the following aspects of fieldwork in order to identify any adjustments that might be needed during the fieldwork period:
  1) Survey invitation – for example, when attempted contacts related to participation are made and when they are responded to
  2) Survey response – for example:
     - Analysis of drop-offs: causes, respondent characteristics, at what points in the questionnaire drop-offs occur
     - Frequent (e.g., daily) review of completed interviews in terms of respondent sources and respondent characteristics
  3) Survey experience – for example:
     - Page loading times
     - Respondent contact with "help" desk/technical support: number and reasons
- Monitor the quality control of all aspects of the survey experience from the respondent's perspective – for example, by having in-house staff act as "proxy" quality assurance respondents during the course of the fieldwork, or by other methods.
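The drop-off analysis suggested above can be as simple as tallying, for each non-completer, the last question answered. An illustrative sketch (the question IDs and record layout are hypothetical):

```python
from collections import Counter

def dropoff_points(interviews):
    """Count, for incomplete interviews, the question at which
    the respondent stopped answering."""
    drops = Counter()
    for iv in interviews:
        if not iv["completed"]:
            drops[iv["last_question"]] += 1
    return drops

records = [
    {"completed": True,  "last_question": "Q20"},
    {"completed": False, "last_question": "Q7"},
    {"completed": False, "last_question": "Q7"},
    {"completed": False, "last_question": "Q12"},
]
# Q7 stands out as a likely problem question.
print(dropoff_points(records).most_common())  # [('Q7', 2), ('Q12', 1)]
```

Reviewing such a tally daily, as the guideline suggests, lets a wording or routing problem be corrected while the survey is still in the field.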
Detecting and Dealing
with Satisficing
Satisficing refers to completing a questionnaire (or
part of a questionnaire) with little thought or care.
The ideal respondent “optimizes” as s/he answers
every question, conducting a complete and unbiased
search of memory and full integration of retrieved
information. By contrast, satisficers' responses are formulated with reduced thoughtfulness, careless integration of retrieved information, and haphazard selection of response choices.
The self-administered nature of online surveys
is associated with some risk of satisficing behaviour
– although satisficing is by no means a problem unique
to online surveys.
Satisficing can occur for different reasons – for example:
- The respondent may have something else they want to do, and they want to get through the survey as quickly as possible.
- The respondent may only be "in it for the money" and as a result do as little as possible to get through the survey and receive their incentive.
- The respondent may not find the survey topic interesting or personally relevant, and as a result not think much about the answers to the questions.
- The questionnaire may be poorly designed, and out of dislike or frustration the respondent may hurry through the questionnaire.
- The questionnaire may exceed the cognitive abilities of the respondent, leading to frustration or to seemingly illogical answers.
IMRO (IMRO Guidelines for Best Practices in
Online Sample and Panel Management) provides the
following commentary on satisficing:
In general there are four types of "cheating or satisficing behaviors" that should be screened for.
1) Flatlining, Straightlining or Replicated Answer Patterns: This behavior is demonstrated by people rushing through the survey giving little or no thought to the responses. Typically the survey should contain pattern-recognition algorithms to detect a series of the same answers (e.g., all "4s" on a matrix question, or a replicated zig-zag pattern). Caution should be applied in tossing interviews for pattern replication unless there is little or no chance that the pattern could represent a legitimate opinion.
2) Rushed Answers: Respondents who take a survey at a much faster than normal rate are probably not paying close enough attention to their answers. A mean time to complete should be calculated during pre-testing. Anyone who completes in less than half this time should be considered suspect. Note: when tracking speed in real time, be sure to use the mode rather than the mean in terms of time standards. This is because a respondent may be interrupted and finally complete hours later, radically increasing the mean time to complete.
3) Illogical and Unreasoned Answers: Another problem, related to flatlining, arises when respondents do not take the time to read questions correctly. Because randomly answering questions can escape a pattern-recognition program, additional tests are advisable. The degree to which people are paying attention can be tracked by performing tests on answers that should logically be very different. If, for instance, someone rates something as "too expensive," they should not also rate it as being "too cheap." This type of answer matching must be done on an ad hoc basis and instigated by the survey designer.
4) Automated Survey Responses: Some potential survey cheaters use automated "keystroke replicators" or "field populators," which attempt to recreate many duplicate surveys using a single survey as a template. Panel providers should ensure that technological blocks are in place to detect and defeat these mechanisms prior to admission to the survey itself.
Points for Consideration:
Unlike phone and in-person interviewing, online
research is self-administered. Whenever the survey is
administered without a proctor, some level of error
is more likely to occur. However, it is more likely
that people will be lazy (straight-lining answers,
for example) rather than being outright dishonest.
Pattern recognition and real-time error checking can
be employed to avoid this type of bad survey behavior.
(page 14-15)
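The pattern and consistency checks described in the IMRO commentary might be sketched as follows. This is a hypothetical sketch: the field names and question pairings are assumptions, and deletion decisions would remain judgment calls, as the commentary itself cautions.

```python
def is_straightlined(matrix_answers):
    """Flag a matrix battery in which every item received
    the same rating (e.g., all '4s')."""
    return len(set(matrix_answers)) == 1

def has_contradiction(record, pairs):
    """Flag logically incompatible answer pairs chosen by the survey
    designer, e.g. rating a product both 'too expensive' and 'too cheap'."""
    return any(record.get(q1) == a1 and record.get(q2) == a2
               for q1, a1, q2, a2 in pairs)

resp = {"price_view": "too expensive", "value_view": "too cheap",
        "battery": [4, 4, 4, 4, 4]}
checks = [("price_view", "too expensive", "value_view", "too cheap")]
print(is_straightlined(resp["battery"]))  # True
print(has_contradiction(resp, checks))    # True
```

As the commentary notes, such flags identify suspect records for review; they should not trigger automatic deletion when the flagged pattern could represent a legitimate opinion.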
With regard to the IMRO point about “automated
survey responses”: this refers to a respondent
answering a survey multiple times. A standard has
already been stated for this matter in the section of this
report, Validation of Respondents.
The IMRO commentary was written for access
panel providers. However, satisficing can potentially
be an issue in any online survey, regardless of sample
source. That said, it may be that the risk of satisficing
behaviour differs by survey context and sample source.
For example, it may be that commercial access panels
should be held to a higher standard of screening for
satisficing than some other sample sources:
- Risk: Arguably, the risk of satisficing may be higher in commercial services, because they use respondents who are recruited on the basis of being paid to take surveys, and these respondents are likely completing multiple surveys within a given period of time.
- Resources: Commercial access panels may have invested in the computer and programming resources that would enable them to do some automated screening for satisficing. By contrast, the ability and resources to do automated checking may not be as readily available in other survey contexts; the latter may depend more on manual checking of data records for evidence of satisficing behaviour.
The Panel recommended the following standard and guidelines related to detecting and dealing with satisficing. Note: The Panel also recommended there should be a requirement (a) to at least consider whether and how to deal with satisficing at the research design phase of a project, and (b) to indicate the cost impact of measures that might be taken to deal with satisficing. A standard and guideline related to these two points have been added to Proposal Documentation.
STANDARDS
FOR DETECTING AND DEALING
WITH SATISFICING
“Rushed Answers”
´
Where it is technically feasible to record
questionnaire completion time, each online survey
should set a criterion defining “rushed answers”
based on total time to complete the survey.
The suspect records should be examined, and
judgments made as to whether any of the suspect
records should be deleted from the final sample.
The following requirements also apply:
- The number of records deleted should be
stated in the survey report, together with a
general indication of the bases for deletion
- The final data file should contain an indicator
variable flagging whether or not a record was
classified as “rushed”
- The data records for the deleted cases should
be available upon request
- The final data file must contain a variable
giving the total time to complete the
questionnaire (when measurement of total
completion time is technically feasible)
- The final data file must contain a variable
giving the total number of questions answered
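To illustrate how such a criterion might be operationalized (the timing threshold, field names and grid items below are hypothetical choices, not anything prescribed by the standard), a provider could flag rushed completions, and the straightlined grids discussed in the guidelines that follow, along these lines:

```python
from statistics import median

def flag_satisficing(records, grid_keys, speed_fraction=0.3):
    """Flag records as 'rushed' (completion time far below the median)
    or 'straightlined' (identical answers across a grid of items).
    speed_fraction is an illustrative cutoff, not a prescribed value."""
    times = [r["seconds_to_complete"] for r in records]
    cutoff = speed_fraction * median(times)
    for r in records:
        r["rushed"] = r["seconds_to_complete"] < cutoff
        grid = [r[k] for k in grid_keys]
        r["straightlined"] = len(grid) > 1 and len(set(grid)) == 1
    return records

recs = flag_satisficing(
    [{"seconds_to_complete": 600, "q1": 4, "q2": 2, "q3": 5},
     {"seconds_to_complete": 90,  "q1": 3, "q2": 3, "q3": 3}],
    grid_keys=["q1", "q2", "q3"])
```

The indicator variables this produces correspond to the flags the standard requires in the final data file; the decision to actually delete a flagged record remains a judgment call.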
´ The Advisory Panel on Online Public Opinion Survey Quality
GUIDELINES
FOR DETECTING AND DEALING
WITH SATISFICING
“Flatlining, Straightlining, and Replicated Answer Patterns”
´ Commensurate with risk and resources, it is recommended that survey providers implement procedures to detect flatlining, straightlining and replicated answer patterns, and have criteria for deleting from the final sample respondents for whom data quality is sufficiently suspect. In addition, if detection procedures are implemented:
- There should be a clear, non-technical description of the answer patterns checked for, and the procedures used to detect them; the elimination criteria should be stated in the survey report
- The final data file should contain an indicator
variable flagging whether or not a data record
was classified as suspect
- The data records for the deleted cases should be available upon request

“Illogical and Unreasoned Answers”
´ Where it is judged useful, and judged appropriate in terms of impact on questionnaire length, consider introducing checks for illogical or unreasoned responding. If such checks are incorporated into the survey questionnaire:
- The final data file should contain indicator variables flagging whether or not an instance of illogical or unreasoned responding occurred
- The treatment of data from illogical/unreasoned responders should be described in the survey report
- The data records for any deleted cases should be available upon request

Attempted Recontacts
The Advisory Panel considered standards and guidelines for recontact attempts for surveys:
´ using probability samples, or which are based on attempted census samples
´ using nonprobability samples

Probability Sample/Attempted Census
In a probability survey or attempted census, the role of attempted recontacts of nonrespondents is to improve the response rate, which in turn can lower the risk of nonresponse bias. In general, the greater the importance of a survey, the higher the target response rate should be, and therefore the greater the number of attempted recontacts.
In the case of telephone surveys, the Government of Canada standard is to make a minimum of eight call-back attempts.
Depending on the online sample source and sampling process:
´ Attempted recontact may not be possible – for example, in river-sampling the concept of attempted recontact does not appear to apply (see p. 54 for a definition of river-sampling)
´ Attempted recontact may occur via different modes
Use of e-mail would be the most common mode for attempted recontacts. However, if telephone dialing was part of the sampling process, then the attempted recontact might be done by telephone. Or, if an attempted e-mail contact is “bounced back”, then a recontact by telephone might be attempted, provided respondent telephone numbers are available. Other modes are at least theoretically possible as well – e.g., in-person (perhaps for an employee survey), mail or fax.
Standards and Guidelines for Pre-Field Planning, Preparation and Documentation
Different modes of attempted recontact have
different levels of “intrusiveness.” For example,
telephone contacts that do not result in contact tend
to have low intrusiveness (e.g., no answer, busy signal,
answering machine with no message left). By contrast,
e-mail arguably has a higher level of intrusiveness,
because it will be seen by the respondent (an e-mail
contact is perhaps analogous to a message left on an
answering machine).
The Panel agreed that the number of attempted
recontacts that would be considered “intrusive”
might vary by study, and the decision was to state
a minimum number of recontact attempts rather
than a maximum. When setting a desired number
of attempted recontacts for a survey, the intensity
of the recontact effort should balance both potential
improvements to sample size/composition and degree
of intrusiveness in terms of respecting a person’s
right to choose not to participate. So, for example,
for many online surveys the numeric standard for
telephone call-backs (n=8) may not be appropriate
for e-mail recontacts.
There was general agreement by the Panel to
adopt the following standard for recontact attempts
for probability surveys and attempted census surveys.
STANDARD
ATTEMPTED RECONTACTS FOR A PROBABILITY SURVEY OR ATTEMPTED CENSUS
´ In the case of probability or attempted census surveys where recontact is feasible, attempts must be made to recontact nonrespondents using an appropriate contact procedure.
´ In general, the intensity of the recontact effort should balance both potential improvements to data quality and degree of intrusiveness in terms of respecting the person’s right to choose not to participate.
´ Where recontact by e-mail is judged the appropriate method, a minimum of two attempted recontacts should be made. Where recontact by telephone is judged the appropriate method, the standard for telephone surveys applies (i.e., a minimum of eight call-backs).
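The per-mode minimums in this standard can be expressed as a small scheduling check. The function and field layout below are a hypothetical sketch (the standard itself only states the minimums), using two attempts for e-mail and the eight-call-back telephone standard:

```python
# Minimum attempted recontacts per the standard: two for e-mail;
# for telephone, the telephone-survey standard of eight call-backs applies.
MIN_RECONTACTS = {"email": 2, "telephone": 8}

def needs_another_recontact(mode, attempts_so_far, responded):
    """Return True while a nonrespondent has had fewer attempted
    recontacts than the minimum for the chosen contact mode."""
    if responded or mode not in MIN_RECONTACTS:
        return False
    return attempts_so_far < MIN_RECONTACTS[mode]
```

In practice the intrusiveness considerations discussed above would cap the effort; this check only enforces the stated floor, not a ceiling.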
Nonprobability Samples
In surveys based on nonprobability samples, there
is no clear relationship between response rate and
data quality. For example, if the sample source is a
nonprobability-based access panel, a higher response
rate does not guarantee any greater accuracy in
generalizing to a population than does a lower
response rate. This is because the data are coming
from a group of respondents with no known
probability of selection from the target population.
Nonetheless, there are some practical reasons to consider attempting recontacts when using a nonprobability sample. The Panel felt a guideline should be provided giving examples of situations in which attempted recontacts should be considered.
GUIDELINE
ATTEMPTED RECONTACTS FOR A NONPROBABILITY SURVEY
´ In the case of surveys based on nonprobability samples where recontact is feasible, consider making attempts to recontact respondents using an appropriate contact procedure. Reasons for making attempted recontacts could include, for example, achieving a desired sample size, reducing potential bias resulting from differences between “early” responders versus “late” responders, or selective recontacts to increase the sample size of certain subgroups of interest (e.g., for analytical purposes, or to reduce the level of weighting required).
In general, the intensity of the recontact effort should balance both potential improvements to sample size/composition and degree of intrusiveness in terms of respecting the person’s right to choose not to participate.
Validation of Respondents
The Advisory Panel considered standards and guidelines:
´ For ensuring respondents can answer a survey only once
´ To supplement a standard on validation of respondent identity already agreed to for access panels so that it also applies to online surveys that do not use access panels

Answer a Survey Only Once
For surveys generated from panel or e-mail samples where respondents are known in advance, unique login passwords and cookies can be used to ensure that respondents complete the survey questionnaire only once.
For website visitor intercept surveys, where login passwords are not feasible and/or where cookies are not allowed, other measures will need to be put in place. Examples of measures that can be taken include:
´ Random selection of site visitors to reduce the chance of respondents completing the survey more than once
´ Post-data collection procedures to identify potential duplication, e.g., pattern matching of responses
´ Including a question asking respondents if they have already completed the survey
The Advisory Panel agreed to the following standard to ensure respondents answer a questionnaire only once.
STANDARD
ENSURE RESPONDENTS ANSWER A SURVEY ONLY ONCE
´ For any online survey research project, there should be documented procedures to limit the possibility that respondents can answer a survey for which they have been selected more than once.

Validation of Respondent Identity
Validation of respondent identity can be important in online survey research, because the Internet medium creates opportunities for respondents to misrepresent themselves (although the possibility of misrepresentation is not unique to the Internet medium). The risk that some respondents may misrepresent themselves is particularly an issue for commercial access panels, where respondents might be tempted to create multiple identities so that they can participate in more surveys in order to make more money.
In the case of surveys using access panels, the following standard applies: “Panel owners should have a clear and published rule about validation checks. They should maintain records of any checks they carry out to validate that the panel member did indeed answer a survey.”
For reference, IMRO (IMRO Guidelines for Best Practices in Online Sample and Panel Management) notes the following types of validation checks that access panels might implement:
Physical address verification at panel intake and during incentive fulfillment is a common and relatively fail-safe method for ensuring that panelists have not created multiple panel profiles in order to harvest invitations and incentives. Other forms of validation, such as profiling phone calls at intake and/or on a regular basis, can validate the panelist’s demographic and professional characteristics.
At the point of panel intake, the panel owner should, at a minimum, verify that the physical address provided by a respondent is a valid postal address. In addition, through survey questions and through periodic updates of the panel profile, panel companies should check for consistency in responses concerning demographics and consumer behavior/preferences.
The best methods for validating the identity of a respondent are those where the respondent is dependent on a unique address for redemption of the incentive – such as a standardized home address. A respondent can have multiple e-mail, PayPal or “home” addresses, but won’t benefit if the check or prize can only be sent to a valid home address. (p. 13)
With regard to online surveys that do not use access panels as a sample source, the position taken with respect to validation of respondent identity should consider both risk and feasibility:
´ Risk: Some sample sources may be considered less risky than others in terms of the likelihood of respondent misrepresentation. For example, risk may be considered low in an online employee survey using a list of employee e-mail addresses, and it might be decided that no validation checks of respondent identity need to be done. On the other hand, when certain other types of samples are used, the risk of respondent misrepresentation might be considered higher and one would want to implement some validation checks.
´ Feasibility: It may be that the feasibility of validation checks – both in terms of what can be done and how much can be done – will vary by sample source.
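The physical-address check described in the IMRO excerpt can be sketched as a simple duplicate-profile scan: profiles sharing a normalized postal address are flagged for review. The normalization rules below (lowercase, strip punctuation, collapse whitespace) are illustrative assumptions, not anything the guidelines prescribe:

```python
import re
from collections import defaultdict

def find_shared_addresses(profiles):
    """Group panel profiles by a crudely normalized postal address to
    flag possible duplicate identities created to harvest incentives."""
    def normalize(addr):
        addr = re.sub(r"[^\w\s]", "", addr.lower())  # drop punctuation
        return re.sub(r"\s+", " ", addr).strip()     # collapse spaces
    by_addr = defaultdict(list)
    for p in profiles:
        by_addr[normalize(p["address"])].append(p["panelist_id"])
    return {a: ids for a, ids in by_addr.items() if len(ids) > 1}

dupes = find_shared_addresses([
    {"panelist_id": 1, "address": "12 Main St., Ottawa"},
    {"panelist_id": 2, "address": "12  main st Ottawa"},
    {"panelist_id": 3, "address": "9 Bank St., Ottawa"},
])
# profiles 1 and 2 share one normalized address
```

A production check would use proper postal-address standardization rather than this string cleanup, but the grouping logic is the same.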
The following guideline was agreed to by the
Advisory Panel.
GUIDELINE
VALIDATION OF RESPONDENT IDENTITY
´ When conducting an online survey using sample sources other than access panels, consider both the level of risk of respondent misrepresentation and the feasibility of conducting respondent identity validation checks. If it is considered desirable and feasible to do validation checks, the validation procedure should be documented, and a record of the outcomes of the checks should be maintained.
Standards and Guidelines
For Success Rate
This section details the standards and guidelines related to:
´ Calculation of ”Success Rate”
´ Nonresponse Bias Analyses for
Online (a) Probability Surveys and
(b) Nonprobability Surveys
´ Qualified Break-offs
´ Response/Success Rate Targets
Calculation of
“Success Rate”
In the July 2007 issue of Vue, the MRIA Response Rate Committee published an article discussing the concept of response rate as it applies to web-based surveys (What does response rate really mean in a web-based survey?). (See MRIA at http://mria-arim.ca.)
They conclude the article with proposed calculation
formulas for “success rate.” They also state that the
“MRIA Response Rate Committee is recommending
that the term response rate should not be used in
reporting data collection outcomes for most
web surveys.”
In this article, the MRIA also invited comments on
the proposed success rate terminology and calculations,
implying it is possible that the terminology and
formulas may evolve further in the future. In this
context, the Panel recommended a standard that (a)
essentially says to follow the recommendations of
the MRIA Response Rate Committee current at the
time a survey is done, and (b) requires a record of
contact dispositions that is compatible with MRIA
calculation recommendations.
STANDARD
CALCULATION OF “SUCCESS RATE”
´ Calculations pertaining to success rate or level must be performed and described as recommended by the MRIA. The results must be included in the survey report. The survey report must also show a record of contact dispositions that includes the categories required to comply with the MRIA calculation formulas.
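The record of contact dispositions this standard requires lends itself to a simple tally. The sketch below is a deliberately simplified illustration with hypothetical disposition codes; the actual categories and formulas to use are those recommended by the MRIA, not this calculation:

```python
from collections import Counter

def simple_success_rate(dispositions):
    """One simplified illustration: completes divided by invitations that
    plausibly reached a respondent (completes + break-offs + refusals +
    non-responses). Bounced invitations are excluded from the base."""
    c = Counter(dispositions)
    base = c["complete"] + c["break_off"] + c["refusal"] + c["no_response"]
    return c["complete"] / base if base else 0.0

rate = simple_success_rate(
    ["complete", "complete", "break_off", "no_response", "bounced"])
# 2 completes over a base of 4 contactable invitations -> 0.5
```

Keeping the raw disposition list (rather than only the rate) is what lets the report comply with whatever categories the MRIA formulas require at the time of the survey.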
Nonresponse Bias
Analyses
Probability Surveys
and Attempted Census Surveys
The Telephone report contained a standard and
guidelines with respect to nonresponse bias analyses
which apply to probability surveys and attempted
census surveys. These also apply to such surveys when
conducted online. For reference, the standard and
guidelines are:
STANDARD
FOR PROBABILITY SURVEYS AND ATTEMPTED CENSUS SURVEYS
´ All survey reports must contain a discussion of the potential for nonresponse bias for the survey as a whole and for key survey variables. Where the potential of a nonresponse bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential nonresponse bias must be discussed.
GUIDELINES
FOR PROBABILITY SURVEYS AND ATTEMPTED CENSUS SURVEYS
´ The nonresponse analyses conducted as part of routine survey analysis would be limited to using data collected as part of the normal conduct of the survey, and could include techniques such as comparison of the sample composition to the sample frame, comparison to external sources, comparison of “early” versus “late” responders, or observations made during data collection on the characteristics of nonresponders. The nonresponse analyses conducted for a particular survey would be tailored to the characteristics of that survey.
´ Consider contracting additional data collection or analyses when the nonresponse analyses conducted as part of routine survey analysis suggest there may be value in getting additional information.
Nonprobability Surveys
In the case of nonprobability surveys, the applicability
of nonresponse bias analysis is not always as clear as
it is in the case of probability and attempted census
surveys. With a nonprobability sample, the results
technically cannot be statistically projected to a
population because it is not known, via selection
probabilities, how the sample relates to the population
of interest. In this context, since one does not have
a well-founded estimate of a population parameter
in the first place, it can be hard to then apply the
concept of nonresponse bias analysis in estimating the
population parameter.
To put this another way, statistically speaking
one could say that the results of a survey based on a
nonprobability sample describe only that particular
sample of people. It does not matter if there are any
nonresponders during the process of generating the
sample, because the nonresponders by definition
are not part of the sample. Nonresponders only
matter when one is trying to project the results to a
population larger than the obtained sample itself.
Nonetheless, at a practical level, results from nonprobability surveys are sometimes projected to target populations. So, at least in practical terms, the nonresponders in such nonprobability surveys can matter in terms of assessing the quality of the survey estimates.
The potential value of some sort of nonresponse (NR) bias analysis may vary by type of nonprobability survey or type of nonprobability sampling:

Examples of situations where NR bias analyses might be useful
´ Access panel survey where the outgoing sample is carefully constructed to try to obtain a final sample “representative” of a particular population
For such a survey, the size and composition of the outgoing sample would presumably be based on normative success rates obtained in that particular access panel, both overall and perhaps by subgroup (e.g., perhaps success rates are typically lower for young adults, so the outgoing sample for that subgroup would be disproportionately large). For purposes of NR bias analysis, what could be of interest would be any marked departures from the normative success rates typical of that panel. For example, if the typical success rate is 50% for young male panellists but the actual success rate for a particular survey is 30%, it might be of interest to explore why this lower success rate occurred and whether it might suggest a potential for bias in any of the key survey variables.
´ List-based survey based on a carefully constructed judgment sample
Suppose a nonprobability sample is drawn based on a carefully constructed list in which an attempt was made to represent various segments of a population (albeit not in a probability sampling sense). Then, it could be prudent to do an analysis of nonresponse to see to what extent the predetermined segments of interest are represented in the final sample.

Examples of situations where NR bias analysis may not apply or be useful
´ River-sampling
Note: River-sampling is a method for recruiting individuals who are willing to participate in a single survey but not necessarily join an access panel. The recruiting is done in real time from a network of websites with which a research company has developed a referral relationship and for which the company pays a per-recruit fee to the referring sites.
In river-sampling, the concept of “nonresponders” does not seem to apply.
´ Propensity scoring
Note: Propensity scoring is a weighting procedure for adjusting an online nonprobability sample to attempt to look more like a telephone probability sample (or some other sort of probability sample). A propensity scoring model is based on conducting parallel online and telephone surveys, and associating select respondent characteristics with a “propensity” to have participated in the online survey versus the telephone survey. Data in subsequent online surveys are weighted so that the propensity score distributions are similar to those found in the reference telephone probability sample used in developing the model.
Nonresponse bias analysis does not apply when propensity scoring is used. Or, to put this another way, what propensity scoring would do (if it worked perfectly) would be to introduce into the survey results the nonresponse biases that were in the reference telephone survey used to calibrate the propensity scoring system. So, if any nonresponse bias analysis were to be done, it would be done on the reference telephone survey, not on the online survey itself.
In this context, the Panel recommended a standard and guidelines for nonresponse bias analyses for nonprobability surveys.
STANDARD
FOR NONPROBABILITY SURVEYS
´ The proposal for a nonprobability survey must state whether or not any nonresponse bias analyses are planned. If not, a rationale must be stated. If analyses are planned, the general nature of the analyses should be described.
GUIDELINES
FOR NONPROBABILITY SURVEYS
´ In the case of surveys based on nonprobability samples, consider including in the survey report a discussion of the potential for nonresponse bias for the survey as a whole and for key survey variables when both of the following conditions are met:
- The survey results are described as being representative of a population larger than the actual survey sample itself
- Something is known about the characteristics of nonresponders
Where the potential of a nonresponse bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential nonresponse bias must be discussed in the final report.
´ The nonresponse analyses would be limited to using data collected as part of the normal conduct of the survey. The nonresponse analyses conducted for a particular survey would be tailored to the characteristics of that survey.
´ Consider contracting additional data collection or analyses when the nonresponse analyses suggest there may be value in getting additional information.

Qualified Break-offs
A qualified break-off is a respondent who qualifies for the survey (e.g., passes upfront screening) but then at some point during the interview refuses to participate further. Because of the self-administered nature of online surveys, the qualified break-off rate can potentially be fairly high, whereas qualified break-off rates in telephone surveys tend to be quite low because of the human interaction.
For example, in an experimental study of the “qualified break-off” phenomenon, Hogg & Miller (Quirk’s, July/August 2003, Watch out for dropouts) observed qualified break-off rates (dropout rates, in their terminology) ranging from 6% to 37%. They also observed:
Obtaining different survey findings because respondents dropped out of surveys can be seen as an extension of the common problem of non-response error. Non-response error occurs when those who do not respond to survey invitations might be different in important ways from those who do. Similarly, those who do not complete surveys can be different in important ways from those who do finish.
The Panel agreed to adopt the following standard
which would apply to all online surveys, including
both probability and nonprobability surveys:
STANDARD
FOR QUALIFIED BREAK-OFFS
´ If technically feasible, the data records for qualified break-offs should be retained in order to permit comparisons of qualified break-offs with respondents who completed the survey – which is a form of nonresponse bias analysis. Where there is a sufficient sample size of qualified break-offs, the potential for nonresponse bias should be explored by performing the comparisons, and the conclusions included in the survey report. Where the potential of a nonresponse bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential nonresponse bias must be discussed.
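The comparison this standard calls for can be as simple as contrasting the distribution of a variable between completes and break-offs. The sketch below is one minimal way to do that; the status codes and field names are hypothetical:

```python
def compare_breakoffs(records, var):
    """Compare the distribution of a categorical variable between
    completed interviews and qualified break-offs - a rough indicator
    of where the two groups differ."""
    def dist(group):
        counts = {}
        for r in group:
            counts[r[var]] = counts.get(r[var], 0) + 1
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}
    completes = [r for r in records if r["status"] == "complete"]
    breakoffs = [r for r in records if r["status"] == "break_off"]
    return dist(completes), dist(breakoffs)

done, dropped = compare_breakoffs(
    [{"status": "complete", "age_group": "18-34"},
     {"status": "complete", "age_group": "55+"},
     {"status": "break_off", "age_group": "18-34"},
     {"status": "break_off", "age_group": "18-34"}],
    var="age_group")
```

Large gaps between the two distributions on key variables would be the trigger for the quantification effort the standard describes.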
Response/
Success Rate Targets
The starting point for the Panel was the Telephone report, which contained standards and guidelines with respect to response rate targets. The Panel generally agreed with a modified version of the standards.
Notably, while the Panel agreed in principle with the advisability of setting numeric success rate target guidelines for online surveys based on probability samples (as had been done for telephone surveys), most members of the Panel felt that experience with online surveys is too limited at the present time to be able to set guidelines for numeric targets with confidence; as well, it was felt the targets might vary by type of online methodology (e.g., access panel versus website visitor intercept).
Several Panelists suggested PORD should publicize response/success rates for online surveys as projects accumulate, in order to (a) help POR Coordinators establish reasonable targets for particular projects, and (b) contribute to perhaps setting guidelines for numeric targets at some point in the future.
The Panel recommended the following standards:
STANDARDS
FOR RESPONSE/SUCCESS RATE TARGETS
1) Online or multi-mode surveys based on probability or attempted census samples must be designed to achieve the highest practical rates of response/success (over the various modes), commensurate with the importance of survey uses, time constraints, respondent burden and data collection costs.
2) Prior to finalization of the research design and cost for any particular online or multi-mode survey, a target response/success rate or response/success rate range (over the various modes) must be agreed upon by the government department/agency and the research firm. The research will be designed and costed accordingly.
Standards and Guidelines
For Data Management and Processing
The report on The Advisory Panel on Telephone Public Opinion Survey Quality (Telephone report, for short) dealt with standards and guidelines for:
´ Coding
´ Data Editing and Imputation
The Online Advisory Panel was not asked to comment on these standards and guidelines on the basis that most aspects apply equally to both telephone and online surveys. This section reproduces the standards and guidelines from the Telephone report.

Coding
STANDARDS
FOR CODING
Use of Coding Software
´ If automated coding software is used, the error rate should be estimated. If the error rate exceeds 5%, the research firm shall:
- Inform the Project Authority
- Revise the dictionary
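The error-rate estimate called for in the Use of Coding Software standard might be computed by comparing machine-assigned codes with human-verified codes on a sample of responses. The function and sample data below are a hypothetical sketch:

```python
def coding_error_rate(machine_codes, verified_codes):
    """Estimate the automated-coding error rate as the share of
    machine-assigned codes that disagree with human-verified codes."""
    assert len(machine_codes) == len(verified_codes)
    errors = sum(m != v for m, v in zip(machine_codes, verified_codes))
    return errors / len(machine_codes)

rate = coding_error_rate(["A", "B", "B", "C"], ["A", "B", "C", "C"])
# one disagreement out of four codes -> 0.25, above the 5% threshold,
# so the Project Authority would be informed and the dictionary revised
```

The standard leaves the sampling method for the verified subset to the firm; whatever is chosen, the same comparison yields the estimate to test against the 5% threshold.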
Developing code frames
´ The initial code list/frame shall be developed based on a systematic review of a minimum of 10% of open-ended responses and 50% of partial open-ended responses, where a frame does not already exist.
´ The research service provider shall ensure that coders working on the project are provided with instructions and training that shall include, as a minimum:
- An overview of the project
- Identification of questions or variables to be coded
- The minimum proportion or number of a sample (and its make-up) used to produce code frames
- Where necessary or appropriate, specific subgroups required to develop code frames (e.g., by region, user or non-user)
- Guidelines for the inclusion of codes in the code frame (e.g., decisions or rules regarding what should be included or excluded from a given code)
- Any use to be made of code frames from a previous project or stage
- Any other requirements or special instructions specific to the project
´ Also:
- Where “don’t know” and “no answer” responses have been used, these shall be distinguishable from each other
- The research service provider shall have clear rules or guidelines for the treatment of responses in “other” or catch-all categories; if the “other” or catch-all category exceeds 10% of responses to be coded, the responses should be reviewed with a view to reducing the size of the group
´ After initial code frame approval, when further codes become appropriate in the process of coding, all copies of the code frame shall be updated and any questionnaires already coded shall be amended accordingly.
´ Upon request, the research firm shall provide the Project Authority with the initial code frame and any updated versions.
´ The research firm shall provide the Project Authority the final version of the code frame.

Code frame approval/coding procedures
´ The research firm project manager responsible for the project shall approve the initial code frame prior to the commencement of coding and shall document it. This approval may involve the netting, abbreviating, rewording, recoding or deletion of codes.

Coding Verification
´ The research service provider shall have defined procedures for the verification of the coding for each project, including documenting the verification approach to be used. Procedures shall ensure that there is a systematic method of verifying a minimum of 10% of questionnaires coded per project and the verification shall be undertaken by a second person.
´ If a coder’s work contains frequent errors, that coder’s work (on the project) shall be 100% verified/re-worked. If necessary, appropriate retraining shall be given to that coder until error rates are acceptable. The effectiveness of the retraining shall be reviewed and documented.
´ The research service provider shall define the meaning of frequent errors and document that definition.
GUIDELINES
FOR CODING
Developing Code Frames
´ For some variables, the research service provider should use existing established classification standards, such as those for industry, occupation and education.
Coding Verification
´ There are two basic approaches to verification: dependent and independent. Dependent verification means that the second person has access to the original coding. Independent verification means that the second person does not have access to the original coding. In independent verification, the original coding and the verification coding are compared and, if they differ, the correct code is decided by an adjudication process. Independent verification detects more errors than dependent verification. Independent coding verification should be used wherever possible.
´ The final coded dataset should be reviewed, at least once, to ensure the internal consistency of the coding, and be corrected as necessary.

Data Editing/Imputation
STANDARDS
FOR DATA EDITING/IMPUTATION
´ An accurate record of any changes made to the original data set shall be kept. No data shall be assumed/imputed without the knowledge and approval of the research firm project manager. Comparison to the original data source shall be the first step in the process. Any imputation processes, including the logic of the imputation method(s) used, shall be documented and available to the client on request. All edit specifications shall be documented.
´ Where forced editing is used, the logic of the forcing shall be documented and test runs carried out, with the results documented to show that the forcing has the desired effect.
´ Data editing/imputation should be used cautiously. The degree and impact of imputation should be considered when analyzing the data, as the imputation methods used may have a significant impact on distributions of data and the variance of estimates.
´ The research firm shall include documentation of any imputation/forced editing, both in a technical appendix and in the final report.
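The data-editing standard's requirement to keep an accurate record of every change to the original data set can be met with a simple edit log. The structure below is a hypothetical sketch, not a prescribed format:

```python
import datetime

def apply_edit(record, field, new_value, rule, log):
    """Apply one data edit while logging the original value, the new
    value, and the edit rule, so the change history can be produced
    for the technical appendix or on client request."""
    log.append({
        "respondent_id": record["id"],
        "field": field,
        "old_value": record[field],
        "new_value": new_value,
        "rule": rule,
        "edited_at": datetime.datetime.now().isoformat(),
    })
    record[field] = new_value

audit_log = []
rec = {"id": 101, "age": -1}
apply_edit(rec, "age", None, "out-of-range age set to missing", audit_log)
# rec["age"] is now None; the original value -1 survives in audit_log
```

Routing every edit and imputation through one function like this also makes it easy to summarize the degree of imputation when analyzing the data.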
Standards and Guidelines
For Data Analysis/Reporting and Survey Documentation
The Telephone report dealt with standards and guidelines for:
´ Data Analysis/Reporting: Data analysis plan, Data analysis verification, Delivery of data tables and Inferences and comparisons
´ Back-up, Retention/Security of Data
´ Survey Documentation
The Online Advisory Panel was asked to comment on only some of the standards and guidelines in this section of the Telephone report on the basis that most aspects apply equally to both telephone and online surveys.
The Panel was asked to consider the following issues:
´ Data Analysis/Reporting: Inferences and Comparisons
´ Retention of technical data
´ Data security
´ Survey Documentation related to:
- Including details about how survey accessibility needs were met
- Including both text versions and facsimiles of the online survey instrument(s)
Data Analysis/Reporting
DATA ANALYSIS PLAN
(from the Telephone report)
The Telephone Panel was asked to comment on
whether or not the following should be a standard
or a guideline:
During data analysis, any changes to the data analysis
plan should be submitted to the Project Authority
for review.
No agreement was reached: several members of the Telephone Panel preferred this to be a standard, one member felt it should be a guideline, and the others did not have a strong opinion either way.
DATA ANALYSIS VERIFICATION
(from the Telephone report)

STANDARDS FOR DATA ANALYSIS VERIFICATION

Data analysis verification
• The research service provider shall have in place procedures to ensure the tabulations and other outputs have been checked.
• As a minimum, these checks shall verify:
- Completeness, i.e., that all tables are present as specified, including the results of all reported significance tests
- That the standard breaks/banner points are checked against source questions
- That the base for each table is correct against other tables or frequency counts
- That abbreviations for headings or open-ended responses accurately reflect the full content
- That all derived data items are checked against their source
- That the figures for subgroups and nets are correct
- That there are no blank tables (i.e., with no data)
- Weighting (e.g., by test tables)
- Frequency counts prior to running tables, in order both to ensure the accuracy of data and to determine base sizes for subgroups
- Spelling and legibility
- That any statistical analysis used is appropriate and correct, both in its descriptive and inferential aspects
• For any subsequent outputs, appropriate checks shall be applied.

Analysis records
• The research service provider shall keep accurate and descriptive records of the analysis process, to ensure that any analysis undertaken can be replicated at a later date.
DELIVERY OF DATA TABLES
(from the Telephone report)

STANDARDS FOR DELIVERY OF DATA TABLES

Electronic data delivery
• The research service provider shall provide the Project Authority with a data file.
• For data delivered to the Project Authority in electronic format, the following shall be checked prior to data release:
- Compatibility of the file format with the software specification agreed with the client (for the Government of Canada, preferably SPSS version, Windows format, per the PWGSC RFSO);
- Completeness (i.e., that the correct number of files and records are in each file);
- A structural description of the file;
- Labelling of the contents of the file, i.e., fully labelled variables and value labels;
- Identification and description of any computed or recoded variables, and instructions on limitations of use;
- Labelled weighting variables and a description of how these were applied;
- That all personal identifiers per PIPEDA have been removed from the files;
- Encryption of files upon request;
- Presence of viruses in the file.

Delivery of stand-alone hard or soft copy of data tables
• When data are reported to the client, such as in a stand-alone hard or soft copy of data tables, the following shall be taken into account, as appropriate:
- Reference to the actual source question to which the data pertain
- Clear identification of any subgroups used
- Availability of the bases for each question, so that the number of respondents who actually answered the question is identifiable
- Availability of both weighted and unweighted bases
- The number or proportion of respondents who replied “don’t know” or gave “no answer”
- Inclusion of a description of any weighting method applied to the data
- Clear and complete definition and explanation of all variables used in the analysis of the data, including any significance testing, indexing, scoring, scaling and calculations of means, medians, modes and standard deviations
- Inclusion of all appropriate documentation to allow for replication of the data analysis and additional analyses, including, where applicable:
  - The types of statistical tests being used and their level of precision
  - Information on cell suppression and other measures to assure confidentiality
  - Warnings on results which are unreliable due to very small sample sizes
Inferences and Comparisons

There were no changes to the standard and guidelines previously recommended in the Telephone report for inferences and comparisons for probability surveys:

STANDARDS FOR PROBABILITY SURVEYS
• Research service providers must base statements of comparisons and other statistical conclusions derived from survey data on acceptable statistical practice.
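One element of acceptable practice for probability samples is reporting the margin of sampling error. A minimal sketch follows (the 1.96 multiplier assumes a 95% confidence level and simple random sampling; complex designs would inflate this by a design effect):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000 respondents:
print(round(margin_of_error(0.5, 1000), 3))  # → 0.031, i.e., ±3.1 percentage points
```

Note how the margin shrinks only with the square root of the sample size: quadrupling n to 4,000 halves the margin to roughly ±1.5 points.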
GUIDELINES FOR PROBABILITY SURVEYS
• Before including statements in information products that two characteristics being estimated differ in the actual population, make comparison tests between the two estimates, if either is constructed from a sample. Use methods for comparisons appropriate for the nature of the estimates. In most cases, this requires estimates of the standard error of the estimates and, if the estimates are not independent, an estimate of the covariance between the two estimates.
• Given a comparison that does not have a statistically significant difference, conclude that the data do not support a statement that they are different. If the estimates have apparent differences, but have large standard errors making the difference statistically insignificant, note this in the text or as a note with tables or graphs.
• Support statements about monotonic trends (strictly increasing or decreasing) in time series using appropriate tests. If extensive seasonality, irregularities, known special causes or variation in trends are present in the data, take those into account in the trend analysis.
• When performing comparison tests, report only the differences that are substantively meaningful, even if other differences are also statistically significant.

Nonprobability Samples
The Panel recommended the following standards and guideline pertaining to inferences and comparisons for surveys based on nonprobability samples:

STANDARDS FOR NONPROBABILITY SURVEYS
• There can be no statements made about margins of sampling error on population estimates when nonprobability samples are used.
• The survey report must contain a statement on why no margin of sampling error is reported, based on the following template: “Respondents for this survey were selected from among those who have [volunteered to participate/registered to participate in (department/agency) online surveys]. [If weighting was done, state the following sentence on weighting:] The data have been weighted to reflect the demographic composition of (target population). Because the sample is based on those who initially self-selected for participation [in the panel], no estimates of sampling error can be calculated.” This statement must be prominently placed in descriptions of the methodology in the survey report.
• For nonprobability surveys it is not appropriate to use statistical significance tests or other formal inferential procedures for comparing subgroup results or for making population inferences about any type of statistic. The survey report cannot contain any statements about subgroup differences or other findings which imply statistical testing (e.g., the report cannot state that a difference is “significant”).

Nevertheless, it is permissible to use descriptive statistics, including descriptive differences, appropriate to the types of variables and relations involved in the analyses. Any use of such descriptive statistics should clearly indicate that they are not formally generalizable to any group other than the sample studied, and there cannot be any formal statistical inferences about how the descriptive statistics for the sample represent any larger population.

The exception to the rule against statistical significance tests of differences is nonprobability surveys that employ an experimental design in which respondents are randomly assigned to different cells. In this case, it is appropriate to use statistical significance tests to compare results from different cells in the design.
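Where significance testing is permitted (probability surveys, or randomly assigned experimental cells), the comparison of two independent proportions can be sketched as follows. This is a minimal illustration, not a procedure prescribed by the standards; dependent estimates would also require the covariance term noted in the guidelines above:

```python
import math

def z_two_proportions(p1, n1, p2, n2):
    """Z statistic for the difference between two independent proportions.

    For dependent estimates, subtract 2*cov(p1, p2) inside the square root.
    """
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se_diff

# Hypothetical cells: 55% vs. 48% agreement, 400 respondents per cell.
z = z_two_proportions(0.55, 400, 0.48, 400)
print(abs(z) > 1.96)  # compare against the 95% two-sided critical value
```

A |z| above 1.96 supports reporting the difference at the 95% confidence level; a smaller value means the data do not support a statement that the cells differ.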
GUIDELINES FOR NONPROBABILITY SURVEYS
• Consider using other means for putting descriptive statistics in context, for example:
- If similar studies have been done in the past, it may be useful to comment on how statistical values obtained in the study compare to similar studies from the past.
- For statistics such as correlations, refer to guides on what are considered to be low, medium or high values of descriptive correlational statistics.

Back-up, Retention & Security of Data

RETENTION OF TECHNICAL DATA
The Panel agreed to adopt a modified version of the MRIA standard on the retention of technical data. For reference, “technical data” is defined by MRIA as raw anonymized data, analyses and the information described in Clause 26 of the MRIA Code of Conduct under Responsibilities of Researchers to Clients. The standard below provides examples of the types of data and analyses that must be retained.
Note: The relevant content of Clause 26 has been incorporated into the standards and guidelines for Survey Documentation.

STANDARDS FOR RETENTION OF TECHNICAL DATA
• The research service provider must maintain the technical data on all studies for a period of three years, so that, if requested, the study can be replicated. For online surveys, this will also include how the questionnaire was presented and a representation of any visual/audio materials used in the survey.
• Technical data not already included in the Survey Report/Appendix that must be maintained include, but are not limited to:
1) Data pertaining to data processing and analysis, which may include, but are not limited to:
- Raw data files
- Other electronic files
- Code frames
- Project files, including project management information and survey programming files
- E-mails and other correspondence
2) Where data have been edited, cleaned, recoded or changed in any other way from the format, content and layout of its original format, the original data, final data and programme files, including all documentation related to changes to the data (as a minimum), shall be kept so that the final data set can be easily reconstructed.
DATA SECURITY
The Panel recommended adoption of a modified version of the MRIA standards for data security. There was a suggestion that, because of concerns about the security of data, there should be an explicit statement of preference for Canadian-based servers. However, before including this type of statement, two issues would first need to be addressed:
• Would this limit the online service providers and software packages that can be used by the Government of Canada? For example, many online software packages have been developed by American companies, and the data are stored in the U.S.
• Would this type of statement be consistent with PWGSC’s contracting policies, e.g., NAFTA?

STANDARDS FOR DATA SECURITY

Protection of Data/Servers
• Researchers must use up-to-date technologies to protect the personal data collected or stored on websites or servers. In particular, panel registration pages, and online surveys that collect sensitive personal information, must use Secure Sockets Layer (SSL) or an equivalent level of protection, at minimum.
• Researchers must also put in place measures to ensure the “physical” security of data and servers.
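A first-line check that survey and registration links will at least be served over TLS can be sketched as below. This is a minimal illustration with hypothetical URLs, not part of the MRIA standard; actual server-side TLS configuration and certificate validity must still be verified separately:

```python
from urllib.parse import urlparse

def is_tls_url(url: str) -> bool:
    """True only if the link uses the https scheme, so traffic is TLS-protected."""
    return urlparse(url).scheme == "https"

# Hypothetical survey links:
print(is_tls_url("https://survey.example.gc.ca/s/123"))  # → True
print(is_tls_url("http://survey.example.gc.ca/s/123"))   # → False
```

A check like this is suited to pre-launch quality control, e.g., rejecting any invitation link that would send respondent data in the clear.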
Temporary Storage of Data on Servers
• If the temporary storage of collected data takes place on a server operated by a provider, the research agency must place the provider under the obligation to take the necessary technical precautions to ensure that third parties cannot access the data on the server or during data transfer. Temporary storage of the collected data on the server must be terminated at the earliest possible time.

Data Storage on Servers Outside of Canada
• When data are stored on servers outside of Canada, researchers must ensure that commitments with respect to privacy and Canadian privacy laws can be maintained.

Transmission of Data Internationally
• Before data are sent over the Internet to another country, researchers must check with the competent authorities that the data transfer is permissible. The recipient may need to provide safeguards necessary for the protection of the data.

Disclosure of Respondents’ E-mails in Batch Transfers
• Researchers must have adequate safeguards in place to ensure that when e-mails are sent in batches, the addresses of the respondents are not revealed.

In the Event of Any Data Breach
• In the event of any data breach, the client must be informed immediately and provided with details about both the nature and the extent of the data breach.
• The client and research supplier shall decide on appropriate actions to be taken.
Survey Documentation
The Panel was not asked to comment on most of
the detailed standards and guidelines for Survey
Documentation already adopted by the Government
of Canada for public opinion telephone surveys, given
that most of the requirements would not change for
online surveys.
The Panel was asked to comment on a revised standard and an additional guideline with respect to the version(s) of the questionnaire to be included in the Study Materials appendix to a final survey report, reflecting that:
• The final survey questionnaire will exist in two different formats: a text version, and the actual online version as experienced by respondents. For the reader of a survey report, both versions might have value.
• It is possible for an online questionnaire to have auditory aids as well as visual aids.
The Panel agreed to a modified standard and guideline with respect to the version(s) of the questionnaire to be included in the Study Materials appendix, and these have been incorporated into the standards and guidelines shown below.
Other modifications to the standards and
guidelines for Survey Documentation summarized
below reflect the various standards recommended
by the Panel throughout the discussions:
STANDARDS FOR SURVEY DOCUMENTATION

General
For quantitative research, the following minimum details shall be documented in the project report. These allow the reader to understand the way the research project was conducted and the implications of its results.

Background
• The name of the client.
• The name of the research service provider.
• Detailed description of the background, including at minimum:
- Purpose, how the research will be used
- Objectives, research questions

Sample
• Detailed description of the sample, including:
- The target group for the research project, including whether or not Internet non-users are part of the target population
- The achieved sample size against the projected sample size and reasons, if relevant, for not obtaining the projected sample
- The sample source and sampling method, including the procedure for selecting respondents; the type of sampling method used should be identified, i.e., probability, nonprobability or attempted census
- The weighting procedures, if applicable
• For nonprobability samples, provide:
- Rationale for choosing a nonprobability sample
- Detailed description of steps taken to maximize representativeness of the sample and of the limitations/uncertainties with respect to the level of representativeness achieved
Data Collection
• Detailed description of the methodology, including:
- The date of fieldwork
- The average survey length and the range
- The data collection method(s)
and, if applicable:
- The type and amount of incentives
• Describe any accessibility provisions for respondents using adaptive technologies.
• For multi-mode surveys, provide a rationale for using a multi-mode rather than a single-mode method.

Quality Controls
• The estimating and imputation procedures, if applicable
• A brief summary of other quality controls and procedures used, including the results of each (Note: These are to be detailed in the Technical Appendix.)
• For multi-mode surveys, detailed description of any data quality issues arising from combining data collected via different modes/instruments.

Presentation of Results
• An executive summary of key results and conclusions, linked to the survey objectives and research questions
• For probability samples, state the level of precision, including the margin of error and confidence interval for the total sample and any key sub-groups
• For nonprobability samples and attempted census surveys, the report must contain a statement on why no margin of sampling error is reported; nonprobability-based surveys must use the prescribed template.
• Overview of the survey analytical plan
• The contact dispositions and response/success rate using the formula recommended by MRIA
• For results based on subgroups, indicate the number of cases

Appendix 1: Study Materials
• Study Materials, containing the questionnaires, descriptions or representations of any visual or auditory aids, and other relevant data collection documents, in all languages in which the research was conducted. There should be a version of the questionnaires displaying any instructions (e.g., skip, terminate, etc.) needed to understand the logic and flow of the questionnaire.

Appendix 2: Technical Appendix
• Detailed record of contact dispositions
• A detailed description of the quality control procedures used and the results of each, measures/sources of sampling and non-sampling errors and, as appropriate, any other information related to the quality of the survey

GUIDELINES FOR SURVEY DOCUMENTATION

Appendix 1: Study Materials
• It is recommended that screen shots of selected portions of the questionnaires be included which show how the questionnaire appeared to respondents.