Assessing Online Systematic Review Training:
Updated Findings from an
Environmental Scan and Evaluation
Leah Boulos, Sarah Visintini, Robin Parker,
Krista Ritchie, Jill Hayden
Presented by Leah Boulos
CHLA/ABSC Conference 2017
May 17, 2017
Objective
o To conduct an environmental scan and assessment of online systematic review training resources in order to:
  o Describe available resources
  o Evaluate whether they follow current best practices for online instruction
Methods:
Environmental scan
o Broad Google search
o Exhaustive YouTube search
o Targeted website search
Google search string:
((“systematic review” OR “scoping review” OR “evidence review”
OR “knowledge synthesis” OR “evidence synthesis”) online
teaching OR course OR workshop OR seminar OR education OR
training OR module)
Selection criteria
o Content: at least three of the six systematic review steps listed below
o Format: online courses, videos, web tutorials or modules
o Availability: available to the public or to a group to which membership is open
o Language: English

Six systematic review steps:
1. Defining a research question and/or creating a protocol
2. Conducting a rigorous search
3. Determining selection criteria
4. Critical appraisal and/or risk of bias assessment
5. Data extraction
6. Analysis and/or creation of an in-depth report
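The content criterion is a simple threshold rule. As a rough illustration (this is not the authors' screening tool, and the step labels are shorthand for the six steps above), a resource passes if it covers at least three of the six steps:

```python
# Minimal sketch of the content criterion: keep a resource only if it covers
# at least three of the six systematic review steps. Step labels are shorthand
# for the steps listed above, not terms used by the study itself.
SIX_STEPS = {
    "question/protocol", "search", "selection criteria",
    "appraisal/risk of bias", "data extraction", "analysis/report",
}

def meets_content_criterion(steps_covered):
    """Return True if the resource covers three or more of the six steps."""
    return len(set(steps_covered) & SIX_STEPS) >= 3

# Hypothetical resource covering three steps: eligible on content
print(meets_content_criterion({"search", "data extraction", "analysis/report"}))  # True
```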
Methods:
Evaluation
o Evaluation framework based directly on
Foster, Shurtz, & Pepper (2014)
o In turn based on the QuADEM approach
o Four categories (Content, Design, Interactivity, Usability), 26 questions, 37 possible points
Methods:
Evaluation
Content: credibility, relevance, currency, organization, ease of understanding, focus, appropriateness of language
Interactivity: level of interactivity, variety of tasks, appropriateness and difficulty of tasks, opportunities for reflection and feedback
Design: levels of Bloom’s Taxonomy, learning objectives, learning styles
Usability: layout, navigation, compliance with the Americans with Disabilities Act
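To make the scoring concrete, here is a minimal sketch of how raw rubric points can be converted to the per-category percentage scores shown in the Results slides. The per-category maxima are illustrative placeholders only, not the actual breakdown of the 37-point framework:

```python
# Minimal sketch: express raw rubric points as percentage scores per category.
# The maxima below are placeholders chosen only to sum to 37; they are not the
# real allocation in the Foster, Shurtz, & Pepper (2014) framework.
CATEGORY_MAX = {"content": 14, "design": 9, "interactivity": 8, "usability": 6}

def percentage_scores(raw_points, maxima=CATEGORY_MAX):
    """Return each category's raw points as a percentage of its maximum."""
    return {cat: round(100 * raw_points[cat] / max_pts) for cat, max_pts in maxima.items()}

# Hypothetical resource scoring 12/14, 5/9, 3/8, and 5/6
print(percentage_scores({"content": 12, "design": 5, "interactivity": 3, "usability": 5}))
# {'content': 86, 'design': 56, 'interactivity': 38, 'usability': 83}
```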
Methods:
Analysis
o Analyzed all resources together and the top five (top quartile) separately
o Produced descriptive statistics
o Repeated measures ANOVA comparing scores across the four categories
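A minimal sketch of this analysis step, assuming the category percentage scores sit in a long-format table (the data values and column names here are invented for illustration, not the study's actual scores or code):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format scores: one row per resource per category (%).
scores = pd.DataFrame({
    "resource": [f"R{i:02d}" for i in range(1, 6)] * 4,
    "category": ["content"] * 5 + ["design"] * 5 + ["interactivity"] * 5 + ["usability"] * 5,
    "score": [93, 86, 79, 71, 64, 42, 58, 29, 50, 25, 35, 60, 25, 90, 30, 75, 69, 56, 81, 94],
})

# Descriptive statistics for each category
print(scores.groupby("category")["score"].describe())

# Repeated measures ANOVA: category is the within-subject factor and each
# resource is treated as a "subject" measured once per category.
result = AnovaRM(scores, depvar="score", subject="resource", within=["category"]).fit()
print(result)
```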
Results:
Environmental scan
o Resources identified through environmental scan (n = 55)
o Multi-part resources combined to make new total (n = 48)
o Resources screened for eligibility after duplicates removed (n = 41)
  o Excluded (n = 14): did not meet content criteria (n = 10); did not meet format criteria (n = 4)
o Resources identified as candidates for evaluation (n = 27); creators contacted (n = 13)
  o Excluded (n = 7): no contact information (n = 2); no response (n = 5)
o Resources included for evaluation (n = 20)
Results:
Environmental scan
o Audiences: researchers, health care professionals, and students
o Creators: universities, research organizations, and government agencies
o Resource format: videos (8), online courses (7), web modules (5)
o Creator location: USA (9), Europe (7), Australia (2), Canada (2)
Results:
Environmental scan
o Time to complete ranges from < 1 hour to over 200 hours
  [Pie chart: time to complete, grouped as < 1 hour, 1 to < 5 hours, 5 to < 10 hours, and 10+ hours.]
o 14/20 resources available free of charge
o Prices range from $15 USD to over $3,000 USD
Results:
Evaluation
Overall scores (%), arranged by resource format
[Bar chart: overall score (%) for each resource (R01–R20), grouped by format (online course, web module, video), with median and average; scores range from the mid-30s to 92%.]
Results:
Top five resources
Top five resources (top quartile):
o R01: Comprehensive Systematic Review Training Program (CSRTP) – Joanna Briggs Institute
o R02: Introduction to Systematic Review and Meta-Analysis – Johns Hopkins University (through Coursera) – FREE
o R03: Online Learning Modules for Cochrane Authors – Cochrane Training – FREE
o R04: Introduction to Systematic Review and Meta-Analysis Course – Dalla Lana School of Public Health, University of Toronto / Knowledge Translation Program, St. Michael's Hospital
o R05: Systematic Reviews: diversity, design and debate – EPPI-Centre
Principal finding #1:
High cost did not always correlate with
high score
Results:
Content
Coverage of systematic review steps (% (n) of resources covering each step):
o Defining a research question and/or creating a protocol: 90% (18)
o Conducting a rigorous search: 90% (18)
o Determining selection criteria: 80% (16)
o Critical appraisal and/or risk of bias assessment: 90% (18)
o Data extraction: 85% (17)
o Analysis and/or creation of an in-depth report: 95% (19)

o 12 resources covered all six steps
o Five resources covered 5/6 steps
o The remaining three resources covered 3/6 steps
Results:
Content
Content scores (%), arranged by resource format
[Bar chart: content score (%) for each resource (R01–R20) by format (online course, web module, video), with median and average; scores range from roughly 43% to 100%.]
Principal finding #2:
Audience was frequently undefined,
leading to lower scores
Results:
Usability
Usability scores (%), arranged by resource format
[Bar chart: usability score (%) for each resource (R01–R20) by format (online course, web module, video), with median and average; scores range from 0% to 100%.]
Results:
Design
Design scores (%), arranged by resource format
[Bar chart: design score (%) for each resource (R01–R20) by format (online course, web module, video), with median and average; scores range from roughly 21% to 92%.]
Results:
Interactivity
Interactivity scores (%), arranged by resource format
[Bar chart: interactivity score (%) for each resource (R01–R20) by format (online course, web module, video), with median and average; scores range from 0% to 100%.]
Principal finding #3:
Low interactivity scores were related to
deficiencies in design
Principal finding #3
○ In the top five, the average content–design difference is 9%; the average content–interactivity difference is 1%
○ In the remaining 15 resources, the differences are 38% and 45%, respectively

Average category scores (%):
  Top five: Content 93%, Design 84%, Interactivity 92%
  Other 15 resources: Content 79%, Design 41%, Interactivity 34%
Principal finding #4:
Format correlated with score; online
courses performed best
Principal finding #4
As a reminder, here are the overall scores by format:
[Repeat of the earlier bar chart: overall score (%) for each resource (R01–R20) by format (online course, web module, video), with median and average; scores range from the mid-30s to 92%.]
Summary:
Principal findings
1. High cost did not always correlate with
high score
2. Audiences were frequently undefined
3. Low interactivity scores were related to
deficiencies in design
4. Format correlated with score; online
courses performed best
Discussion:
Recommendations
1. Include measurable objectives and
increase interactivity to cover more
levels of Bloom’s Taxonomy
2. Improve video resources
3. Recommend appropriate resources
Discussion:
Future research
○ Which users would benefit from different resource types and formats?
○ User testing of high-quality resources
○ Evaluating learner satisfaction
○ Assessing rate of completion and quality of resulting reviews
Discussion:
Limitations
1. Environmental scan currency
2. Systematic review step coverage
3. Access to complete resources
Conclusions
○ Reflect on the material currently being
used and developed
○ Be aware of common limitations of
online resources
○ Keep in mind important elements of
content, design, interactivity, and
usability
Thank You
Contact
Leah Boulos
Evidence Synthesis Coordinator
Maritime SPOR SUPPORT Unit
[email protected]
References
Foster M, Shurtz S, Pepper C. Evaluation of best practices in the design of online evidence-based practice instructional modules. J Med Libr Assoc. 2014;102(1):31-40.

Galipeau J, Moher D. Repository of ongoing training opportunities in journalology [Internet]. [Winnetka (IL)]: World Association of Medical Editors; c2017 [cited 22 Mar 2017]. Available from: <http://www.wame.org/about/repository-of-ongoing-training-opportunities>.

Opdenacker L, Stassen I, Vaes S, Van Waes L, Jacobs G. QuADEM: manual for the quality assessment of digital educational material. Antwerpen: Universiteit Antwerpen; 2010.

Armstrong P. Bloom's taxonomy: the revised taxonomy (2001) [Internet]. Nashville (TN): Vanderbilt University, Center for Teaching; c2017 [cited 10 May 2017]. Available from: <https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/#2001>.
Appendix:
Bloom’s Taxonomy
Armstrong P. Bloom's taxonomy: the revised taxonomy (2001) [Internet]. Nashville (TN): Vanderbilt
University, Center for Teaching; c2017 [cited 10 May 2017]. Available from:
<https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/#2001>.