JBI Database of Systematic Reviews & Implementation Reports
2014;12(3) 39 - 50
The effectiveness of tools used to evaluate successful critical
decision making skills for applicants to healthcare graduate
educational programs: a systematic review protocol
Brian Benham, MSN, APRN, CRNA 1,3
Diane Hawley, PhD, RN, CCNS, CNE 2,3
1. Doctorate of Nursing Practice (DNP) candidate, Texas Christian University, Texas, USA
2. Associate Professor of Professional Practice, Harris College of Nursing, Texas Christian University,
Texas, USA
3. Texas Christian University Center for Evidence Based Practice and Research: a collaborating centre
of the Joanna Briggs Institute
Corresponding author
Brian Benham, [email protected]
DOI: 10.11124/jbisrir-2014-1392
Review question/objective
The objective of this review is to identify the effectiveness of tools used to evaluate critical decision
making skills for applicants to healthcare graduate educational programs.
Review questions:
Do unique tools to assess critical decision making skills evaluate the likelihood of student success in
healthcare graduate educational programs?
Do traditional Graduate Record Examination (GRE) scores or grade point average (GPA) evaluate the
likelihood of student success in healthcare graduate educational programs?
Background
Students leave healthcare academic programs for a variety of reasons, including financial hardship,
personal and family health issues, and the realization that a program may be too rigorous to complete.
When students attrite, it is disappointing for both the student and the faculty. In addition, there can be
residual debt from student loans and the potential for conflict and legal dispute if the student was
dismissed from the program.[1] Students who are accepted into graduate educational programs and
later attrite also block other students from enrolling, due to the limited number of available places.
Students and faculty alike would prefer that 100% of the applicants initially selected go on to graduate;
unfortunately, this does not always occur.
In a review of attrition from participating Certified Registered Nurse Anesthetist (CRNA) programs in
the United States in 2005, the second most common reason for student attrition was academic
dismissal, which was responsible for 30% of all attrition. Clinical deficiencies accounted for 15% of
student attrition. According to the review, overall attrition nationwide was 9%, ranging from a low of
zero to a high of 41.3%.[1] It could be surmised that academic and clinical deficiencies are related to
inadequate critical decision making (CDM) skills. If accurate, this deficit is responsible for nearly half of
all attrition.
Advanced practice nursing and other healthcare professions require not only extensive academic
preparation, but also the ability to critically evaluate patient care situations. Inherent to this process is
assessing the situation, analyzing options, narrowing the possible interventions and implementing an
action. This is followed by evaluating the effect of the action and correcting it if needed. These steps
constitute the process known as CDM. The ability to critically evaluate a situation is not innate; CDM
skills are higher level skills that are difficult to assess.[2] For the purpose of this review, CDM and
critical thinking (CT) skills refer to the same constructs and will be referred to globally as CDM skills.
Quantitative, cognitive measures such as grade point average (GPA) or scores from the Graduate
Record Examination (GRE) are frequently used to determine whether applicants will succeed in
graduate healthcare programs. Though these two measures do elucidate an applicant's probable
ability to complete the required course work, the applicant's ability to engage in CDM and their mastery
of non-cognitive skills are harder to evaluate.[3] A 2009 study by Megginson analyzed various methods
for assessing non-cognitive constructs in graduate admissions.[4] Results highlighted that non-cognitive
attributes, such as motivation, problem solving and maturity, were generally assessed through letters of
recommendation, interviews and personality inventories.
According to O'Sullivan, former Program Director of the U.S. Army's nurse anesthesia program
(personal communication, March 2013), most CRNA programs rely on applicant interviews and letters
of recommendation from the applicant's previous supervisors to assess CDM and non-cognitive
attributes. However, Megginson illustrates that traditional narrative letters of recommendation (NLOR)
exhibit low validity in predicting performance in graduate programs.[4] Areas of NLOR weakness
included leniency, less than optimal knowledge of the applicant, low reliability and other extraneous
factors. These findings suggest that the NLOR, as the primary tool for assessing CDM and
non-cognitive factors in applicants, holds little utility.
In 2010, Fero and colleagues published an experimentally based study on assessing CT.[5] The
quasi-experimental cross-over study examined the CT scores of a convenience sample of 36
undergraduate nursing students. The numeric data were derived from videotaped vignettes (VTV),
patient care scenarios utilizing high-fidelity human patient simulators (HFHS), and two established
tests for assessing CT skills: the California Critical Thinking Disposition Inventory (CCTDI) and the
California Critical Thinking Skills Test (CCTST). Videotaped vignettes are recorded simulated
situations, with an actor portraying the patient in a given scenario. The subjects watched the VTV, then
provided a written assessment of the situation with proposed actions and rationales. The subjects were
administered the CCTDI and CCTST, then randomized into two groups, A and B. Group A was
presented the VTV involving a pulmonary embolism, followed by the same scenario using the HFHS.
Group B started with the HFHS, then watched and evaluated the VTV. The steps of randomization and
altering the sequence of evaluated events constituted the quasi-experimental cross-over design stated
by the authors. The results demonstrated that subjects exhibiting strong critical thinking dispositions
on the CCTDI showed a greater ability to identify clinical problems, report essential data, initiate
nursing interventions and prioritize care. The implication of this finding is that HFHS simulated
scenarios were as effective as the CCTDI in assessing CDM.
A 2005 study presented the possibility of utilizing online responses to presented case scenarios as one
tool to evaluate CT skills.[6] This observational study involved 53 master's degree nursing students
enrolled in three online courses. The authors developed a 10-item Likert-scaled tool. The online written
work analyzed involved case scenarios related to one of the three academic courses sampled. The
three scenarios comprised a crisis-intervention and decision-making situation, a primary care clinic
encounter, and a communication problem between a student nurse and a staff nurse in a clinical
setting. Analysis showed that inter-rater reliability problems prevented this tool from yielding reliable
data.
Multiple tools are available to evaluate CT and CDM. The implication from the literature is that these
tools should be used for their predictive value in admissions processes. However, a descriptive
correlational study reported data that confounds this conclusion.[7] A nonprobability convenience
sample of nurses enrolled in a master's level family nurse practitioner program was utilized. The
author's research hypothesis stated that nurses who scored higher on the CCTST would demonstrate
multiple higher level clinical skills, as evaluated by the Clinical Decision-Making in Nursing Scale
(CDMNS) and preceptor evaluation tools. Correlational analysis illustrated an absence of statistically
significant relationships between CT and any of the evaluated areas of study. One positive correlation
was that nurses with critical care nursing experience demonstrated higher scores on the CCTST
scales.
A study published in a nurse anesthesia journal in 2012 addressed a CRNA program's use of
high-fidelity simulation as an integral portion of the applicant appraisal.[8] This retrospective study
assessed possible correlations between HFHS performance scores and other applicant
characteristics, including undergraduate GPA, scores on two written assignments (goals statement
and interview essay), years of critical care and general nursing experience, professional involvement,
GRE scores and initial nursing degree earned (Associate Degree in Nursing or Bachelor of Science in
Nursing). The simulation component was evaluated using an author-derived simulation interview
evaluation tool (SIET). The tool covered eight areas of problem solving: recognition, deductive
reasoning, causation, treatment plan development, assistance seeking, treatment initiation,
communication and leadership traits. The subjects were a convenience sample of 70 applicants from
the 2008 class selection cycle. Two CRNA simulation coordinators performed the simulation
assessment portion to support inter-rater reliability. Results demonstrated a positive correlation
between face-to-face interview and SIET scores (p=0.003). Limitations of this study included the small
sample size, the retrospective nature of data collection, and the unvalidated nature of the SIET
evaluation tool. What this study accomplishes is the introduction of an anesthesia-derived,
simulation-based scoring matrix that functions in parallel with currently established and accepted
admission standards.
A 1996 article by Adams, Whitlow, Stover and Johnson presented an evaluation of four tools used to
assess CT.[10] This expert opinion piece was based on a review of the available literature and
discusses the Watson-Glaser Critical Thinking Appraisal (WGCTA), the CCTST, the Ennis-Weir Critical
Thinking Essay Test (EWCTET) and the Cornell Critical Thinking Tests (CCTT). The authors' analysis
found all four tools to be valid, providing a measure of the abstract concepts of CT. The general
strength of these tools, as reported in the literature, is limited by the use of convenience samples and
small numbers of subjects. While the authors support the WGCTA as a validated instrument, they
question its utility in nursing due to inconsistent results in the published literature. The authors view
the CCTST as lacking validation in the field of nursing. The CCTT and EWCTET are stated to have 'a
reservoir of untapped potential' due to their underuse as predictive tools for CT.
What these studies and articles represent is current thought and a historical perspective on the
evaluation of CT and CDM skills. While these works have validity and provide information, a consensus
is lacking in the form of a systematic review of these tools in use. A thorough review of all available
databases, including those devoted to systematic reviews, failed to identify works focusing on critical
decision making assessment in applicants to graduate healthcare educational programs. Two
meta-analyses were discovered, one evaluating the predictive validity of the GRE for graduate student
selection and performance,[11] and one evaluating the validity of the Pharmacy College Admission
Test (PCAT) and grade predictors of pharmacy student performance,[12] but neither work followed a
systematic review protocol. A quality systematic review could illuminate the path to acceptance in the
application process for graduate healthcare education programs. As healthcare monies decrease and
educational funds become more difficult to acquire, a consistent tool for admitting the best qualified
applicants to graduate healthcare programs would help increase the number of providers, with
education funds best spent on those with the greatest likelihood of success.
Keywords
Critical thinking; decision making; academic; graduate program; masters program; doctoral program;
healthcare
Inclusion criteria
Types of participants
This review will consider studies published in the English language that include applicants, students
enrolled and/or recent graduates (within one year from completion) of healthcare graduate educational
programs.
Types of intervention(s)/phenomena of interest
This review will consider studies that evaluate the utilization of unique tools (e.g. CCTST [California
Critical Thinking Skills Test], CTOE [Arnett Critical Thinking Outcome Evaluation], HCTSR [Holistic
Critical Thinking Scoring Rubric], WGCTA [Watson-Glaser Critical Thinking Appraisal], Prevue,
simulation-based evaluation, and others as yet unknown), as well as standard tools such as the GRE or
GPA, to evaluate critical decision making skills in graduate healthcare program applicants.
Types of outcomes
This review will consider studies that include the following outcome measure: successful quantitative
evaluations of student performance, based on the standards of the specific field of study.
Types of studies
This review will consider any experimental study design including randomized controlled trials (RCTs),
quasi-experimental and before and after studies. Analytical epidemiological study designs including
prospective and retrospective cohort studies, case control studies and analytical cross sectional
studies will also be evaluated.
Search strategy
The search strategy aims to find both published and unpublished studies. A three-step search strategy
will be utilized in this review. An initial limited search of MEDLINE and CINAHL will be undertaken,
followed by analysis of the text words contained in the title and abstract and of the index terms used to
describe the article. A second search using all identified keywords and index terms will then be
undertaken across all included databases. Thirdly, the reference lists of all identified reports and articles
will be searched for additional studies. Studies published in English after 1970 will be considered for
inclusion in this review. This date was chosen because 1970 is the earliest publication found for any of
the tools mentioned in the review question.
The databases to be searched include:
JBI Database of Systematic Reviews and Implementation Reports
Cochrane Library
CINAHL
ProQuest
MEDLINE
ERIC
The search for unpublished studies will include:
New York Academy of Medicine Grey Literature Report
MEDNAR
ProQuest database for theses and dissertations
OpenSIGLE
Virginia Henderson Library
Initial keywords to be used will be:
Critical thinking
Decision making
Academic
Graduate program
Masters program
Doctoral program
Healthcare
All studies identified during the database search will be assessed for relevance to the review based on
the information provided in the title, abstract and descriptor/MeSH terms. A full text will be retrieved for
all studies that meet the inclusion criteria (see Appendix I). Studies identified from reference list
searches will be assessed for relevance based on the study title.
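The combination of the initial keywords into a database query can be sketched as follows. This is an illustrative sketch only: the grouping of keywords into concept sets and the boolean syntax are assumptions for demonstration, since the protocol lists only initial keywords and the actual syntax differs between databases such as MEDLINE and CINAHL.

```python
# Hypothetical grouping of the protocol's initial keywords into concept sets;
# synonyms are OR-ed within a set, and the sets are AND-ed together.
cdm_terms = ["critical thinking", "decision making"]
program_terms = ["graduate program", "masters program", "doctoral program"]
context_terms = ["academic", "healthcare"]

def or_block(terms):
    """Join synonymous terms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(block) for block in (cdm_terms, program_terms, context_terms))
print(query)
```

In practice the second search step would expand each set with the index terms identified in the initial MEDLINE and CINAHL search.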
Assessment of methodological quality
Papers selected for retrieval will be assessed by two independent reviewers for methodological validity
prior to inclusion in the review, using standardized critical appraisal instruments from the Joanna Briggs
Institute Meta-Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI) (Appendix I).
Any disagreements that arise between the reviewers will be resolved through discussion, or with a third
reviewer if a consensus cannot be reached.
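The dual-review procedure above depends on agreement between the two reviewers. Although the protocol does not specify a statistic, agreement on include/exclude decisions is commonly quantified with Cohen's kappa; a minimal sketch, with hypothetical decisions, is:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical decisions."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal proportions per category.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude calls by two reviewers on ten papers:
a = ["inc", "inc", "exc", "inc", "exc", "exc", "inc", "inc", "exc", "inc"]
b = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "inc", "exc", "exc"]
kappa = cohens_kappa(a, b)
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over a simple proportion when screening decisions are imbalanced.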
Data collection
Data will be extracted independently by each reviewer from papers included in the review using the
standardized data extraction tool from JBI-MAStARI (Appendix II). The data extracted will include
specific details about the interventions, populations, study methods and outcomes of significance to the
review question and specific objectives. Where there is missing information or data in retrieved articles,
the author(s) will be contacted for clarification and data where possible.
Data synthesis
Quantitative data will, where possible, be pooled in statistical meta-analysis using JBI-MAStARI. All
results will be subject to double data entry. Effect sizes expressed as odds ratios (for categorical data)
and weighted mean differences (for continuous data) and their 95% confidence intervals will be
calculated for analysis. Heterogeneity will be assessed statistically using the standard Chi-square test
and also explored using subgroup analyses based on the different study designs included in this review,
as applicable. Where statistical pooling is not possible, the findings will be presented in narrative form
including tables and figures to aid in data presentation where appropriate. Subgroup analysis will be
used where appropriate when variations in effects are noted.
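The pooling and heterogeneity testing described above can be illustrated with a minimal fixed-effect (inverse-variance) sketch. The study numbers below are hypothetical, and JBI-MAStARI performs these computations in practice; note also that odds ratios would be pooled on the log scale before back-transformation, whereas this sketch shows weighted mean differences directly.

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooled estimate, 95% CI, and Cochran's Q."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q: chi-square heterogeneity statistic with k - 1 degrees of freedom.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return pooled, ci, q

# Hypothetical mean differences in CT-test scores from three studies:
effects = [2.0, 3.5, 1.0]
variances = [0.5, 0.8, 0.4]
pooled, ci, q = pool_fixed_effect(effects, variances)
```

Comparing Q against the chi-square distribution with k - 1 degrees of freedom gives the statistical heterogeneity assessment the protocol describes; a large Q relative to its degrees of freedom would motivate the planned subgroup analyses.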
Conflicts of interest
No known conflicts of interest exist.
Acknowledgements
The Texas Christian University Centre for Evidence-based Healthcare: a Collaborating Centre of the
Joanna Briggs Institute.
References
1. Dosche M, Jarvis S, Schlosser K. Attrition in nurse anesthesia educational programs as reported by
program directors: The class of 2005. AANA J [Internet]. 2008;76(4):277-281. Available from:
http://www.aana.com/newsandjournal/Documents/attrition_naprogs0808_p277-281.pdf
2. Cohen M, Freeman T, Thompson B. Critical thinking skills in tactical decision making: A model and a
training strategy [Internet]. n.d. Available from:
http://www.cog-tech.com/papers/chapters/tadmus/tadmus.pdf
3. Hulse JA, Chenowith T, Lebedovych L, Dickinson P, Cavanaugh B, Garrett N. Predictors of student
success in the U.S. Army Graduate Program in Anesthesia Nursing. AANA J. 2007;75(5):339-346.
4. Megginson L. Noncognitive constructs in graduate admissions: An integrative review of available
instruments. Nurse Educ. 2009;34(6):254-261.
5. Fero L, O'Donnell J, Zullo T, Dabbs A, Kitutu J, Samosky J, Hoffman L. Critical thinking skills in
nursing students: Comparison of simulation-based performance with metrics. J Adv Nurs [Internet].
2010;66(10):2182-2193. doi:10.1111/j.1365-2648.2010.05385.x
6. Ali NS, Bantz D, Siktberg L. Validation of critical thinking skills in online responses. J Nurs Educ.
2005;44(2):90-94.
7. Gorton KL. An investigation into the relationship between critical thinking skills and clinical judgment
in the nurse practitioner student [PhD dissertation]. Available from: ProQuest Dissertations and Theses.
8. Penprase B, Mileto L, Bittinger A, Hranchook AM, Atchley JA, Bergakker SA, Franson HE. The use of
high-fidelity simulation in the admissions process: One nurse anesthesia program's experience.
AANA J. 2012;80(1):43-48.
9. Reid HV. The correlation between a general critical thinking skills test and a discipline-specific critical
thinking test for associate degree nursing students [PhD dissertation]. Available from: ProQuest
Dissertations and Theses.
10. Adams MH, Whitlow JF, Stover LM, Johnson KW. Critical thinking as an educational outcome: An
evaluation of current tools of measurement. Nurse Educ. 1996;21(3):23-32.
11. Kuncel NR, Hezlett SA, Ones DS. A comprehensive meta-analysis of the predictive validity of the
graduate record examinations: Implications for graduate student selection and performance. Psychol
Bull. 2001;127(1):162-181. doi:10.1037/0033-2909.127.1.162
12. Kuncel NR, Credé M, Thomas LL, Klieger DM, et al. A meta-analysis of the validity of the Pharmacy
College Admission Test (PCAT) and grade predictors of pharmacy student performance. Am J Pharm
Educ. 2005;69(1-5):339-347.
Appendix I: Appraisal instruments
MAStARI appraisal instrument
Appendix II: Data extraction instruments
MAStARI data extraction instrument