Title: Introducing the Debriefing Assessment in Real Time (DART) tool: identifying the
characteristics that distinguish effective debriefings, as assessed by Bloom’s Taxonomy
Authors: Mahnoosh Nik-Ahd, MD, MPH, Nicole Yamada, MD, Louis Halamek, MD
Background:
Educators often facilitate debriefings to help trainees learn from clinical exercises, but little is
known about how best to lead a debriefing. Our group has previously developed a formal
debriefing evaluation tool, the Debriefing Assessment in Real Time (DART) tool, to
evaluate the quality of debriefings that follow real or simulated clinical scenarios. The DART
applies Bloom’s Taxonomy to characterize the cognitive level achieved by each question or
statement made by the debriefer being evaluated.
Objectives:
1. To introduce the DART as an objective assessment of the quality of debriefings.
2. To describe how debriefers speak with respect to the cognitive level of the questions they
ask, as assessed by Bloom’s Taxonomy.
Design/Methods:
This is an IRB-approved study conducted at Stanford in 2016-17. Debriefings are led by
active clinicians with formal training in debriefing at the Center for Advanced Pediatric and
Perinatal Education (CAPE) at Stanford. Two reviewers (MN and NY) reviewed video recordings
of all debriefings conducted at CAPE from January to March 2016 using the DART. Bloom’s
Taxonomy was concurrently applied to categorize each question asked by debriefers into one of
Bloom’s six cognitive domains: Knowledge, Comprehension, Application, Analysis, Synthesis,
and Evaluation. For instance, a question such as “how accurately did the patient’s vital signs
reflect his underlying disease?” is categorized as an Evaluation question. One additional
category, Other, was created to capture the content of questions that did not readily fall into
Bloom’s Taxonomy. Interrater agreement will be established between the two reviewers.
Anticipated Results:
A total of 25 debriefings following simulated neonatal or maternal arrest will be examined from
January-December 2016. These debriefings have ranged in duration from *** minutes ***
seconds to *** minutes. Across all debriefings, the average question-to-statement ratio for
debriefer utterances was ***. When Bloom’s Taxonomy is applied to each debriefing, the
cumulative breakdown by subcategory is as follows: Knowledge ***%, Comprehension
***%, Application ***%, Analysis ***%, Synthesis ***%, Evaluation ***%. Other questions
comprise ***% of the questions analyzed across debriefings. Bloom's Taxonomy is
an established hierarchical model used to classify levels of complexity for cognitive learning
objectives. Our application of Bloom's Taxonomy to debriefing questions is novel and provides a
framework for asking questions targeted at various cognitive levels. This approach can empower
debriefers with specific language and methodology to target these different levels.
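For illustration only, and using hypothetical utterance codings rather than study data, the question-to-statement ratio and the percentage breakdown of questions by Bloom category could be tabulated with a Python sketch such as:

# Illustrative sketch only (hypothetical codings, not study results).
from collections import Counter

# Each debriefer utterance is coded as ("question", Bloom category) or ("statement", None).
coded_utterances = [
    ("question", "Knowledge"), ("statement", None), ("question", "Evaluation"),
    ("question", "Analysis"), ("statement", None), ("question", "Other"),
]

questions = [cat for kind, cat in coded_utterances if kind == "question"]
n_statements = sum(1 for kind, _ in coded_utterances if kind == "statement")

ratio = len(questions) / n_statements if n_statements else float("inf")
print(f"Question-to-statement ratio: {ratio:.2f}")

for category, count in Counter(questions).items():
    print(f"{category}: {100 * count / len(questions):.1f}% of questions")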
Timeline:
Present - Jan 20: complete all data analysis, including interrater reliability
Jan 21 - Feb 20: manuscript preparation
Feb 21 - March 20: completion of manuscript and submission
Key Questions for Attendees: (How can those listening help you move your project along?)
1. How would you expect debriefers to use the findings of this study in their own practice?
2. How can the results of this study be applied outside of simulation-based training?