Project VIABLE: Overview of Directions Related to Training to Enhance
Adequacy of Data Obtained through Direct Behavior Rating (DBR)
Sandra M. Chafouleas¹, T. Chris Riley-Tillman², Theodore J. Christ³, & George Sugai¹
¹University of Connecticut, ²East Carolina University, ³University of Minnesota
PROJECT GOALS
Phase I: Foundations of Measurement (Instrumentation and Procedures)
• How should the DBR scale be constructed (e.g., Likert-type versus continuous)?
• How should assessment items be worded, and how many items should be included for each target behavior?
• How many observations and/or what duration should be used for each session?
• How long should the DBR observation rating period be in order to obtain a reliable estimate of the behavior?
Phase II: Decision Making and Validity
• In response to an intervention, do DBR data correspond to data obtained via systematic direct observation, office discipline referrals (ODRs), and traditional rating scales?
• Can DBR data be used to screen a classroom and reliably identify students at risk (engagement, disruption)?
• Can DBR data be used in tri-annual assessments to assess level and rate of student behavior?
Phase III: Feasibility (Training and Use, Perceived Usability)
• What does DBR training require?*
• How intrusive are procedures?
• How well do procedures and instrumentation generalize to teacher assessments within classroom settings?
• How do consumers of DBR perceive its usability?
• How acceptable is DBR?
• How useful are DBR outcomes for school-based decisions?
* Content for this poster.
Defining Direct Behavior Rating
Direct – Rating occurs in close proximity to the time and place of the observation. Thus,
the rater must observe the target for a “sufficient” portion of the observation period.
Behavior – The target of rating must be well-defined and accessible for observation.
Rating – The rating component quantifies rater perception of the target behavior.
Example single-item DBR scale
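As a rough illustration of how a single-item DBR might be recorded and scored, the sketch below assumes a 0-10 continuous scale mapped to the proportion of the observation period in which the behavior occurred; the class and field names are hypothetical, not part of the DBR instrumentation.

```python
# Hypothetical sketch of a single-item DBR record: a 0-10 continuous
# scale where the mark estimates the proportion of the observation
# period the target behavior occurred (0 = never ... 10 = always).
from dataclasses import dataclass

@dataclass
class DBRItem:
    behavior: str   # well-defined, observable target (e.g., "academic engagement")
    rating: float   # mark on the 0-10 line; fractional values allowed

    def proportion(self) -> float:
        """Convert the 0-10 mark to an estimated proportion of time."""
        if not 0.0 <= self.rating <= 10.0:
            raise ValueError("DBR rating must fall on the 0-10 scale")
        return self.rating / 10.0

# Example: a rater marks 7.5, i.e., engaged ~75% of the observed period.
print(DBRItem("academic engagement", 7.5).proportion())  # 0.75
```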
BACKGROUND
What is the influence of the rater on data obtained from DBR? In the absence of training, work to date under Phase I has suggested:
• Generally, profiles of aggregated DBR data are consistent across raters (when averaged within student across occasions, or across students across occasions)
• Reliable estimates of level can be established with relatively few observations (5-10 for low-stakes decisions, 15-20 for high-stakes decisions) completed by the same rater, BUT some individual raters do not fall within these guidelines (see the reliability sketch at the end of this section)
• Individual raters seem to anchor ratings within a range of gradients, with subsequent ratings made relative to that range
• When looking across raters, preliminary evidence supports systematic bias in rating
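For intuition about why aggregating across occasions helps, the sketch below applies the Spearman-Brown formula to a made-up single-occasion reliability; this is an illustration only, not the project's actual reliability analysis.

```python
# Illustrative only: Spearman-Brown prophecy for the reliability of a
# mean across k occasions, given an assumed single-occasion reliability r1.
# (r1 here is invented; the project's own analyses used other methods.)
def spearman_brown(r1: float, k: int) -> float:
    return (k * r1) / (1.0 + (k - 1) * r1)

r1 = 0.45  # hypothetical reliability of one DBR occasion
for k in (1, 5, 10, 15, 20):
    print(f"{k:2d} occasions -> reliability ~ {spearman_brown(r1, k):.2f}")
```

With this made-up r1, 5-10 occasions push reliability near .80-.89 (low stakes) and 15-20 occasions near .92-.94 (high stakes), which is the shape of the guideline above.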
What components of training might be included to enhance outcomes?
• Direct training – active modeling and rehearsal, as opposed to didactic or written instruction
• Opportunities for practice and feedback – direct practice and immediate corrective feedback
• Intensity of training – how much training is sufficient?
• Explanation using a "frame of reference" – provide a clear rationale for rating, including qualitative descriptions
• Rater error training – definitions/examples of types of error to avoid when rating, such as leniency/severity and halo effects

STUDY 1
The purpose of this study was to examine whether direct training procedures resulted in greater DBR accuracy than either indirect training or no training. In addition, teacher acceptability of DBR in behavior assessment was assessed.
Method involved having middle/high school teachers from a private school watch video clips of student behavior, and then rate academic engagement and disruption using DBR. Teachers were assigned to one of three conditions:
• No training: brief overview of study procedures
• Indirect: 20-minute instruction on DBR use, with example ratings
• Direct: 20-minute instruction on DBR use, with practice and feedback
In summary, chi-square analyses suggested that direct training did not improve rating accuracy (a sketch of one such test follows the citation below). Moderate acceptability for DBR was found. Questions about the selection and ordering of video samples during training were raised, as initial exposure to mid-scale behavior ratings may create rater frustration.
LeBel, T.J., Kilgus, S.P., Briesch, A.M., & Chafouleas, S.M. (in press). The impact of training on the accuracy of teacher-completed Direct Behavior Ratings (DBRs). Journal of Positive Behavior Interventions.
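A minimal sketch of the kind of chi-square test referenced in the Study 1 summary, using invented counts of accurate versus inaccurate ratings per condition (not the study's data):

```python
# Hypothetical counts: rows = training condition, cols = [accurate, inaccurate].
# Values are invented for illustration; they are not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [18, 22],  # no training
    [20, 20],  # indirect
    [24, 16],  # direct
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```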
STUDY 3
This study examined the extent to which DBR training with different levels of practice and performance feedback impacts rating accuracy.
Method involved undergraduates watching and rating video clips of student behavior in one of three conditions:
• No training: overview of DBR and behavior assessment
• Training: overview of DBR and instruction on DBR use, with practice and feedback over 3 opportunities
• Extended training: same instruction, with practice and feedback over 6 opportunities
In summary, initial analysis indicated no significant difference in accuracy between the training conditions, either immediately or at a one-week retest. Further analyses compared training versus control (no training) across three base-rate levels (low, medium, and high) for each behavior. Training did not substantially improve overall ability to rate academic engagement, but ratings were more accurate when academic engagement was high, as demonstrated by lower systematic accuracy difference scores (see the sketch following the citation below). Training significantly improved accuracy when rating all levels of disruptive behavior, and across conditions, accuracy was highest when disruptive behavior was low or high. Participants rated compliance with significantly less accuracy when targets displayed medium rates of compliance.
Riley-Tillman, T.C., Harrison, S., Amon, J., & Brooks, S. (in preparation). An investigation of rating accuracy on DBR following training involving practice and feedback.
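A minimal sketch of computing signed (systematic) accuracy difference scores by base-rate level; ratings and criterion values below are fabricated, with the criterion standing in for each clip's true score (e.g., from systematic direct observation).

```python
# Sketch of signed (systematic) accuracy difference scores by base-rate level.
# Data are fabricated; 'criterion' is the clip's true score on the 0-10 scale.
import pandas as pd

df = pd.DataFrame({
    "base_rate": ["low", "low", "medium", "medium", "high", "high"],
    "rating":    [2.0, 1.5, 5.5, 6.5, 8.5, 9.0],
    "criterion": [1.5, 1.0, 5.0, 5.0, 9.0, 9.5],
})
df["signed_error"] = df["rating"] - df["criterion"]

# Positive means indicate systematic overrating; negative, underrating.
print(df.groupby("base_rate")["signed_error"].mean())
```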
STUDY 4
The purpose of the study was to examine whether DBR rating accuracy is significantly impacted by the type of training package. Research questions include: (a) Does the addition of frame-of-reference training (FOR) to standard training (ST) improve rater accuracy over and above ST alone? (b) Does the addition of rater error training (RET) to FOR and ST improve rater accuracy over and above ST alone and ST+FOR? (c) Does increased exposure, defined as doubling the opportunities for practice/feedback (from 3 to 6), improve rater accuracy over the standard 3?
Method involved undergraduates watching and rating video clips of student behavior in one of six possible conditions (described above). Clips were purposefully chosen to reflect a range of ratings on the DBR scale, and a composite accuracy score for each behavior will be used in analysis. A two-way ANOVA (2 levels of exposure × 3 types of training) will be conducted to test for differences among groups with regard to differential accuracy (the outcome); a sketch of this analysis follows the citation below. Data analyses are in process.
Chafouleas, S.M., Riley-Tillman, T.C., Kilgus, S.P., Amon, J., Jaffery, G., & Brooks, S. (in preparation). Critical components of DBR training to enhance rater accuracy: An investigation of training content and exposure.
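A sketch of how the planned 2 × 3 ANOVA might be run; the factor labels, simulated data, and column names are placeholders, not project data.

```python
# Sketch of the planned 2x3 ANOVA on a differential-accuracy outcome.
# Factor levels and data are placeholders invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "exposure": np.repeat(["3x", "6x"], 30),
    "training": np.tile(np.repeat(["ST", "ST_FOR", "ST_FOR_RET"], 10), 2),
    "accuracy": rng.normal(0.0, 1.0, 60),  # differential accuracy score
})

model = ols("accuracy ~ C(exposure) * C(training)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```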
STUDY 2
The purpose of the study was to examine whether
providing users of DBR with a training session
utilizing practice and performance feedback would
increase rating accuracy.
Method involved instructing 59 undergraduate students to
watch video clips of student behavior and then rate target
behaviors using DBR in one of two conditions:
• No training: 23-minute overview of DBR and behavior assessment
• Training: 23-minute instruction on DBR use, with practice and feedback
In summary, results were consistent with initial hypotheses in that ratings conducted by trained participants were more accurate than those conducted by untrained participants. Specifically, using standard difference scores or Cronbach's differential accuracy (a sketch of a simple difference score follows the citation below), raters in the training condition were significantly more accurate than those in the brief familiarization condition for ratings of academic engagement and disruption. As such, training resulted in higher levels of absolute and rank-order accuracy. There was also far less variability among the ratings completed by the trained group than by the brief familiarization (no training) group.
Schlientz, M.D., Riley-Tillman, T.C., Briesch, A.M.,
Walcott, C.M., & Chafouleas, S.M. (in press). The impact
of training on the accuracy of Direct Behavior Ratings
(DBRs). School Psychology Quarterly.
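A minimal sketch of a standard (absolute) difference accuracy score of the kind referenced above; all values are invented, and Cronbach's full differential-accuracy decomposition is more involved than shown here.

```python
# Sketch of a standard (absolute) difference accuracy score: the mean
# absolute gap between DBR ratings and criterion values across clips.
# Numbers are invented; lower scores indicate more accurate rating.
ratings   = [7.0, 3.5, 9.0, 5.0]  # one rater's DBRs across four clips
criterion = [8.0, 3.0, 9.0, 6.5]  # expert/consensus values per clip

diff_score = sum(abs(r - c) for r, c in zip(ratings, criterion)) / len(ratings)
print(f"mean absolute difference = {diff_score:.2f}")
```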
SUMMARY POINTS: FINDINGS TO DATE
• Rater training may be needed to enhance outcomes, particularly for certain individuals
• Training to enhance accurate rating of mid-scale levels of behavior is most challenging, yet important
• Training need not necessarily be high intensity (lengthy)
• Components likely to be beneficial include direct training with:
  • An overview with clear review of definitions and modeling of rating
  • Practice and immediate feedback
  • Behavior examples that utilize the scale range (low, medium, high)
• Impact of incorporating "frame of reference" and "error training" still to be determined
NEXT STEPS
• Would our current findings related to instrumentation/procedures be improved if these training components were included?
  • All future work will include training beyond familiarization
• What else might be done to enhance (a) outcomes for variable individuals and (b) mid-scale rating accuracy?
• Following training, what happens to rating accuracy over time?
• How do we ensure training access to all?
  • Develop an on-line dynamic training module

For additional information, please visit www.directbehaviorratings.com.
Email correspondence regarding the project should be directed to Sandra Chafouleas at [email protected].