PART TWO

The Two Types of Progress Monitoring
As I discussed in Part One, all of us are quite familiar with the Mastery Monitoring form of
progress monitoring. I’ll continue with the weight loss analogy: if Mastery Monitoring is “calorie
counting,” General Outcome Measurement (GOM) is standing on a weight scale. The information
obtained from the scale is not specific to what contributes to weight loss (e.g., consuming fewer
calories, getting more exercise), but enables an overall conclusion about the effects of the weight
loss effort. The information is straightforward and easy to understand. Is the person losing weight
and if not, does the program need to change? Likewise, GOM in basic skills enables educators to
judge whether the student is benefiting from instruction and “becoming a better reader,” or is
“better at mathematics computation.”
Curriculum-Based Measurement (CBM) is the most common form of academic GOM. Educators
can answer questions about whether a student is becoming a better reader by assessing student
growth in the number of words read correctly (WRC) on a set of short, standardized oral reading
tests (R-CBM) that are of equal difficulty over time. If the number of WRC increases over time on
tests like these, we can confidently state that the student is becoming a better reader. It is our
ability as educators to answer this important question about broad achievement improvement on a
frequent basis that is most strongly linked to improved student achievement (Fuchs & Fuchs, 1986;
Hattie, 2009; Yeh, 2007).
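To make the idea of “growth in WRC over time” concrete, here is a minimal sketch of how a rate of improvement could be summarized as the slope of an ordinary least-squares line fit to weekly R-CBM scores. This is not any particular product’s method, and the weekly scores below are hypothetical, for illustration only.

# Minimal sketch: summarize growth in words read correctly (WRC) as the
# slope of a least-squares line fit to weekly R-CBM scores.
# The scores below are hypothetical, for illustration only.

def rate_of_improvement(weeks, wrc_scores):
    """Return the ordinary least-squares slope (WRC gained per week)."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(wrc_scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, wrc_scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

weeks = [1, 2, 3, 4, 5, 6, 7, 8]
wrc = [42, 45, 44, 49, 51, 50, 55, 58]  # hypothetical weekly WRC scores

slope = rate_of_improvement(weeks, wrc)
print(f"Rate of improvement: {slope:.1f} WRC per week")  # about 2.2 WRC per week

A positive slope of this kind is the “standing on the scale” reading: it says nothing about which instructional ingredients produced the gain, only whether the student is, overall, becoming a better reader.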
Like Mastery Monitoring, GOM is also based on a set of assumptions, most notably that a reliable
and valid indicator (i.e., what we test) is empirically validated as a correlate of the broader
achievement domain, such as general reading skill or mathematics computation. Fortunately, in a
number of basic skill areas, these indicators have been identified, are represented in the CBM
research literature, and are made easier to collect with aimsweb.
Finally, let me make a couple of concluding comments:
1. To date, academic basic skills seem to lend themselves more readily to the identification of
reliable and valid indicators. It may not be possible to find similar indicators for content
instruction (e.g., social studies, science), for more complex constructs like reading comprehension,
or for periods when children are changing rapidly, as with infants and toddlers, and/or when skill
development itself is rapid (e.g., early reading and mathematics).
2. Mastery Monitoring and GOM are not incompatible. To me, a combination of both is best. When
losing weight, it might be useful to assess the number of calories consumed daily or weekly, or the
number of minutes spent exercising or steps walked. But the bottom line for judging the overall
effect of the weight loss effort is that simple thing we can measure: standing on the scale to see how
many pounds we have lost. If I am assessing growth in basic skills and have only limited time for
Mastery Monitoring or GOM, I will use the latter first and foremost.
Here are some key references that I suggest as core readings for a fuller understanding of the
similarities and differences between Mastery Monitoring and GOM.

Deno, S. L. (1991). Individual differences and individual difference: The essential
difference of special education. The Journal of Special Education, 24(2), 160-173.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant
measurement models. Exceptional Children, 57(6), 488-500.
Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of
reading competence: A review of three forms of classroom-based assessment. School
Psychology Review, 28(4), 659-671.
Jenkins, J. R., & Fuchs, L. S. (2012). Curriculum-Based Measurement: The paradigm, history,
and legacy. In C. A. Espin, K. McMaster, S. Rose, & M. Wayman (Eds.), A measure of success:
The influence of Curriculum-Based Measurement on education (pp. 7-23). Minneapolis,
MN: University of Minnesota Press.
References
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation on student achievement: A
meta-analysis. Exceptional Children, 53, 199-208.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New
York, NY: Routledge.
Jenkins, J. R., & Fuchs, L. S. (2012). Curriculum-Based Measurement: The paradigm, history, and legacy. In
C. A. Espin, K. McMaster, S. Rose, & M. Wayman (Eds.), A measure of success: The influence of
Curriculum-Based Measurement on education (pp. 7-23). Minneapolis, MN: University of
Minnesota Press.
Yeh, S. S. (2007). The cost effectiveness of five policies for improving achievement. American Journal of
Evaluation, 28, 416-436.