University of Iowa
Iowa Research Online
Theses and Dissertations
Summer 2016

Use of brief experimental analysis to identify early literacy interventions in students with letter-sound correspondence deficits

Jennifer Lynn Kuhle
University of Iowa

Copyright 2016 Jennifer Lynn Kuhle
This dissertation is available at Iowa Research Online: http://ir.uiowa.edu/etd/2103

Recommended Citation
Kuhle, Jennifer Lynn. "Use of brief experimental analysis to identify early literacy interventions in students with letter-sound correspondence deficits." PhD (Doctor of Philosophy) thesis, University of Iowa, 2016. http://ir.uiowa.edu/etd/2103.
USE OF BRIEF EXPERIMENTAL ANALYSIS TO IDENTIFY EARLY LITERACY
INTERVENTIONS IN STUDENTS WITH LETTER-SOUND CORRESPONDENCE
DEFICITS
by
Jennifer L. Kuhle
A thesis submitted in partial fulfillment
of the requirements for the Doctor of
Philosophy degree in Psychological and Quantitative Foundations
in the Graduate College of
The University of Iowa
August 2016
Thesis Supervisors: Associate Professor Kristen Missall
Professor Stewart Ehly
Copyright by
JENNIFER LYNN KUHLE
2016
All Rights Reserved
Graduate College
The University of Iowa
Iowa City, Iowa
CERTIFICATE OF APPROVAL
____________________________
PH.D. THESIS
_________________
This is to certify that the Ph.D. thesis of
Jennifer Lynn Kuhle
has been approved by the Examining Committee for
the thesis requirement for the Doctor of Philosophy degree
in Psychological and Quantitative Foundations at the August 2016 graduation.
Thesis Committee:
____________________________________________
Kristen Missall, Thesis Supervisor
____________________________________________
Stewart Ehly, Thesis Supervisor
____________________________________________
Brenda Bassingthwaite
____________________________________________
Kathryn Gerken
____________________________________________
John Westefeld
The more that you read, the more things you will know.
The more that you learn, the more places you’ll go.
Dr. Seuss
ACKNOWLEDGEMENTS
There are so many people who have assisted me throughout my graduate school
career that I am afraid my words may not be adequate to convey my level of appreciation and
gratitude. First, an enormous thanks to my committee: Kristen Missall, Stewart Ehly,
Brenda Bassingthwaite, Kit Gerken, and John Westefeld. I appreciate the edits, feedback,
and comments that helped to make this a better project. Each of you has supported me in
reaching the finishing line, and for that I am thankful.
To my advisor, Kristen. Since my first year at Iowa, you have been a wonderful
mentor. Whether near or far, you were always willing to take a phone call, answer an
email, or sit down and talk. Many difficult decisions, both personal and professional, were
discussed in your office and with your guidance, they all seemed a bit easier to make.
Your support has never wavered or gone unnoticed. Thank you.
Brenda, I owe you an enormous thank you for all the dissertation and professional
support you have provided, especially within the last few months. You readily answered
questions at the final hour and helped to keep the panic at bay. Much beyond dissertation,
I have always appreciated your advice. You have repeatedly given me new perspectives on
situations and the much needed reminder that no decision is permanent.
To TJ Schneckloth and Dave Martin, thank you for opening the doors of your
schools to me. Without a second thought, you allowed me to work with your students and
teachers in completing this project. To Tiffany Steverson, without your help, this project
would not have been possible. You are a phenomenal teacher and an even better friend.
To my parents. It is not possible to come close in expressing my love and deepest
appreciation for your support over the last 6+ years. Dad, I am happy to have your no-nonsense
attitude and sense of humor. You have kept me laughing along the way with good
reminders that there is life beyond this project and school. But understand. This was no
small essay. Mom, please know that there is no way I could have done this without you. In
many instances, you knew what I needed to hear, even before I did. I couldn’t ask for a
better mother or friend. Thank you, a thousand times over.
To Ken. You have supported me in more ways than I can count and really have
been a partner in all this work. From a fully stocked refrigerator to consistent offers to
help in any way possible, you have been wonderful. I am lucky to have you in my life.
Thanks for the ever present reminder that I could and would finish. I am beyond excited
to see what our next adventure brings.
To my wonderful friend family. Each and every one of you has given me support
and endless hours of comfort and laughs. Allison, you are as close to a sister as I will ever
have. Thank you for constantly being my cheerleader, even when I was intent on being a
grouch. You are my favorite person.
Nicole and Shannon, you are both in my fondest grad school memories. It’s great
to know I left this experience with such close friends. I wish nothing but the best for each
of you. Nicole, beyond being a wonderful friend, roommate, and my preferred dance
partner, you graciously collected my IOA data. This was no small feat and I can’t thank
you enough. Shannon, although we are miles apart, we always catch up as if no time has
passed. I continue to look forward to our phone calls and visits.
To all of my comedy friends, thank you. You are all wonderful, hilarious people!
There is no doubt that improv and stand-up got me through endless hours of writing and
edits. Finally, to my “Staying on Track” group. Thank you so much for pushing me
through the final months of writing. I continue to look forward to our lunch meetings and
the encouragement we bring each other. I am proud of all of us!
ABSTRACT
A Brief Experimental Analysis (BEA) is used to quickly and simultaneously
evaluate two or more interventions so that the most effective intervention is selected for
ongoing implementation (Daly, Witt, Martens, & Dool, 1997; Martens & Gertz, 2009).
Oral reading fluency interventions have been successfully evaluated using a BEA, yet
minimal research studies have evaluated early literacy interventions within this context
(Daly, Martens, Hamler, Dool, & Eckert, 1999; Eckert, Ardoin, Daly, & Martens, 2002;
McComas & Burns, 2009). The primary goal of the current study was to examine the
effectiveness of a BEA in selecting a letter-sound correspondence intervention for
individual students. A comparison of early intervention strategies was also completed as
part of an extended analysis. The study was conducted in two phases with three
kindergarten students.
First, a BEA was used to evaluate performance-based and skill-based
interventions designed to increase letter-sound correspondence in three kindergarten
students. Specifically, four experimental conditions were evaluated: baseline, reward,
incremental rehearsal (IR) + reward, and systematic incremental rehearsal (SIR) +
reward. Effectiveness of the interventions was measured using early literacy curriculumbased measurement probes. Following the BEA, an extended analysis was completed in
which IR + reward and SIR + reward were both implemented with each student to
compare effectiveness and evaluate whether the BEA identified the more powerful
intervention to improve letter-sound correspondence.
Results indicated that there was minimal differentiation across BEA conditions
for all three participants. It appears that letter sound fluency (LSF) probes were not sensitive enough to measure
growth or progress in the BEA. As suggested by Petursdottir and colleagues (2014),
individualized probes may be required when completing a BEA of early literacy skills.
During the extended analysis, all three participants made gains in letter-sound
correspondence with SIR and IR interventions. When comparing the two interventions,
participants appeared to make more immediate gains with SIR. Overall, both
interventions appeared to be viable options for teaching students letter-sound
correspondence.
PUBLIC ABSTRACT
Letter-sound correspondence, or the ability to understand that specific letters
relate to specific sounds, is a necessary skill before students can learn to read.
Incremental rehearsal (IR) and systematic incremental rehearsal (SIR) are both designed
to target letter-sound correspondence. However, it can be difficult to select the most
effective intervention for a student when there are multiple options available. A Brief
Experimental Analysis (BEA) allows numerous interventions to be simultaneously
evaluated, with the most effective intervention selected for ongoing implementation.
Research has demonstrated that a BEA is effective in selecting oral reading interventions,
but minimal research has examined if a BEA can identify an intervention for early
literacy skills.
The current study evaluated whether a BEA could be used to select a letter-sound
correspondence intervention for three kindergarten students. Four conditions were
evaluated within the BEA: baseline, reward, IR + reward, and SIR + reward. Following
the BEA, the effects of IR and SIR were compared within each student. Results indicated
that the BEA did not result in differentiation across experimental conditions. The published
early literacy probes did not appear to be sensitive enough to student growth to produce
differentiation. However, growth in letter-sound correspondence was seen in all
participants in phase two. Results of phase two suggested that SIR and IR are both
successful interventions for targeting letter-sound correspondence skills.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 INTRODUCTION
  Early Literacy
  Early Literacy Interventions
  Incremental Rehearsal
  Brief Experimental Analysis
  BEA Evaluation of Oral Reading Fluency
  BEA Evaluation of Early Literacy Skills
  Purpose of the Current Study
CHAPTER 2 LITERATURE REVIEW
  Incremental Rehearsal
  Incremental Rehearsal for Letter-Sound Correspondence
  Brief Experimental Analysis
  Brief Experimental Analysis with Early Literacy Skills
  The Significance of the Present Study
CHAPTER 3 METHOD
  Participants
  Measures
  Procedures
  Data Analysis
CHAPTER 4
  Use of BEAs to Identify Effective Letter-Sound Correspondence Interventions
  Extended Analysis Evaluating SIR and IR Interventions
CHAPTER 5 DISCUSSION
  Summary of General Findings
  Use of BEA to Identify Effective Letter-Sound Correspondence Intervention
  Extended Analysis Evaluating SIR and IR Interventions
  Contributions of a BEA and Intervention to Letter-Sound Correspondence Skills
  Possible Explanations of Findings and Suggestions for Future Research
  Limitations and Practical Implications
REFERENCES
APPENDIX A. HISTORICAL VIEW OF READING DEVELOPMENT
APPENDIX B. BASELINE LSF TASK ANALYSIS
APPENDIX C. BASELINE NWF TASK ANALYSIS
APPENDIX D. REWARD TASK ANALYSIS
APPENDIX E. IR + REWARD TASK ANALYSIS
APPENDIX F. SIR + REWARD TASK ANALYSIS
x
LIST OF TABLES

Table 1. Model of Emergent Literacy Skills and Corresponding Definitions (Modified from Whitehurst & Lonigan, 1998, p. 850)
Table 2. Phases of the Study
Table 3. Student Demographics
Table 4. Order of Intervention and Assessment Sessions during Extended Analysis Treatment
Table 5. Izzy’s BEA-Selected Interventions Based on Measure and Analysis
Table 6. Sally’s BEA-Selected Interventions Based on Measure and Analysis
Table 7. Jill’s BEA-Selected Interventions Based on Measure and Analysis
Table 8. Izzy’s Performance on Test of Letter Sounds Probes Following Intervention
Table 9. Izzy’s Performance on NWF Probes Following Intervention
Table 10. Sally’s Performance on Test of Letter Sounds Probes Following Intervention
Table 11. Sally’s Performance on NWF Probes Following Intervention
Table 12. Jill’s Performance on Test of Letter Sounds Probes Following Intervention
Table 13. Jill’s Performance on NWF Probes Following Intervention
Table 14. Interventions Identified as Most Effective During BEA Based on Measure and Analysis
LIST OF FIGURES

Figure 1. Results of Izzy’s BEA and flashcard presentation following skill-based sessions.
Figure 2. Results of Sally’s BEA and flashcard presentation following skill-based sessions.
Figure 3. Results of Jill’s BEA and flashcard presentation following skill-based sessions.
Figure 4. Results of Izzy’s extended baseline and treatment performance across LSF, LSE, CLS, and WWR.
Figure 5. Izzy’s results of SIR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
Figure 6. Izzy’s results of IR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
Figure 7. Results of Sally’s extended baseline and treatment performance across LSF, LSE, CLS, and WWR.
Figure 8. Sally’s results of SIR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
Figure 9. Sally’s results of IR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
Figure 10. Results of Jill’s extended baseline and treatment performance across LSF, LSE, CLS, and WWR.
Figure 11. Jill’s results of SIR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
Figure 12. Jill’s results of IR on targeted unknown letters as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).
CHAPTER 1
INTRODUCTION
Reading is the ability to understand and learn from written text (Scarborough,
2001; Torgesen, 2002) and is critical for success in academics, occupational pursuits,
and lifelong learning (Lonigan, Burgess, & Anthony, 2000; Snow, Burns & Griffin,
1998). The current national reading climate is grim as students continue to underperform.
In 2011, the National Reading Progress Report showed that only 34% of fourth graders
and 34% of eighth graders were considered proficient in reading (National Center for
Education Statistics, 2011). Two years later, in 2013, minimal progress in reading
proficiency had occurred. Only 35% of fourth graders and 36% of eighth graders were
considered proficient readers (National Center for Education Statistics, 2013). While
some growth had been made, there is an obvious need for improvement in the nation’s
reading skills.
One must learn to read before one can read to learn (Lesnick, George, Smithgall,
& Gwynee, 2010). Learning to read does not develop organically but requires ongoing
formal instruction, typically until the end of third grade (Fiester, 2010). Students who do
not develop the necessary reading skills or remediate reading problems before the end of
third grade may continue to struggle and never reach the point of reading to learn
(Torgesen, 2002).
Early difficulties in learning to read can be related to later problems in academic
achievement. A study by Juel (1988) found that first-grade reading ability strongly
predicted fourth-grade reading ability. Students who entered first grade with poor reading
skills remained poor readers in fourth grade, while the good readers in first grade
remained good readers in fourth grade. Snow et al. (1998) indicated that poor reading at
the end of third grade increased the likelihood that a student will not graduate high
school.
Scarborough (2001) suggested that as many as 65-75% of children with reading
difficulties early in their education will continue to experience reading difficulties
throughout schooling, while only 5-10% of students who are strong readers in early
grades will struggle later in reading. Lonigan et al. (2000) asserted that students with poor
early reading skills have an increased risk of qualifying for special education services.
Spira, Bracken, and Fischel (2005) presented similar evidence of achievement stability
when examining a sample of students who read below the 30th percentile in first grade.
Reading achievement was measured through the fourth grade and 70% of the students
had consistent reading deficits throughout the elementary school years, while only 30%
of the students made steady improvement across the grades (Spira et al., 2005). By the
end of second grade, the distinction between reading improvement and reading non-improvement was established, again demonstrating the impact of early reading
difficulties (Spira et al., 2005). Using six longitudinal data sets, Duncan et al. (2007)
demonstrated continuity between early and late reading skills with school-entry reading
skills one of the strongest predictors of reading in the third and fifth grades. The strength
of the prediction was maintained even after family and child demographic variables were
controlled. Based on these results, Duncan et al. (2007) argued that improving reading
outcomes should focus on early reading skills that develop at the time of school entry.
Kutner et al. (2007) demonstrated that high levels of adult literacy, or the ability
to read and write, were associated with more positive life outcomes. For example, adults
with high levels of literacy are more likely to have full-time employment and higher
wages than adults with low levels of literacy (Kutner et al., 2007). Lesnick et al. (2010)
presented similar evidence from the Chicago Public School system. Students’ third-grade
reading level significantly predicted eighth-grade reading level, and students who read
above grade level in third grade enrolled in college at higher rates than their lower-performing peers (Lesnick et al., 2010). Further, eighth-grade reading level predicted
overall ninth-grade performance, which in turn related to rates of high school graduation
and college enrollment (Lesnick et al., 2010). As indicated by Lesnick and colleagues
(2010), the values and consequences of reading reach far beyond the classroom and
educational setting. Literacy plays a critical role in academic, social, and economic
outcomes (Snow et al., 1998). Given these connections, there are obvious and serious
implications for students who fall behind their peers in reading or leave school unable to
read.
As already demonstrated, difficulties in later reading and academic achievement
can be predicted from an early age (Duncan et al., 2007; Juel, 1988; Scarborough, 2001).
Educational and professional outcomes can similarly be predicted from reading skills as
early as third grade (Lesnick et al., 2010; Snow et al., 1998). Thus, it is critical to
intervene early during the development of reading and ensure that students are obtaining
the fundamental reading skills to ensure proficiency. By intervening early, students are
less likely to encounter the range of academic, professional, and occupational outcomes
that are associated with poor reading.
Early Literacy
Children are exposed to numerous literacy and language concepts from birth until
approximately five years old during a period of emergent literacy. Whitehurst and
Lonigan (1998) presented a theoretical model of emergent literacy that describes literacy
development as a continuum beginning early in life and in the absence of formal
instruction. For a historical view of reading development prior to Whitehurst and
Lonigan (1998) please see Appendix A.
As illustrated in Table 1, within the model of emergent literacy there are two
specific processes necessary for later reading achievement: outside-in processes and
inside-out processes (Whitehurst & Lonigan, 1998). Each of these processes represents
distinct skills: outside-in processes represent oral language skills and inside-out processes
represent code-based skills. Each of the skills within these processes is present at
different times in the development of emergent literacy and is the result of specific
literacy experiences. Extending Whitehurst and Lonigan’s (1998) theoretical model for
emergent literacy, the National Early Literacy Panel (NELP; Lonigan, Schatschneider, &
Westberg, 2008) sought to provide empirical support for the elements of emergent
literacy. The goal of the NELP was to identify which early reading skills and abilities in
children from birth to 5 years old predict later literacy achievement and to determine the
strength of these predictions (Lonigan et al., 2008). A meta-analysis of approximately
300 studies was completed to identify the emergent skills essential for literate lives
(Lonigan et al., 2008).
Table 1
Model of Emergent Literacy Skills and Corresponding Definitions (Modified from Whitehurst & Lonigan, 1998, p. 850)

Outside-In Processes:
  Oral Language: Expressive and receptive language and use of syntax and grammatical rules
  Conventions of Print: Knowledge of book format and how to apply format when reading
  Narrative: Understanding the way in which stories typically proceed and ability to formulate stories
  Emergent Reading: Pretend reading and understanding that text is used to convey a message

Inside-Out Processes:
  Letter-Naming Knowledge: Identification of letters by name when in print
  Phonological Awareness: Competence in recognizing and orally manipulating spoken sound units
  Letter-Sound Correspondence: Awareness that specific letters have specific sounds
  Emergent Writing: Phonetic spelling

Additional Factors:
  Phonological Memory: Short-term memory for orally presented information
  Rapid Naming: Rapid naming of items from a given category
  Print Motivation: Interest in shared reading
Results of the analysis concluded that 11 total early literacy variables predict later
literacy achievement: 6 early literacy skills are linked strongly to later literacy and 5
additional early literacy skills are moderately correlated or considered “potentially
important variables.”
The results of the NELP report (Lonigan et al., 2008) were consistent with the
previously identified emergent literacy skills described by Whitehurst and Lonigan
(1998) as necessary for later literacy success.
As reported by the NELP meta-analysis, the inside-out processes identified by
Whitehurst and Lonigan (1998) were most strongly related to later literacy (Lonigan et
al., 2008). These code-related skills enabled children to understand the alphabetic
principle, which is defined as the knowledge of names and sounds associated with letters,
or commonly referred to as letter naming and letter-sound correspondence (Cabell,
Justice, Konold, & McGinty, 2011; Lonigan, Purpura, Wilson, Walker, & Clancy-Menchetti, 2013; Whitehurst & Lonigan, 1998). Letter-sound correspondence, also called
phoneme-grapheme correspondence, refers to a student’s understanding that specific
letters relate to specific sounds (Whitehurst & Lonigan, 1998). This skill begins to form
around five years of age or the beginning of kindergarten; by the end of kindergarten,
students are able to demonstrate letter-sound correspondence with the majority of the
alphabet (Snow et al., 1998). According to Casey and Howe (2002), students who
could articulate 40 letter-sound relationships in one minute by first grade were at an advantage
over peers. Fuchs and Fuchs (2004) indicated that students should identify 35 correct
letter sounds per minute by the end of kindergarten to demonstrate grade-level skills in
letter-sound correspondence. Within the NELP report, letter-sound correspondence was part of
the larger alphabet knowledge category and was correlated strongly with later literacy
achievement (Lonigan et al., 2008).
The strong relationship between letter-sound correspondence and later literacy
achievement is seen repeatedly throughout the literature. In a study by Schatschneider,
Fletcher, Francis, Carlson, and Foorman (2004) measures of letter-sound correspondence
used in kindergarten were good predictors of reading performance in first and second
grade. Hammill (2004) conducted a meta-analysis of 450 studies and identified letter
knowledge, including letter naming and letter-sound correspondence, as one of the best
predictors of later reading. Ehri (2005) further described letter-sound correspondence as
the foundation for decoding and subsequent oral reading fluency. As students are able to
understand the connection between letters and sounds, decoding becomes easier and leads
to fluent reading. Recently, Hulme, Bowyer-Crane, Carroll, Duff, and Snowling (2012)
completed a mediation analysis to determine whether letter-sound correspondence and
phonemic skills were causal influences on word-level literacy. Results indicated that
letter-sound correspondence and phonemic skills were critical and causal influences to
early reading development (Hulme et al., 2012). Hulme et al. (2012) argued that students
should be taught letter-sound correspondence and phoneme skills directly to ensure
competency and later reading success.
As illustrated, letter-sound correspondence is vital to the development of reading
(Ehri, 2005; Hammill, 2004; Hulme et al., 2012; Piasta, Justice, McGinty, & Kaderavek,
2012; Schatschneider et al., 2004). Since letter-sound correspondence is the highest-level
emergent literacy skill, students must demonstrate success in this area before
transitioning into conventional reading. Students who display letter-sound
correspondence deficits are likely to continue to struggle and make little progress toward
the development of fluent reading. Thus, early literacy intervention, particularly with
letter-sound correspondence, is paramount to later reading success.
Early Literacy Interventions
As part of the meta-analysis, the NELP also sought to identify interventions and
programs that were effective in increasing early literacy skills. Results of the meta-analysis concluded that code-focused interventions were the most effective at improving
precursor skills predictive of later literacy growth and conventional skills (Lonigan et al.,
2008). Given this, the primary focus of this review will be code-based interventions.
Code-based interventions evaluated within the NELP meta-analysis emphasized
phonological awareness skills, alphabetic knowledge, and early decoding or phonics
skills within a one-on-one or small group setting. More specifically, phonological
awareness interventions focused on the identification and manipulation of sounds within
the words (Ehri, Nunes, Stahl, & Willows, 2001; Lonigan et al., 2013; Lonigan et al.,
2008). Alphabetic knowledge interventions focused on letter name and letter-sound
knowledge (Aram, 2006; Lafferty, Gray, & Wilcox, 2005; Lonigan et al., 2013; Piasta &
Wagner, 2010b; Volpe, Burns, DuBois, & Zaslofsky, 2011). Decoding and phonics
interventions focused on letter-sound correspondence and the use of the alphabetic
principle to decode (Hatcher, Hulme, & Snowling, 2004; Vadasy & Sanders, 2008).
Supplementary analyses examined whether outcomes from code-based
interventions changed as a function of children’s age or literacy skills prior to the
intervention. Overall, the results of the additional analysis concluded that code-based
interventions made strong, statistically significant changes in children’s skills in
phonological awareness, alphabetic knowledge, oral language, reading and spelling
regardless of age or previous reading level (Lonigan et al., 2008).
While the NELP meta-analysis demonstrated that code-based interventions
positively impact early literacy skills, Piasta and Wagner (2010a) argued that the NELP
meta-analysis did not sufficiently examine alphabetic knowledge interventions or
instruction, resulting in a number of limitations to the report. First, the NELP meta-analysis excluded all studies that contained participants older than kindergarten even
though instruction in alphabetic knowledge often continues for students demonstrating
skill deficits. The NELP meta-analysis also reported overall effect sizes based on all
code-based interventions without examining the specific impact of individual
instructional components on alphabetic knowledge (Piasta & Wagner, 2010a). Last, when
outcome variables and intervention categories were created and evaluated within the
NELP meta-analysis, alphabetic knowledge was classified as one construct without
differentiation between the discrete skills within the construct including letter naming,
letter-sound correspondence, and letter writing.
Given the range of limitations, Piasta and Wagner (2010a) conducted an
additional meta-analysis, focusing exclusively on alphabetic knowledge and addressing
the limitations of the NELP meta-analysis by changing study inclusion criteria and
examining discrete outcomes. Piasta and Wagner (2010a) specifically examined the
impact of alphabetic knowledge interventions on alphabetic knowledge outcomes, but
disambiguated the construct by individually examining letter naming, letter-sound
correspondence, and letter writing. No studies were excluded from the meta-analysis
based on student grade (Piasta & Wagner, 2010a). Results found that alphabetic
knowledge interventions had positive effects on alphabetic knowledge outcomes (Piasta
& Wagner, 2010a). In all but one study, letter-sound correspondence interventions
improved letter-sound correspondence regardless of whether or not phonological
awareness training was part of the intervention. Letter-naming interventions were also
related to improvement in letter-sound correspondence, although to a lesser degree.
The results of the NELP meta-analysis and Piasta and Wagner’s (2010a) follow-up study demonstrated that code-based interventions consistently had a positive effect on
predictors of later reading including phonological awareness, alphabetic knowledge and
oral language (Lonigan et al., 2008). More specifically, Piasta and Wagner (2010a)
demonstrated that letter-sound correspondence and letter naming could improve with
directed alphabetic knowledge instruction. Overall, the strong effects of code-based
interventions were evident across demographic variables including age and previous
reading level. Because the effects remained robust, it is evident that code-based
interventions were effective for students across the reading continuum, including
students lacking alphabetic knowledge skills such as letter-sound correspondence.
Although evidence suggests that code-based interventions are the best method to
improve early literacy skills, there is limited direction on how to specifically teach
individual alphabetic skills (Castles, Coltheart, Wilson, Valpied, & Wedgwood, 2009;
Lonigan et al., 2008; Piasta & Wagner 2010a). Armbruster, Lehr, Osborn, and Adler
(2001) stated that “systematic and explicit” instruction was the most effective method to
teach letter-sound correspondence, but provided no further guidance. While the
importance of letter-sound correspondence is evident, there is an apparent gap in the
research as few discrete interventions or instructional components can be identified to aid
in the specific teaching of letter-sound correspondence. Incremental rehearsal is one
intervention that has been identified as a discrete intervention that can be used to increase
knowledge of letter-sound correspondence (DuBois, Volpe, & Hemphill, 2014; Kupzyk,
Daly, & Andersen, 2011; Peterson et al., 2014; Volpe et al., 2011).
Incremental Rehearsal
Incremental rehearsal (IR) is a drill technique designed to teach unknown fact-based information by exposing students to a ratio of known content to unknown content.
Unknown content is systematically introduced within the known content providing
students with many opportunities to respond and repeated exposure to unknown material
(DuBois et al., 2014; Joseph, 2006; MacQuarie, Tucker, Burns, & Hartman,
2002).
MacQuarie et al. (2002) and Nist and Joseph (2008) compared the impact of IR to
other drill techniques for teaching unknown sight words to general education students. In
both studies, IR was the most effective at increasing sight word knowledge. Burns and
Boice (2009) also determined that IR was the most effective drill technique for teaching
sight words to seventh- and eighth-grade students with learning or intellectual disabilities.
Researchers have recently expanded the use of IR beyond sight words to early
literacy skills and results of these studies further demonstrated the effectiveness of this
intervention and provided a guide for letter-sound correspondence remediation. Several
studies have highlighted IR as an effective method to teach letter-sound correspondence
to early elementary students (DuBois et al., 2014; Peterson et al., 2014; Volpe et al.,
2011). While IR has demonstrated effectiveness with letter-sound correspondence,
Kupzyk et al. (2011) suggested that small changes in IR procedures could bolster the
already present impact. Kupzyk and colleagues (2011) created a modified version of IR,
called Systematic Incremental Rehearsal (SIR) in which only unknown content was used
and the addition of content was based on student responses rather than a prescribed
sequence. Results suggested that procedural changes may improve upon an already
effective intervention should it be applied to letter-sound correspondence (Kupzyk et al.,
2011). As described, both IR and SIR are interventions that can provide the “systematic
and explicit” instruction necessary to teach letter-sound correspondence (Armbruster et
al., 2001).
Brief Experimental Analysis
Often one academic skill can be targeted by multiple, but different interventions.
If each intervention can provide effective empirically-based strategies for the academic
skill, a single intervention must be selected for sustained implementation. While it is
beneficial to have a variety of interventions available to address an academic need, it is
necessary to recognize that students will have idiosyncratic responses to each
intervention, even if targeting the same academic skill. Since students respond
differentially to interventions it becomes difficult to determine the most effective
evidence-based strategy for any one student (Noell, Freeland, Witt, & Gansle, 2001). As
a result, practitioners need a method to quickly and successfully evaluate the effects of an
intervention prior to long-term implementation.
A Brief Experimental Analysis (BEA) uses single-subject design to identify the
most effective intervention for a single student from a group of potential interventions
(Martens & Gertz, 2009). Within a BEA, selected interventions are introduced and
removed quickly to determine which provides the most immediate and largest
improvement on target behaviors or skills. This allows for multiple interventions to be
tested rapidly on the academic performance of an individual student to determine the
instructional components that are improving academics (Daly et al., 1997; McComas &
Burns, 2009). When evaluating multiple interventions within a BEA, each intervention
must be distinct enough for clear implementation and demonstrate an immediate impact
on the target behavior or skill (Martens & Gertz, 2009). For this reason, a BEA cannot
evaluate intervention programs implemented over multiple weeks.
BEA is particularly useful in an academic setting as there are numerous reasons a
student may perform below grade level. As Daly et al. (1997) acknowledged, academic
performance deficits typically result from lack of skill or lack of motivation. A BEA
could systematically evaluate interventions directed at each of the aforementioned
reasons for poor performance, with the ultimate goal being the identification “of the
interventions that produce the largest outcomes…” (Daly et al., 1997, p. 11). For this
reason, a BEA can be a practical and versatile tool as decisions about interventions can be
made quickly without a large investment in time or resources.
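The decision rule at the heart of a BEA can be illustrated with a short sketch. The condition names and scores below are hypothetical, and this is only a toy illustration of the selection logic, not a full single-subject analysis:

```python
def select_intervention(baseline, condition_scores):
    """Pick the briefly tested condition with the largest gain over
    baseline on the target measure (e.g., correct responses per minute).
    A toy illustration of the BEA decision rule, not a full analysis."""
    gains = {name: score - baseline for name, score in condition_scores.items()}
    best = max(gains, key=gains.get)
    return best, gains

# Hypothetical brief-trial scores for one student:
best, gains = select_intervention(
    baseline=12,
    condition_scores={"skill-based": 20, "performance-based": 14, "combined": 25},
)
# best == "combined"; each gain is the condition score minus baseline
```

In practice the comparison is made visually across repeated brief trials rather than from single scores, but the underlying question is the same: which condition produces the largest immediate improvement over baseline?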
BEA Evaluation of Oral Reading Fluency
Research has supported the use of BEA with a range of academic subjects, but
most frequently with a focus on interventions to improve oral reading fluency (ORF). An
early study by Daly et al. (1999) used a BEA to examine reading interventions for ORF in
four general education students. ORF was measured in number of correct words read per
minute (CWPM) from instructional and high content overlap (HCO) passages. Using a
BEA, an individualized intervention package was selected for each student in which ORF
improved over baseline in instructional passages and HCO passages.
Eckert, Ardoin, Daisey, and Scarola (2000) also evaluated ORF interventions
using a BEA with four general education students. A maximum of seven intervention
conditions were evaluated within the BEA using both skill-based and performance-based
interventions. Eckert et al. (2000) were able to select effective ORF interventions or
intervention packages for each of the students using results from the BEA. Similarly,
Noell et al. (2001) used a BEA to compare the effectiveness of individual and combined
performance-based and skill-based ORF interventions with four general education
students. Each brief analysis was under one hour and assessed the student’s reading
across five levels of difficulty: letter sounds, sight words, first-grade prose, second-grade
prose, and third-grade prose. Letter-naming fluency was a separate measure. Using the
BEA, specific fluency interventions were identified for all four students, yet the
identification of letter sound and sight word interventions did not occur for all students as
one student encountered a ceiling effect during the BEA. The selected interventions
varied across individuals and academic skill (Noell et al., 2001). Research continued to
demonstrate that a BEA of ORF skills measured the idiosyncratic responses to
interventions and subsequently identified the most appropriate intervention for individual
students (Eckert et al., 2002; Schreder, Hupp, Everett, & Krohn, 2012).
Jones and Wickstrom (2002) used a BEA to select an ORF intervention for five
elementary students and implemented the selected intervention during an extended
analysis. During the BEA, differential responding occurred across all students, and a skill-based ORF intervention was selected for each participant. In the extended analysis, the
intervention was implemented and effects were measured using instructional and
generalization passages. Results of the extended analysis indicated that all students
except one increased, on average, 20% over baseline in instructional passages. Similar
intervention effects were observed when generalization passages were used indicating
that ORF gains have the potential to be widespread and stable, the ultimate goal of any
intervention (Jones & Wickstrom, 2002). McComas and colleagues (2009) illustrated
similar levels of success when a BEA-selected ORF intervention was implemented long-term in three general education students. All students demonstrated growth in ORF as a
result of the intervention.
McComas and Burns (2009) noted that although BEA research is advancing, it is
still in its infancy and the scope has remained relatively narrow in academics. BEA work
in the area of reading continues to focus primarily on oral reading fluency (ORF) with
few changes in experimental design or methodology although research has demonstrated
numerous benefits with a brief analysis. For example, a BEA is flexible as it allows
individual interventions or intervention packages targeting various deficits (i.e.,
performance or skill) to be evaluated quickly (Daly et al., 1997, 1999; Eckert et al., 2000,
2002; Jones & Wickstrom, 2002; McComas et al., 2009; Noell et al., 2001; Schreder et
al., 2012). According to Martens and Gertz (2009), a BEA is also cost effective and can
be implemented with ease in a school setting. Thus, it is ideal to continually expand this
methodology beyond the current limits of reading fluency to include other academic
areas. Yet, few studies have evaluated the use of BEA with early or emergent literacy
skills (i.e., decoding, blending, etc.). To date, only four published studies are available
that examine early literacy interventions using a BEA (Daly, Chafouleas, Persampieri,
Bonfiglio, & LeFleur, 2004; Daly, Johnson, & LeClair, 2009; Noell et al., 2001;
Petursdottir et al., 2009). Even with the small base of support, the results are promising.
BEA Evaluation of Early Literacy Skills
Daly et al. (2004, 2009) extended the BEA literature beyond ORF to phoneme
blending and segmentation within the classroom setting through two related studies. In
Daly et al. (2004) used an experimental analysis to compare a phoneme blending
condition to a control condition and resulted in differentiation across the conditions with
each student. Using a BEA, Daly et al. (2009) identified one student (out of four) who
was not responding to class-wide instruction and provided supplemental instruction to
improve early reading skills. These studies provided initial evidence for the effectiveness
and potential use of a BEA in identifying early literacy interventions (Daly et al., 2004,
2009).
Petursdottir et al. (2009) were able to support the use of the BEA in evaluating
early literacy interventions in kindergarten students with poor letter sound fluency (LSF).
Prior to this study, Noell et al. (2001) were the only ones to evaluate LSF as a dependent
measure within a BEA (Petursdottir et al., 2009). The BEA assessed three interventions
in a hierarchy based on amount of support needed within each intervention (Petursdottir
et al., 2009). Each student responded differently to the interventions, but an effective
intervention was selected from the BEA for each student. Generalization of skills was
also present following sustained implementation of the selected intervention. These
results replicated the findings of McComas et al. (2009) by demonstrating that an
intervention identified via a BEA could improve performance when implemented long-term.
Purpose of the Current Study
Letter-sound correspondence is foundational to later reading skills including
decoding and reading fluency (Ehri, 2005). Students must be competent in letter-sound
correspondence before the transition to conventional literacy can occur (Whitehurst &
Lonigan, 1998). Intervention is critical if a letter-sound correspondence deficit is present.
IR and SIR are both potential interventions that can be used to target letter-sound
correspondence skills. Initial evidence supports the use of BEA with specific early
literacy skills including LSF, blending, and segmenting (Daly et al., 2009; Petursdottir et
al., 2009). However, no study has specifically examined the use of a BEA to evaluate
skill-based interventions (IR and SIR) and performance-based interventions (rewards)
directed at improving letter-sound correspondence.
The general purpose of the current study was to evaluate the effectiveness of a
BEA in selecting a letter-sound correspondence intervention for kindergarten students
displaying letter-sound correspondence deficits. There were two main objectives of this
study. First, the study attempted to determine if a BEA could be used effectively with
emergent literacy skills. The study examined the effectiveness of a BEA in selecting a
letter-sound correspondence intervention that contained skill-based, performance-based,
or both skill and performance-based interventions. Second, the study sought to compare
the effectiveness of two skill-based letter-sound correspondence interventions following
sustained implementation.
The specific research questions for this study were:
1) Can a brief experimental analysis conducted within a school setting effectively
evaluate skill-based and performance-based letter-sound correspondence
interventions in elementary-aged students?
2) When comparing Incremental Rehearsal to Systematic Incremental Rehearsal,
which intervention is more effective at increasing letter-sound correspondence
skills?
CHAPTER 2
LITERATURE REVIEW
This chapter contains a critical review of letter-sound correspondence literature
and brief experimental analysis (BEA) literature pertinent to the development of the
present study. In the last 25 years, approximately 150 articles have been published with
“letter-sound correspondence” in the title or abstract. Within these articles, a range of
subjects is present and includes: phoneme-grapheme correspondence, reading, emergent
literacy, elementary education, teaching methods, phonics, phonological awareness, and
alphabetic principle. As the purpose of the present study is to teach letter-sound
correspondence skills, the review was narrowed and focused solely on articles describing
interventions, programs, or teaching methods for letter-sound correspondence. Of the 150
articles identified, approximately 30 articles described interventions or teaching methods
directed at improvement in letter-sound correspondence. The majority of the 30 studies
examined program-based or large group interventions.
Given that BEA is the proposed method of evaluation for the present study,
individual and discrete interventions must be used. Therefore, 22 program-based studies
were removed from review, resulting in 8 studies that evaluated discrete, individual
interventions or teaching methods for letter-sound correspondence. Within these 8 studies
at least four types of interventions were identified: incremental rehearsal (IR), picture
mnemonics, iPad/computer programs, and prompting. Individual studies were reviewed
and incremental rehearsal was selected for implementation in the present study since it
was an individual intervention that had been used in multiple studies and did not require
additional materials or curriculum.
Within the past 25 years, approximately 26 articles have been published with
“Incremental Rehearsal” in the title or abstract. Even with only 26 articles identified, a
range of subjects was still present and included: instructional effectiveness, teaching
methods, drill (practice), word recognition flashcards, mathematics, reading, and sight
vocabulary. To narrow the articles, only those using IR as a reading intervention were
included in the review. Use of these parameters resulted in 15 IR intervention studies. Of
the 15 studies, only 3 described using IR to improve letter-sound correspondence.
Since 1990, approximately 48 articles have been published with “brief
experimental analysis” in the title or abstract. Of the 48 articles, a range of subjects was
present including: problem behavior, mathematics, writing, reading, and instructional
effectiveness. The majority of the articles (n=33) focused on using BEA to identify
effective reading interventions. Specifically, oral reading fluency was featured in 15 of
the articles and only 1 article was directed at early literacy. Thus, two additional searches
were conducted to identify additional BEA studies directed at early literacy. The search
terms “brief assessments” and “academics” and “experimental analysis” and “academics”
yielded an additional 6 studies on reading, 2 of which focused exclusively on early
reading skills. In summary, 21 articles related to BEA and ORF were identified and 4
studies related to BEA and early literacy skills were identified. To narrow the BEA and
ORF articles, case studies, meta-analyses, and BEA review articles were removed from
the literature review. Included studies evaluated skill-based and performance-based
interventions as part of the BEA. Based on these parameters, 9 BEA and ORF studies and
3 early literacy and BEA studies were selected for review. Thus, the following literatures
are reviewed in detail within this chapter: IR, IR with letter sounds, BEA, and BEA with
early reading skills.
Incremental Rehearsal
IR is a ratio drill technique designed to increase mastery and fluency of unknown
material through repeated exposure and error free learning (Joseph, 2006). IR is typically
conducted using flashcards, and the IR administration process gradually presents
unknown material throughout the practice of known material (Burns & Boice, 2009;
Joseph, 2006; MacQuarie et al., 2002). The repeated exposure to known and unknown
material is a unique benefit of IR and while a range of ratios of known material to
unknown material has been used in the IR literature, the most common was 90% known
material to 10% unknown material.
A specific description of the procedures is as follows. First, an unknown word
(U1) is presented to the student, immediately followed by the presentation of the first
known word (K1). After the presentation of the U1 and K1 sequence another known
word (K2) is added to the end of the sequence and then presented to the student. Thus,
the second step will include the presentation of U1, K1, and K2. Following each
sequence, an additional known word is added until nine known words are presented. As a
result, the final sequence includes: U1, K1, K2, K3, K4, K5, K6, K7, K8, and K9 (Joseph,
2006; MacQuarie et al., 2002).
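Assuming one unknown item and nine known items (the common 90% known to 10% unknown ratio), the presentation rounds just described can be sketched as:

```python
def ir_rounds(unknown, knowns):
    """Build the incremental rehearsal presentation rounds for a single
    unknown item: each round restates the unknown followed by one more
    known item than the round before (per Joseph, 2006; MacQuarie et
    al., 2002). Illustrative sketch only."""
    return [[unknown] + knowns[:i] for i in range(1, len(knowns) + 1)]

rounds = ir_rounds("U1", [f"K{i}" for i in range(1, 10)])
# rounds[0] == ["U1", "K1"]; the final round is U1 followed by K1
# through K9, giving the student nine exposures to U1 across the rounds.
```

The sketch makes the source of IR's high dosage visible: the unknown item reappears in every round, so its number of exposures grows with the number of known items in the ratio.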
An early study conducted by MacQuarie et al. (2002) compared the effectiveness
of IR, drill sandwich or interspersal technique, and traditional flashcard conditions in
teaching third- and seventh-grade students words from the Esperanto International
Language. The Esperanto International Language was selected over the English language
to ensure that students did not have previous exposure to the unknown material. Known
words were needed for IR and drill sandwich conditions and were self-selected by each
student.
Unknown material was taught across three sessions of 9 words each; therefore
every intervention condition was conducted once per student. During interspersal, three
unknown words were interspersed with six known words and each word was presented
individually using flashcards. The sequence repeated until each set of unknown words
was presented three times. After the third repetition, the unknown words were removed
and three new unknown words were interspersed and practiced using flashcards
(MacQuarie et al., 2002). Using traditional flashcards, each unknown word was presented
in succession until three correct responses were provided (MacQuarie et al., 2002).
IR was implemented using a ratio of 90% known words to 10% unknown words.
Word retention was measured at days 1, 2, 3, 7, and 30 following the completion of each
intervention condition. The same unknown words taught in the intervention condition
were presented and students were asked to translate the word. Correct translation was
considered word retention. MacQuarie et al. (2002) found that IR resulted in significantly
better word retention than either drill sandwich or traditional flashcard method in both the
third-grade and seventh-grade students. When using the 90% known to 10% unknown
ratio in the IR condition, there were numerous presentations of each unknown word
(ranging from 9 to 81). MacQuarie et al. (2002) reasoned that the numerous opportunities
to respond to each unknown word allowed for more words to be retained even as the
retention time span increased. Overall, IR was identified as an effective reading
intervention, and one that could be particularly useful when teaching basic or early skills
that require automaticity for later success (MacQuarie et al., 2002). However, it must be
acknowledged that generalization of this study was limited since the unknown material
was words from the Esperanto International Language rather than the English language.
Burns and Boice (2009) replicated the work of MacQuarie et al. (2002) with
students who had an identified learning or intellectual disability. IR, interspersal, and
traditional flashcard conditions were conducted as described by MacQuarie et al. (2002).
Word retention was measured at 1 and 2 weeks post-intervention. Burns and Boice
(2009) found that IR resulted in more words retained than interspersal or traditional
flashcard methods. Similar to MacQuarie et al. (2002), IR was the condition with the
most opportunities to respond and resulted in the highest levels of retention. Again, there
was limited generalization as the unknown material was from the Esperanto International
Language. Regardless, MacQuarie et al. (2002) and Burns and Boice (2009) demonstrated
that IR was an effective intervention that could be used with a range of students.
Additional evidence has been presented supporting the use of IR (Joseph, 2006;
Nist & Joseph, 2008). Joseph (2006) demonstrated the effectiveness of IR in teaching
high-frequency sight words to second-grade students who were already receiving Title I
reading services. IR was conducted using a ratio of 90% known words to 10% unknown
words. Retention of unknown words was measured 1, 2, and 3 days post-intervention and
generalization was measured using a reading passage containing the unknown words.
Overall, 90% of the words were retained 3 days after the intervention concluded and
students were able to read an average of 90% of the words in the generalization passage
(Joseph, 2006). These results illustrated high levels of retention and generalization.
According to Joseph (2006), IR provided more exposure or practice than other flashcard
methods and allowed for systematic and explicit instruction, resulting in high levels of
retention. While there were obvious advantages to IR over other flashcard methods,
Joseph (2006) acknowledged that IR may be a time-consuming intervention to
implement.
Nist and Joseph (2008) examined the time-consuming nature of IR when
evaluating instructional effectiveness and instructional efficiency of IR, interspersal
technique, and traditional flashcards when teaching high-frequency words to four first-grade students. Instructional effectiveness was based on the number of unknown words
retained. Instructional efficiency was defined as “cumulative rate of words retained per
instructional time” (Nist & Joseph, 2008, p. 294).
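The distinction between effectiveness and efficiency can be made concrete with hypothetical numbers: a method that retains fewer words overall can still retain more words per instructional minute.

```python
def instructional_efficiency(words_retained, minutes):
    """Nist and Joseph's (2008) efficiency metric: words retained per
    instructional minute. The values below are invented for illustration."""
    return words_retained / minutes

ir_eff = instructional_efficiency(18, 30)     # IR: more words, more time
flash_eff = instructional_efficiency(12, 10)  # flashcards: fewer words, less time
# flash_eff exceeds ir_eff even though IR retained more words overall.
```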
Approximately 200 words were selected from storybooks and high-frequency
word lists and printed on flashcards. Flashcards were used to identify known and
unknown material during the pre-assessment. Each word was presented to the student
twice in random order. Words read correctly within 3s of both presentations were
considered known and words not read correctly within 3s of presentation were considered
unknown. Words that were read correctly in only one trial were removed and not used in
the study.
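The two-trial classification rule can be sketched as follows; the word list and outcomes are invented for illustration, and each trial result already folds in the 3 s criterion:

```python
def classify(words_trials):
    """Apply Nist and Joseph's (2008) pre-assessment rule: correct within
    3 s on both presentations -> known; incorrect on both -> unknown;
    mixed results -> excluded from the study. Each value is a
    (correct_trial1, correct_trial2) pair of booleans."""
    known, unknown = [], []
    for word, (c1, c2) in words_trials.items():
        if c1 and c2:
            known.append(word)
        elif not c1 and not c2:
            unknown.append(word)
        # words correct on only one trial are dropped
    return known, unknown

known, unknown = classify({"the": (True, True), "gnome": (False, False),
                           "ship": (True, False)})
# known == ["the"]; unknown == ["gnome"]; "ship" is excluded.
```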
Three instructional sessions occurred each week. During each session, IR,
interspersal, and traditional flashcards were counterbalanced and implemented to ensure
that each student was exposed to the three intervention conditions. Six unknown words
were taught during each intervention condition. IR procedures mirrored those of
MacQuarie et al. (2002). Retention probes were conducted the day following the session
and measured retention of all unknown words taught in the previous lesson through the
presentation of individual flashcards. Maintenance was measured five days after the final
instructional session was complete and generalization was measured the day after
maintenance. During generalization, unknown words identified as maintained were
presented in sentences. Each sentence contained one unknown word and three to five
known words. Unknown words were considered generalized when they were read
correctly within the context of the sentence.
Nist and Joseph (2008) found that more words were retained from the IR
condition; thus, IR had the greatest instructional effectiveness. However, IR took the
longest to implement and the rate of words retained per instructional minute was higher
in the traditional flashcard method. Therefore, traditional flashcards had the greatest
instructional efficiency (Nist & Joseph, 2008). Nonetheless, the greatest amount of
maintenance and generalization was also found in IR conditions. So while IR may require
additional time to implement, the long-term benefits of maintenance and generalization
may outweigh the lack of instructional efficiency.
While research had established IR as an effective intervention to teach unknown
material, Kupzyk et al. (2011) attempted to bolster the strength of IR through procedural
modifications with an approach termed systematic incremental rehearsal (SIR). SIR
differed from traditional IR in three essential ways: 1) only unknown material was used
during the intervention sequence, 2) antecedent prompts were provided to cue students, 3)
the addition of unknown material to the sequence was based on student performance, not
order of material. Kupzyk et al. (2011) subsequently compared the effectiveness of SIR
to IR in teaching sight-words to four first-grade students. Using both SIR and IR, each
student was taught sight-words across four counterbalanced phases, two phases for each
intervention condition. Each phase consisted of five instructional sessions. In total, each
student had 10 instructional sessions under SIR conditions and 10 instructional sessions
under IR conditions.
Known and unknown words were identified through a sight-word reading list.
Known words were needed for IR conditions only. Words read correctly during the first
presentation of the word lists were considered known. Unknown words were needed for
both SIR and IR conditions. Words read incorrectly during the first presentation were
presented again and, if read incorrectly twice, they were considered unknown. Unknown
words were presented a third time before each intervention session. Only words that were
incorrectly read at all three presentations were taught as an unknown word during
intervention (Kupzyk et al., 2011). Unknown words used during intervention were
printed on flashcards.
Three unknown words were selected for each IR session and 10 unknown words
were selected for each SIR session. IR sessions followed procedures by MacQuarie et al.
(2002) and used the 90% known to 10% unknown ratio. Error correction and corrective
feedback was provided throughout the IR sessions. The IR sequence continued until three
unknown words were taught.
The SIR procedure began with the presentation and subsequent oral model of the
first unknown word (U1) (i.e., “The word is _____.”). The student was prompted to repeat
the word independently. Corrective feedback and another prompt were given if the student
provided an incorrect answer. Following a correct response, the second unknown word
(U2) was presented using the same model and prompting procedure. The presentation
and prompting procedure occurred once more for U1 and U2 before a prompt-delay was
implemented. Once a prompt-delay was in effect, U1 was presented to the student and if
the word was not read correctly within 2s a prompt was provided. If the word was read
correctly within 2s, U2 was presented. When both U1 and U2 were identified correctly
without a prompt, a third unknown word (U3) was added to the sequence. For the first
presentation of U3, modeling, corrective feedback, and error correction were used. Then,
U1 and U2 were presented randomly with a prompt delay, corrective feedback, and error
correction. U1, U2, and U3 were then all presented in random order with a prompt delay,
corrective feedback, and error correction. Once all three unknown words were identified
correctly, a fourth unknown word was added to the sequence. Procedures continued in
this manner until 8 min elapsed and the session ended. SIR sessions were conducted for
the same amount of time it took to complete an IR session. No more than 10 unknown
words were presented in a single SIR session.
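The sequence-building rule described above can be summarized in a brief simulation. The following Python sketch is illustrative only; the function name, the `reads_correctly` callable standing in for the student's response, and the omission of the 8-min session timer are assumptions, not part of Kupzyk et al.'s (2011) procedure.

```python
import random

def sir_session(unknown_words, reads_correctly, max_words=10):
    """Illustrative sketch of the SIR sequence-building logic.

    `reads_correctly(word)` stands in for the student's response under the
    2-s prompt delay (True = correct without a prompt). Modeling, corrective
    feedback, and the 8-min cap are abstracted away; only the rule "add a
    new unknown word once the whole sequence is read correctly" is modeled.
    """
    sequence = []  # unknown words currently in rehearsal
    for word in unknown_words[:max_words]:  # no more than 10 words per session
        sequence.append(word)  # new word is modeled, then joins the sequence
        # Re-present the shuffled sequence until every word is identified
        # correctly under the prompt delay, then move to the next unknown.
        while True:
            random.shuffle(sequence)
            if all(reads_correctly(w) for w in sequence):
                break
    return sequence
```

Because only new (unknown) words enter the sequence, every presentation is a learning trial, which is the mechanism Kupzyk et al. (2011) credit for SIR's efficiency relative to IR.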
During all SIR sessions, flashcards were shuffled after all the words in a given
sequence were presented and before presenting a new sequence with an additional
unknown word. Modeling was used at the first presentation of an unknown word
followed by a prompt delay at subsequent presentations. Unknown words were only
added when an entire sequence of unknown words was identified correctly using a
prompt delay. An assessment of words read correct per minute (WRCM) was conducted
on the day following an instructional session. All the words used during the previous
day’s session were presented individually on flashcards with a prompt to say the word
aloud. Maintenance was measured two weeks after the final instructional session and
conducted for each phase of the study. Therefore, four maintenance assessments were
conducted; two for SIR and two for IR. All of the words taught in one instructional phase
were assessed in a single maintenance assessment. Although instructional time was held
constant during the study, the number of new words presented varied by intervention
condition. Within each IR phase of five instructional sessions, a maximum of 15 new
words were presented. In each SIR phase of five instructional sessions, a range of 17 to
42 words was presented.
Results indicated that IR and SIR were both effective at increasing the number of
words read correctly. However, Kupzyk et al. (2011) found that students read more words
correctly during the SIR condition than the IR condition. While students maintained high
numbers of words in both conditions, more words were retained using SIR than IR with
all students. Further, during maintenance, the SIR data points exceeded the highest IR
data points for every student. Kupzyk et al. (2011) indicated that opportunities to respond
were a critical component in the success of flashcard interventions. SIR maximized
opportunities to respond by only using new words which decreased the amount of time
spent on already mastered material. Overall, small procedural changes in IR appeared to
improve its overall effectiveness (Kupzyk et al., 2011). While Kupzyk et al. (2011)
examined procedural changes to IR, other researchers have expanded the use of IR to
emergent reading skills (DuBois et al., 2014; Peterson et al., 2014; Volpe et al., 2011).
Incremental Rehearsal for Letter-Sound Correspondence
Volpe et al. (2011) found IR to be an effective method for teaching letter sounds
to four kindergarten students who were not responding to class-wide instruction or the
K-PALS program (Fuchs et al., 2001). IR was implemented three times per week using the
Tutoring Buddy software program (Volpe, 2009) to increase letter sound expression
(LSE) and letter sound fluency (LSF). Prior to each intervention session, an assessment
was conducted via the Tutoring Buddy program to identify known and unknown letter
sounds for the subsequent intervention session. Individual students were shown 24 of 26
lowercase letters on a computer screen and asked to provide the sound (x and q were
excluded from the study). A correct sound response was recorded if provided within 3s of
letter presentation. Responses were recorded as correct or incorrect via the computer
program.
During IR, a ratio of 66% known letter sounds to 33% unknown letter sounds was
maintained. Therefore, four known and two unknown sounds were used in each
intervention session. The same letter sounds could be taught in multiple sessions, although
continuous sounds were targeted first for intervention and letters with similar written
characteristics were practiced during different sessions. The initial presentation of each
unknown sound was modeled with, “This sound makes the <letter sound>. What sound
does it make?” Once the student provided a correct independent response, the first known
sound was presented on the computer screen and the sequence continued. All IR
procedures followed MacQuarie et al. (2002).
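In the IR sequence, each unknown sound is rehearsed with an expanding run of known sounds so that most presentations are already-known material. A minimal sketch of this interleaving pattern follows; the function name and list representation are illustrative assumptions based on descriptions of MacQuarie et al. (2002), not code from any of the studies reviewed.

```python
def ir_presentation_order(unknown, knowns):
    """Return the IR presentation order for one unknown item:
    U, K1; U, K1, K2; ...; U, K1..Kn. The growing runs of known items
    keep the proportion of known material high across presentations."""
    order = []
    for i in range(1, len(knowns) + 1):
        order.append(unknown)     # re-present the unknown sound
        order.extend(knowns[:i])  # followed by a growing run of knowns
    return order
```

With the four known sounds used per session by Volpe et al. (2011), this pattern yields 14 presentations per unknown sound, 10 of them known material.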
LSE was measured prior to each intervention session using the Tutoring Buddy
program. LSF was measured weekly with AIMSweb Letter Sound Fluency progress-monitoring
probes (NCS Pearson, Inc., 2005). Results demonstrated improvement in both
LSE and LSF following the implementation of IR. Individual variation was present in
LSE growth, but there was a clear positive trend identified for each student (Volpe et al.,
2011). Letter sound expression data were only collected as part of the IR condition, so no
baseline data were available for comparison. Comparable growth was identified in LSF
across students. While three students demonstrated strong growth rates, these rates were
not sufficient to meet spring benchmarks (Volpe et al., 2011). Nonetheless, all four
students made gains in LSE and LSF as a result of IR and demonstrated that IR could
directly improve LSE and LSF. Yet, Volpe et al. (2011) did not examine the
generalization of the skills gained through IR and only demonstrated immediate levels of
growth. No maintenance data were collected.
DuBois et al. (2014) expanded upon Volpe et al. (2011) and provided additional
support for the use of IR with letter-sound correspondence through a demonstration of
skill generalization. IR was implemented using the Tutoring Buddy computer program
with a sample of 30 kindergarten and first-grade students. Half of the sample was placed
in the treatment group and half of the sample was placed in the wait group.
Students in the treatment group began IR immediately and received individual
intervention for eight days across two weeks. As described in Volpe et al. (2011), the
Tutoring Buddy program was used to identify known and unknown letter sounds for the
subsequent intervention session. Students were individually shown 24 of 26 lowercase
letters on a computer screen and asked to provide the sound (x and q were excluded). A
correct sound response was recorded if provided within 3s of letter presentation. Four
known letter sounds and one unknown letter sound were selected for IR, thus using a
ratio of 80% known letter sounds to 20% unknown letter sounds (DuBois et al., 2014).
DuBois et al. (2014) measured three dependent variables: LSE, LSF, and
nonsense word fluency (NWF). LSE was measured using the Tutoring Buddy program
and the individual presentation of 24 lowercase letters to determine known and unknown
letters. LSF was measured with progress-monitoring probes from AIMSweb and NWF
was measured with progress-monitoring probes from DIBELS (Good & Kaminski, 2002).
LSF and NWF data were collected four times: prior to intervention implementation, at the
end of the first intervention week, at the end of the second intervention week, and one
week after intervention termination.
Overall, results indicated that growth was made in LSE, LSF, and NWF for the
treatment group. On average, students gained 6.6 letter sounds as a result of the
intervention and maintained this level of growth for up to one week after intervention
termination. The improvement in NWF presents evidence of generalization from
letter-sound correspondence to decoding skills (DuBois et al., 2014). Results of DuBois et al.
(2014) are promising as it was the first IR study to address letter-sound correspondence
and show skill maintenance and generalization. However, both DuBois et al. (2014) and
Volpe et al. (2011) conducted IR using a computer program that may not be readily
available. As such, additional support for traditional IR with letter-sound correspondence
is warranted.
Peterson and colleagues (2014) found that IR conducted with traditional
flashcards was an effective way to teach letter-sound correspondence to three kindergarten
English Learners (EL) at risk of not meeting letter sound benchmarks. Baseline
assessments of LSE and LSF were conducted using 1-min, district-developed fluency
probes. Each probe contained 70 lowercase letters and students were asked to say the
sound of each letter. After 1 min, students were prompted to complete the rest of the
probe and provide the letter sounds. Results from the complete probe were used to
identify known and unknown sounds. Known sounds were those identified correctly
during 100% of presentations across the probe (Peterson et al., 2014). Unknown sounds
were those not correctly expressed 100% of the time or those not stated correctly within
3s of presentation. For example, for letters that were presented more than once, any
incorrect sound would define that letter as an unknown. Known and unknown sounds
identified on the first administration of this probe were used to create sets of flashcards
for each participant. Individual unknown sounds were placed on flashcards and divided
into three sets of four sounds and randomly assigned as Set A, Set B, and Set C. The
study was designed as a multiple baseline across letter sets. If a participant had more than
12 unknown sounds, the remainder of unknown sounds was excluded randomly (Peterson
et al., 2014).
Intervention occurred three times per week in a one-to-one setting for
approximately nine weeks. Prior to each intervention session, letter sound expression was
evaluated using the current flashcard set. IR was then implemented with six known
sounds and two unknown sounds for a ratio of 75% known letter sounds to 25% unknown
letter sounds (Peterson et al., 2014). Intervention sessions were scripted to ensure
consistency, but followed the procedures of MacQuarie et al. (2002) and included a
model of the letter name and letter sound at the first presentation of each unknown letter
and a subsequent use of the letter sound in a word (e.g., “/c/ is the first sound in the word
cat”). If the student made fewer than three errors, a second IR sequence occurred with a
new unknown sound. Intervention continued with each flashcard set until mastery was
achieved (Peterson et al., 2014). Mastery was defined as correctly labeling all of the
sounds in one set at the beginning of a session without further errors. Once all sounds in a
given set were mastered, they were considered known and used in subsequent IR sessions
when teaching unknown letter sounds from the additional sets (Peterson et al., 2014).
LSF was assessed at baseline and once per week throughout the intervention.
Peterson et al. (2014) found an immediate improvement in LSE over baseline
upon implementation of IR. Further, mastery was obtained on all three sets of unknown
letter sounds for all students (Peterson et al., 2014). Mastery of each set was maintained
for a minimum of three assessment sessions. Slope was calculated for each student’s LSF
and used to determine growth in correct letter sounds per minute. Results indicated that
all students met the district’s LSF benchmark after IR implementation (Peterson et al.,
2014). Overall, these results supported the use of IR as an intervention to improve
letter-sound correspondence. Peterson et al. (2014) further demonstrated that IR could be
conducted using traditional flashcards and obtain results similar to those demonstrated by
a computer program (DuBois et al., 2014; Volpe et al., 2011).
In summary, research has supported IR as an effective reading intervention. The
effectiveness of IR has been demonstrated with students with and without learning
disabilities throughout elementary school (Burns & Boice, 2009; MacQuarie et al., 2002).
Recent evidence has suggested that IR can be expanded successfully to emergent literacy
skills, while small procedural changes resulting in SIR can increase effectiveness (Kupzyk
et al., 2011). In the last five years, three studies have demonstrated improvement in
letter-sound correspondence with IR, highlighting the usefulness and effectiveness of IR as an
emergent literacy intervention (DuBois et al., 2014; Peterson et al., 2014; Volpe et al.,
2011). This study sought to provide additional support for the use of IR and SIR in
teaching letter-sound correspondence.
Brief Experimental Analysis
Using a BEA, outcomes from multiple interventions can be simultaneously
compared, allowing the single most effective intervention to be selected for an individual
student. Experimental control is needed to assert that one particular treatment is more
effective than the others. Within a BEA, control is obtained through immediate and
replicable changes in behavior following a change in conditions (Martens & Gertz, 2009).
The rapid manner of assessment with strong experimental control allows for numerous
interventions to be examined in a relatively short period of time. In order for the BEA to
yield meaningful results, assessment materials must meet three criteria as defined by Daly
et al. (1997): (a) assessment materials used across each intervention need to be of equal
difficulty, (b) BEA measures need to be sufficiently different from each other so that
interventions do not affect performance on assessments in subsequent conditions, and (c)
assessment materials need to have high content overlap with instructional materials to be
sensitive enough to the effects of brief intervention trials. Success in using BEA for
intervention identification has been demonstrated across a variety of academic skills, but
predominately with oral reading fluency (ORF; Daly et al., 1997; Daly et al., 1999;
Eckert et al., 2000; Eckert et al., 2002; Noell et al., 2001).
Daly et al. (1999) used a hierarchical approach to identify effective ORF
interventions for general education students. Using a hierarchical approach, each of the
interventions evaluated required more adult support than the previous intervention. As a
result, the most effective intervention that required the least amount of adult support
could be identified. Skill-based and performance-based intervention conditions were
evaluated throughout the BEA. Skill-based intervention conditions included: repeated
readings (RR), listening passage preview (LPP), sequential modification (SM), and
presenting easier material (EM). The performance-based intervention condition was a
reward for rapid reading (RE).
Effectiveness of the ORF interventions was measured by words read correctly per
minute (WRCM) in instructional and high content overlap (HCO) passages. Although
entire passages were read during assessment, only the number of correct words read in
the first minute was used to evaluate the interventions. The performance-based
intervention (RE) was implemented first in each BEA. Students selected three rewards
and were given specific WRCM criteria to meet in order to earn selected rewards. If the
RE improved WRCM, the BEA was discontinued and RE was deemed effective. If
performance did not improve, additional interventions were evaluated in the following
order: RR, RR with SM, LPP, LPP with RR/SM, and LPP/RR/EM. When a treatment
appeared to make a meaningful increase over baseline and previous intervention
conditions, a mini-reversal was conducted to determine if experimental control was
present. If experimental control was demonstrated, the intervention was deemed effective
and the BEA was terminated.
For all four participants, experimental control was achieved and ORF improved in
both passage types in at least one intervention condition. These findings highlight the
ability of a BEA to identify ORF interventions for individual students (Daly et al., 1999).
Further, the BEA successfully compared the effects of individual interventions to the
effects of combining instructional components. Yet, Daly et al. (1999) did not provide a
specific definition of a meaningful increase over baseline, making it difficult to
ascertain the level of growth necessary to identify an intervention as effective.
Several other studies using BEA continued to evaluate how the combination of
instructional components improved ORF (Eckert et al., 2000; Eckert et al., 2002; Noell et
al., 2001). Specific skill-based and performance-based interventions were grouped
together and subsequently assessed using a BEA. For example, Eckert et al. (2000)
implemented seven intervention conditions including one skill-based, three
performance-based, and three combined skill-based and performance-based conditions within a BEA that was
conducted with four elementary male students.
During the skill-based intervention condition, the passage was read to the student
once before the student independently read the passage additional times. The
performance-based intervention conditions included: goal setting plus performance
feedback, contingent reinforcement, or goal setting plus performance feedback and
contingent reinforcement (Eckert et al., 2000). During goal setting plus performance
feedback conditions, students created fluency and error goals prior to reading the passage.
After reading the passage, students graphed their WRCM and number of errors. During
contingent reinforcement, the student selected two items to be used as reinforcers where
each item corresponded to a level of performance. For example, the first item selected
was considered the most highly preferred item and access to this item occurred if the
student read the story in less than 3 min with fewer than three errors (Eckert et al., 2000).
Goal setting plus performance feedback and contingent reinforcement required the
student to set individual goals, graph progress, and select a reward to earn the reward
based on passage fluency and errors (Eckert et al., 2000).
The three combined skill-based and performance-based interventions included:
skill-based intervention with goal setting and performance feedback, skill-based
intervention with contingent reinforcement, and skill-based intervention combined with
goal setting, performance feedback, and contingent reinforcement (Eckert et al., 2000). In
the com
37
bined conditions, goal setting occurred prior to the passage preview and repeated
readings. Reinforcement selection occurred after the passage preview, but before the
repeated readings. Each of the seven intervention conditions was implemented three
times per student across 10 weeks with no more than two intervention conditions
implemented in a single day. ORF was measured by WRCM using passages from the
Silver, Burdett, and Ginn reading series (Silver, Burdett, & Ginn, 1991).
Each BEA conducted displayed ORF improvement over baseline following the
implementation of the skill-based intervention condition. However, three of the four
participants made the most progress when a skill-based intervention was paired with at
least one performance-based intervention (Eckert et al., 2000). These results highlight the
utility of a BEA as individual and combination intervention conditions were
simultaneously evaluated and in doing so, the optimal intervention was selected for each
individual student. Unlike Daly et al. (1999), however, no additional passages (i.e., HCO
passages) were used to confirm results or assess generalization of the interventions'
effects.
Jones and Wickstrom (2002), however, did analyze generalization of intervention
effects. Using a BEA, the effects of performance-based and skill-based interventions on
fluency and sight word acquisition from instructional passages were evaluated across five
students from first to third grade. The experimental conditions assessed within the BEA
included: incentive, repeated reading, phrase drill, and easier material (Jones &
Wickstrom, 2002). During the incentive condition, participants were required to obtain a 30%
improvement over baseline. In the repeated reading condition, participants read the
passage three times; however, error correction did not occur. According to Jones and
Wickstrom (2002), the lack of error correction was an effort to determine the impact of
repeated opportunities to respond. During the phrase drill condition, students were first
given a preview of the passage by the examiner. Next, the students read the passage aloud
and errors were marked by the examiner. The first 15 word errors were corrected by the
examiner and the student was then required to read each phrase containing a word error
three times (Jones & Wickstrom, 2002). During the easier material condition, the student
read from material that was one grade lower than the instructional passage being used in
the rest of the BEA.
The BEA was conducted in a hierarchical manner, increasing the amount of
support and resources needed for each subsequent intervention. Each condition was tested
a single time using instructional passages and the most effective intervention was then
selected for the extended analysis where generalization was analyzed. An effective
intervention was defined as resulting in a 20% increase in correct words read over
baseline. Across all five students the BEA was able to identify an effective intervention,
yet the magnitude of the effects varied considerably (Jones & Wickstrom, 2002). Three
participants improved the most with repeated readings and two improved the most with
phrase drill.
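The percentage-over-baseline decision rule used in this literature reduces to a simple comparison. The following sketch is a hypothetical formalization, not code from any of the studies; the function name and parameters are illustrative.

```python
def bea_effective(baseline_wrcm, intervention_wrcm, threshold=0.20):
    """Flag an intervention as effective when words read correctly per
    minute exceed baseline by at least the threshold proportion (20% in
    Jones & Wickstrom, 2002, and Noell et al., 2001)."""
    return intervention_wrcm >= baseline_wrcm * (1 + threshold)
```

For example, a student reading 40 WRCM at baseline would need at least 48 WRCM under an intervention condition for that condition to be flagged as effective.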
During the extended analysis, the BEA-selected intervention was alternated with a
baseline condition and intervention effects were measured using instructional and
generalization passages. Generalization passages contained 80% of the words from the
previously used instructional passages. Results of the extended analysis indicated that all
students except one made a 20% increase over baseline in instructional passages. More
importantly, effects of the intervention were evident in the generalization passages for the
same four students as fluency remained above baseline and stable (Jones & Wickstrom,
2002). The generalization effects further illustrated the ability of a BEA to identify an
intervention that will build reading skills across reading passages.
Subsequent research has provided evidence supporting the use of BEA while also
addressing the limitations of previous studies (Daly et al., 1999; Eckert et al., 2000). With
four second- and third-grade students, Noell et al. (2001) compared the results of a BEA
directed at improving letter sound, sight word, and passage fluency to the results of an
extended analysis directed at improving the same skills. One skill-based intervention
condition, one performance-based intervention condition, and one skill-based plus
performance-based intervention were evaluated as part of the BEA and extended analysis.
The performance-based intervention condition consisted of access to a reward contingent
on a 10% improvement over the previous session’s median WRCM. The skill-based
intervention consisted of a passage preview, repeated readings, and performance
feedback. The skill-based plus performance-based intervention condition included a
preview of the passage, repeated readings, performance feedback, and access to a reward
contingent on reading improvement.
The BEA was conducted using two iterations of the following sequence of
conditions: baseline, performance-based intervention, and skill-based intervention. The
contingent reward was only combined with the skill-based intervention when the reward
improved performance in isolation (Noell et al., 2001). A single 2-min letter sound
fluency (LSF) probe and a single 2-min sight word fluency probe were used to measure
intervention effectiveness on letter sounds and sight words. Three ORF probes were
administered and the median score used to assess the effectiveness of each intervention
on passage fluency. Intervention effectiveness was defined as an increase of at least 20%
over baseline during both intervention conditions (Noell et al., 2001). During the BEA,
specific passage fluency or ORF interventions were identified for all four students, yet
the identification of letter sound and sight word interventions did not occur for all
students.
Following the BEA, an extended analysis was conducted. The extended analysis
conditions were conducted in the same order as the BEA and used the same procedures,
but additional sessions of each intervention condition were completed. Results of the
extended analysis indicated that in 85% of the instances in which the BEA classified an
intervention as effective, the intervention was also classified as effective in the extended
analysis (Noell et al., 2001). In contrast, of the interventions that were deemed
ineffective during the BEA, 80% were also considered ineffective in the extended
analysis. Overall, intervention classification matched across the BEA and extended
analysis in 83% of the cases, demonstrating the strength of the BEA in selecting the most
effective intervention (Noell et al., 2001). Interestingly, the selected interventions varied
across individuals and academic skills, supporting the flexibility of the BEA and its ability to
guide individual decisions.
As numerous studies have illustrated, a BEA can identify the most effective
intervention successfully from a pool of potentially useful interventions (Daly et al.,
1999; Eckert et al., 2000; Eckert et al., 2002; Noell et al., 2001). Thus, it is necessary to
examine whether the selected intervention remains effective when implemented
long-term. McComas et al. (2009) evaluated performance-based and skill-based ORF
interventions within a BEA for one third-grade student and two second-grade students.
The most effective intervention was subsequently implemented during an “extended
intervention” period (McComas et al., 2009). Using instructional level passages from
DIBELS, each BEA was conducted within three days with no more than four sessions
conducted per day. A different intervention or intervention package was selected for each
student.
The identified intervention was then implemented three times per week for 30 min
a day for a minimum of three weeks. During the extended intervention, high-word
overlap (HWO) passages and GOM passages were used to measure fluency (McComas et
al., 2009). All three students demonstrated ORF growth as a result of the BEA-selected
intervention and each student reached grade-level ORF on either HWO passages, GOM
passages, or both passage types (McComas et al., 2009). The results of McComas et al.
(2009) supported the practical use of a BEA as the short and efficient assessment
illustrated differentiation in intervention responding, resulting in the ongoing
implementation of the most effective intervention.
Schreder et al. (2012) also examined the long-term effects of a BEA-selected
ORF intervention in two second-grade students. The BEA was conducted in a number of
phases with a goal of identifying an effective ORF intervention package that parents
could implement at home during the summer. The first phase determined whether RR,
LPP, or RR/LPP was most effective in increasing words correct per minute as measured
by grade-level DIBELS passages (Good, Kaminski, & Dill, 2002). During the BEA, each
intervention condition was conducted three times with the median result graphed. Results
of the BEA were differentiated so RR was selected for one participant and LPP was
selected for the other participant (Schreder et al., 2012). Once an ORF intervention was
identified, the second phase of the BEA was conducted to determine the number of times
a passage should be presented to each student. For example, Schreder et al. (2012)
evaluated whether two, three, four, or five repeated reads led to the highest number of words
correct per minute for the student using RR. Once an effective number of iterations was
identified, parents were trained and intervention began in the extended analysis.
The extended analysis started with three baseline probes for each student. During
the extended analysis all progress was measured weekly using grade-level AIMSweb
passages (Magit & Shinn, 2002). After approximately 5 weeks, overall progress was
evaluated and results suggested that only minimal gains had been made by each student
(Schreder et al., 2012). As a result, further assessment was completed to determine if
adding a performance-based intervention to the current treatment package would increase
ORF with one of the participants. An additional BEA indicated that a reward increased
performance and as a result, the reward component was added to the treatment package
for the remainder of the extended analysis. Schreder et al. (2012) did not assess
performance-based interventions with the other participant, but increased intervention to
twice a day.
Following the treatment changes, ORF performance increased for both
participants and maintained an upward trend (Schreder et al., 2012). These high levels of
performance remained stable up to 6 weeks after intervention was discontinued. The
results of Schreder et al. (2012) supported McComas et al. (2009) and the continued use
of BEA to select an intervention for continued implementation. In sum, the evidence
supported the use of BEA as a means to select ORF interventions for students. However,
the evidence supporting the use of BEA with early reading skills remains sparse. While
evidence is promising, only three studies highlighted the use of BEA in selecting early
literacy interventions.
Brief Experimental Analysis with Early Literacy Skills
Daly, Chafouleas, Persampieri, Bonfiglio, and LeFleur (2004) and Daly, Johnson,
and LeClair (2009) further highlighted the use of BEA with early reading skills. In Daly
et al., (2004), an experimental analysis compared the effect of a phoneme blending
intervention to a control condition on the performance of two first grade students. During
the phoneme blending intervention, students were taught to read nonsense words by
sounding out each phoneme and then blending the sounds together. In the control
condition, students were taught nonsense words as a sight word or single response unit.
Following each condition, generalization to real words was assessed. Results displayed
differentiation in responding across the two conditions, with increased generalization in
the phoneme blending intervention condition. More importantly, Daly et al. (2004)
demonstrated the effective use of an experimental analysis with blending and segmenting
skills.
Daly, Johnson, and LeClair (2009) similarly conducted a study in which an
experimental analysis was completed to assess progress in blending and segmenting with
four first-grade students receiving the standard class curriculum, which included
large-group phonics instruction and independent seatwork. Using an experimental analysis, one
child was identified as displaying limited progress or growth from the core reading
curriculum (Daly et al., 2009). Thus, supplemental instruction was provided and assessed
within the experimental analysis. Results indicated that blending and segmenting
improved as a result of the supplemental instruction and levels of growth maintained.
Daly, Johnson, and LeClair (2009) further validated the use of BEA with early reading
skills.
Petursdottir et al. (2009) conducted a study evaluating letter sound fluency
interventions using a BEA with kindergarten students who were not responding to the
K-PALS (Fuchs et al., 2001) program. Prior to this study, Noell et al. (2001) were the only
researchers to evaluate LSF as a dependent measure within a BEA. Petursdottir et al.
(2009), however, were the first to examine LSF as the sole dependent variable.
Students at risk for reading difficulties were identified from 46 K-PALS
classrooms as scoring in the lowest 20% of the Rapid Letter Sound pretest (Fuchs et al.,
2001), an assessment administered class-wide as part of another study. Three CBM
measures were then used to select students for the BEA from the pool of eligible at-risk
students: letter sound fluency (LSF) and word identification fluency (WIF; both as part of
K-PALS; Fuchs, Berends, McMaster, Sáenz, & Yen, 2006) and DIBELS Nonsense Word
Fluency (NWF; Good & Kaminski, 2002). Using a BEA, three intervention conditions
were assessed: goal setting with incentive, modeling, and modeling and goal setting plus
incentive. The BEA was conducted using a hierarchical approach, so that interventions
requiring the least amount of adult support were assessed prior to interventions needing
higher levels of adult support.
LSF probes and specific subskill measures were used to assess progress
throughout the BEA. Petursdottir et al. (2009) argued that specific subskill measures were
necessary because LSF probes do not differ enough from one another to measure
small amounts of letter-sound correspondence growth. Thus, two specific subskill
measures were developed and also used as assessment measures: letter-sound subskill
measure (LSM) and decoding subskill measure (DSM). Each LSM was individualized
and contained 56 letters, maintaining a ratio of two known to two unknown sounds
identified from the baseline LSF probe. The DSM contained 32 decodable words made
from the same letters as the corresponding LSM. Each of the two known sounds occurred
eight times and each of the unknown sounds occurred 20 times. Alternate versions of each
subskill measure were used.
During the BEA, an intervention was identified as effective when there was at
least a 20% increase in performance over LSM or LSF baseline measures. Petursdottir et
al. (2009) found that the two subskill measures were more sensitive to growth than the
LSF probes during the BEA. When a potential intervention was identified, a mini-reversal
occurred to determine if functional control could be obtained. If the mini-reversal
demonstrated functional control, with lower performance in baseline and increased
performance in intervention, the BEA was terminated. An effective intervention was
identified for each student and implemented subsequently in conjunction with K-PALS
(Fuchs et al., 2001) in the classroom for 5 to 10 weeks or until the LSF goal was reached.
The CBM measures were used to assess generalization of skills during the extended
intervention period. During the extended intervention, all participants but one showed
immediate growth in LSF after implementation of the intervention. However, growth on all
other CBMs was evident across all participants.
The results of Petursdottir et al. (2009) supported the use of BEA with early
reading skills. Individual LSF interventions were identified for each participant and
when the intervention was implemented over an extended period, LSF growth was
evident. Further, the BEA-identified intervention resulted in generalization of letter sound
skills as demonstrated by an increase in CBMs. While LSF probes were not as sensitive
to letter sound skill growth as the subskill measures, Petursdottir et al. (2009)
acknowledged that additional research should be conducted to determine if subskill
measures are required when conducting a BEA of early reading skills or if LSF probes
are sufficient.
The Significance of the Present Study
Success in letter-sound correspondence is necessary to progress along the
developmental continuum of reading toward decoding fluency and accuracy. Therefore,
deficits in letter-sound correspondence are typically met with individual intervention.
Students respond differentially to individual interventions, so it is important to select the
most effective intervention for any one student. A BEA would enable multiple letter-sound correspondence interventions to be evaluated simultaneously to select the single
most effective intervention for long-term implementation. Yet minimal research has
examined the use of BEA with early literacy skills, particularly letter-sound
correspondence. This study aimed to address this gap in the literature by examining the
effectiveness of BEA in identifying letter-sound correspondence interventions for
individual students.
CHAPTER 3
METHOD
The study was conducted in two phases. During phase one, a BEA was conducted
to evaluate skill-based and performance-based interventions directed at improving
alphabetic knowledge, specifically letter-sound correspondence. During phase two, or the
extended treatment phase, both skill-based interventions were administered individually
to each student to compare intervention effectiveness. Table 2 provides an overview of
the study phases and order of the corresponding assessments or interventions conducted
during each phase.
Table 2
Phases of the Study

Phase One: Assessment
  Screening
  BEA: Baseline, Reward, IR, SIR

Phase Two: Intervention
  Extended Baseline
  Extended Analysis: SIR, IR
Participants
Six early elementary students, three female and three male, were referred by
classroom teachers as needing assistance with letter-sound correspondence skills.
Students were referred from four classrooms across two elementary schools within a
Midwestern school district serving approximately 16,000 students. Students ranged in age
from 5 years, 6 months to 8 years, 9 months. Three students were white, two students
were African-American, and one student was bi-racial. Two of the six students had an
IEP and were receiving additional academic support in a special education classroom.
Three students met selection criteria and completed the study. Selection criteria included:
native English speakers, knowledge of 4 letter sounds, and no special education support
in letter-sound correspondence. See Table 3 for individual student demographics.
Table 3
Student Demographics

Name  | Age/Grade                        | Sex    | Race             | IEP status | Known Sounds | Included in Study | Other Information
Izzy  | 5 years, 6 months; Kindergarten  | Female | white            | No IEP     | 12           | Yes               | Frequently absent from school; had changed schools twice in prior year
Jill  | 5 years, 11 months; Kindergarten | Female | African-American | No IEP     | 12           | Yes               | Family history of reading concerns
Sally | 5 years, 9 months; Kindergarten  | Female | white            | No IEP     | 15           | Yes               | History of medical and hearing concerns; tubes placed in ears one week prior to participation
Ben   | 6 years, 6 months; Kindergarten  | Male   | white            | No IEP     | 1            | No                | Did not meet inclusion criteria; knew fewer than 4 letter sounds
Klay  | 7 years, 6 months; 2nd grade     | Male   | African-American | IEP        | 2            | No                | Did not meet inclusion criteria; knew fewer than 4 letter sounds
Devon | 8 years, 9 months; 2nd grade     | Male   | Bi-racial        | IEP        | 14           | No                | Approaching LSF benchmarks at beginning of study
Measures
Test of Word Reading Efficiency-Second Edition (TOWRE-2; Torgensen,
Wagner, & Rashotte, 2012). The TOWRE-2 contains two subtests that quickly measure
sight word recognition and decoding skills in early readers. Both subtests were
individually administered to each student. The Sight Word Efficiency subtest measured
the number of printed words a student could identify correctly within 45 seconds. The
Phonemic Decoding Efficiency subtest measured the number of nonwords a student could
decode correctly within 45 seconds. Each subtest of the TOWRE-2 has four alternate
forms. As a result, the TOWRE-2 could be used for screening and identification of early
reading deficits or to monitor quarterly growth on early reading skills. Research on the
TOWRE-2 has reported strong alternate-forms reliability (r > .90), test-retest reliability
for the same forms (r > .90), and test-retest reliability for different forms (r = .87;
Torgensen et al., 2012).
Test of Letter Sounds. The Formative Assessment System for Teachers (FAST)
Test of Letter Sounds (Christ et al., 2013) is a 1-min individually-administered probe that
measures letter-sound correspondence skills. Unique probes were available for
benchmark and progress monitoring. Students were presented with an 8 ½” x 11” sheet of
paper containing randomly ordered letters and were directed to say the sound of each
letter. Correct sounds were defined as sounds accurately identified within 3 sec. Incorrect
sounds were defined as sounds identified inaccurately, sounds that students stated they
did not know, or occasions in which a response was not provided within 3 sec. Each
incorrect sound was recorded, and at the end of 1 min, incorrect sounds were subtracted
from total sounds attempted to obtain a measure of Letter Sound Fluency (LSF), or the
total correct sounds produced per minute. Strong alternate-form reliability (r = .89) and
strong test-retest reliability (r = .92) have been reported for Test of Letter Sounds in
kindergarten (Christ et al., 2014). The strong technical adequacy suggests that different
forms of the Test of Letter Sounds do not result in significantly different scores and the
test is a good measure for progress monitoring (Christ et al., 2014). The Test of Letter
Sounds also identified known and unknown letter sounds and determined letter-sound
expression (LSE), the number of correctly produced letter sounds from the entire probe.
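The two scores can be sketched as simple functions. The response format below (a list of letter/accuracy pairs) is a hypothetical data layout for illustration, not part of the FAST materials.

```python
from collections import defaultdict

def letter_sound_fluency(attempted):
    """LSF: incorrect sounds subtracted from total sounds attempted in 1 min.

    `attempted` lists (letter, was_correct) pairs for the sounds the
    student reached within the 1-min limit.
    """
    incorrect = sum(1 for _, ok in attempted if not ok)
    return len(attempted) - incorrect

def letter_sound_expression(all_responses):
    """LSE: letters produced correctly across 100% of their presentations
    on the entire probe."""
    by_letter = defaultdict(list)
    for letter, ok in all_responses:
        by_letter[letter].append(ok)
    return sum(1 for results in by_letter.values() if all(results))
```

Under this sketch, a letter read correctly on only one of its two presentations would not count toward LSE, matching the known-sound definition above.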
Nonsense Word Fluency (NWF) probes. Dynamic Indicators of Basic Early
Literacy Skills Next (DIBELS Next; Good & Kaminski, 2011) is a 1-min individually-administered probe that measures alphabetic knowledge via letter-sound correspondence
and blending of common letter sounds into whole words (Good et al., 2011; Good &
Kaminski, 2011). Students were presented with an 8 ½” x 11” sheet of paper containing
randomly ordered VC and CVC nonsense words and asked to correctly blend common
letter sounds into whole words or provide individual letter sounds. Correct words or
sounds were defined as words or sounds accurately identified within 3 sec of presentation.
Incorrect words or sounds were defined as words or sounds that were mispronounced or
provided after 3 sec of hesitation from the student. Incorrect sounds were recorded, and at
the end of 1 min a score of correct letter sounds was calculated.
Moderate alternate-form reliability for NWF correct letter sounds has been
reported in kindergarten and first grade (r = .71 and .85, respectively; Good et al., 2011).
Stronger alternate-form reliability has been demonstrated in NWF whole words read in
kindergarten (r = .92) and in first grade (r = .90; Good et al., 2011). There is also
evidence of moderate to strong validity of NWF in predicting ORF outcomes throughout
first grade (r = .70 - .80; Good et al., 2011).
Procedures
Recruitment. The author recruited students through the dissemination of fliers
and teacher referral while working as a school psychologist serving four elementary
schools within a Midwest Area Education Agency (AEA). If a teacher or administrator
identified a student who might benefit from letter-sound correspondence remediation,
they were instructed to contact the school psychologist.
Five-minute teacher interviews were conducted for each identified student to
ensure inclusion criteria were met. First, teachers were asked, “Does the student speak
English as their primary language?” Second, teachers were asked, “What type of reading
support is the student currently receiving?” and “What data suggest that the student has a
letter sound deficit or is discrepant from peers in letter-sound correspondence skills?”
Last, teachers were asked, “Which letter-sounds can the student produce accurately?”
The participating school district did not use letter-sound fluency benchmarks at any
grade, so “discrepancy from peers” was defined by a student’s teacher. If an identified
student met the above criteria, the school psychologist contacted the parents to discuss
study participation and obtain consent.
Screening. Identified students were screened using the TOWRE-2 (Torgensen et
al., 2012) and the Test of Letter Sounds (Christ et al., 2013). Eligible students scored below
the 25th percentile on the Phonemic Decoding Efficiency subtest of the TOWRE-2 and
below 28 correct letter sounds, the normative winter grade-level benchmark on the Test
of Letter Sounds. Eligible students were also able to identify a minimum of 4 letter
sounds as demonstrated by receiving an LSE score of 4 or greater on the Test of Letter
Sounds.
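The screening rule amounts to a three-part conjunction, sketched below with illustrative parameter names:

```python
def eligible_for_study(pde_percentile, lsf_score, lse_score):
    """Screening criteria as described: below the 25th percentile on the
    TOWRE-2 Phonemic Decoding Efficiency subtest, below the 28-sound
    winter benchmark on the Test of Letter Sounds, and a minimum of
    4 known letter sounds (an LSE score of 4 or greater)."""
    return pde_percentile < 25 and lsf_score < 28 and lse_score >= 4
```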
Brief Experimental Analysis (BEA). Following screening, the BEA
commenced. Letter sound fluency (LSF) and letter sound expression (LSE) were
evaluated across four conditions in the following order: baseline, reward, Incremental
Rehearsal (IR), and Systematic Incremental Rehearsal (SIR). Each experimental
condition was evaluated 3 times for a total of 12 sessions. LSF was defined as the number
of letter sounds correctly identified within one minute on a unique Test of Letter Sounds
(Christ et al., 2013) probe. LSE was defined as the number of known letter sounds, or
letter sounds correctly identified across 100% of presentations on a unique Test of Letter
Sounds probe. For example, if a letter was presented two times and correctly identified
both times, it was classified as a known letter sound. Test of Letter Sounds probes were
administered at the end of each experimental session.
Because previous research indicated that LSF probes may not be sensitive enough
to identify intervention effects (Noell et al., 2001; Petursdottir et al., 2009), flashcards
were used to calculate the number of correct responses following each instructional IR
and SIR session. Each of the unknown letters taught in a single session was randomly
presented to the student on flashcards with the question “What sound does this letter
make?” The number of correct responses was recorded.
The Test of Letter Sound probes administered to Izzy included upper and lower
case letters. After the 1-min. administration to determine LSF, Izzy was prompted to read
the remainder of the letters on the probe to determine LSE. The Test of Letter Sound
probes administered to Jill and Sally included lower case letters only. After the 1-min.
administration, Jill and Sally were prompted to read up to 52 letters, to ensure that every
letter was read at least twice. The change in probe and administration was attributed to
two main factors. First, updated probes were available once Jill and Sally began, and
these probes only contained lower case letters. Second, it took Izzy a substantial amount
of time to read the entire Test of Letter Sounds probe, which greatly reduced the
practicality and efficiency of the BEA. As a result, Jill and Sally only read 52 letter
sounds.
The BEA conducted with Izzy was completed across six days within two weeks,
with two sessions occurring each day. The BEA conducted with Jill and Sally was
completed across three days within one week, with four sessions occurring each day. The
order of conditions was consistent across participants and based on the amount of
adult assistance required.
Baseline. Baseline for each student included a unique Test of Letter Sounds
(Christ et al., 2013) probe following standard administration protocol to determine LSF.
LSE was determined using the same Test of Letter Sounds probe.
Reward condition. Reward sessions were conducted like baseline sessions, except that a
reward was contingent on an increase over the LSF score obtained in the previous
baseline (Petursdottir et al., 2009). Students were told their LSF score from the previous
baseline session and instructed to beat the previous score in order to select a reward.
Rewards included a small trinket (e.g., rings, pencils), small edible items, or 5-min of free
time with the school psychologist. With each student, the first reward-only session
increased LSF, so reward continued to be contingent on an increase in performance
during each subsequent skill-based session.
Incremental rehearsal (IR) condition. All letters were printed individually in
black ink on 3” x 5” laminated flashcards in 120-point Century Gothic font (DuBois et al.,
2014; Volpe et al., 2011). Century Gothic font provides the standard written shape for all
letters and is comparable to the font used on the Test of Letter Sounds (Christ et al.,
2014). Uppercase and lowercase letters were used to create a total of 52 individual
flashcards. Each IR session was timed. Using the results from the previous baseline
measure of LSE, four known letter sounds and two unknown letter sounds were identified
for use in the first implementation of IR. Thus, a ratio of 80% known to 20% unknown
letter sounds was used (DuBois et al., 2014; Volpe et al., 2011).
The first unknown letter sound flashcard was presented by the school
psychologist with an accompanying model. “This letter makes the [letter sound] sound.”
The school psychologist then prompted the student by saying, “What sound does it
make?” (DuBois et al., 2014). This prompt was used each time an unknown letter sound
was introduced during IR. After the student provided the correct letter sound, the first
known letter sound was presented with the prompt, “What sound?” After the student
responded correctly to the prompt the sequence started over and an additional known
letter sound was added. As a result, the second sequence was: first unknown letter sound
(U1), first known letter sound (K1), and second known letter sound (K2). Each
subsequent sequence added another known letter sound until all four known letter sounds
had been presented.
The second unknown letter sound was then introduced. Prior to the introduction,
the fourth known letter sound was removed from the sequence and the first unknown
letter sound became the first known letter sound. The sequence continued with the
introduction of the second unknown letter sound with a prompt and model followed by
the presentation of the first known letter sound. Each subsequent sequence resulted in the
addition of a known letter sound until all four known letter sounds were again presented.
An entire IR session occurred as follows: U1, K1, U1, K1, K2, U1, K1, K2, K3, U1, K1,
K2, K3, K4, U2, K1, U2, K1, K2, U2, K1, K2, K3, U2, K1, K2, K3, K4. As a result, the
first unknown letter sound was presented and practiced eight times (four times as the first
unknown and four times as the first known) and the second unknown letter sound was
presented and practiced four times.
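The folding-in pattern just described can be expressed as a short routine; the function name and card labels are illustrative, not part of the published IR procedure.

```python
def ir_sequence(unknowns, knowns):
    """Build the presentation order for one IR session: each unknown is
    interspersed with an increasing number of knowns, then promoted to
    the first known position while the last known rotates out."""
    sequence = []
    current_knowns = list(knowns)
    for unknown in unknowns:
        for i in range(1, len(current_knowns) + 1):
            sequence.append(unknown)
            sequence.extend(current_knowns[:i])
        # the unknown just practiced becomes the first known;
        # the fourth known is removed before the next unknown
        current_knowns = [unknown] + current_knowns[:-1]
    return sequence

session = ir_sequence(["U1", "U2"], ["K1", "K2", "K3", "K4"])
```

With two unknowns and four knowns this yields 28 presentations, with U1 appearing eight times (four as the unknown and four as the first known) and U2 four times, matching the session described above.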
Error correction occurred throughout the IR procedures with both known and
unknown sounds. If a student incorrectly identified a letter sound, the intervention was
stopped briefly and the student was provided with a model, “Remember, this letter makes
the [letter sound].” The student was then prompted by the school psychologist to
independently say the letter sound, “What sound?” The student was required to
independently and accurately produce the letter sound before the sequence continued.
Systematic incremental rehearsal (SIR) condition. Using results from the
preceding baseline measure of LSE, 5 unknown letter sounds were selected for use during
the SIR session. As the addition of unknown letter sounds was contingent on correct
student responses, the SIR session did not exceed the length of the previous IR session or
use more than 5 unknown letter sounds (Kupzyk et al., 2011). The unknown letter sounds
differed from the unknown sounds used during the previous IR session.
The first unknown letter sound flashcard was presented to the student with the
model, “This letter makes the [letter sound] sound.” The student was then prompted to
repeat the letter sound with, “What sound does it make?” Following a correct response,
the second unknown letter sound was presented using the same modeling and prompting
procedure. This model and prompting procedure occurred each time an unknown letter
sound was introduced. The first and second unknown letter sounds were presented once
more in order using the model and prompts. A prompt-delay was then implemented
(Kupzyk et al., 2011).
After the prompt delay was implemented, the first and second unknown letter
sounds were shuffled and presented randomly to the student without a prompt. If either
letter sound was pronounced incorrectly or a response was not provided within 3 sec,
error correction and modeling occurred before the cards were again randomly presented.
When the first unknown and second unknown letter sound were both read correctly
without a prompt, the third unknown letter sound was added to the sequence (Kupzyk et
al., 2011). The third unknown letter sound was introduced with the model and prompting
procedure. All subsequent presentations of the third unknown letter sound were done
with a prompt delay. Following the introduction of the third unknown letter sound, the
first three unknown letter sounds were shuffled and presented using prompt delay and
error correction. When all three unknown letter sounds were identified accurately without
a prompt, the fourth unknown letter sound was presented using the model and prompting
procedure. Then the four unknown letter sounds were shuffled and presented randomly to
the student using a prompt delay and error correction. Once all four unknown letter
sounds were identified accurately, the fifth and final unknown letter was introduced using
the model and prompting procedure.
The sequence ended when all five unknown letter sounds were identified correctly
when the prompt delay was in place (Kupzyk et al., 2011). If the time required for SIR
implementation exceeded the time from the previous IR session, no additional unknown
letter sounds were introduced and the sequence was completed with the letter sounds that
were being taught when time elapsed.
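The response-contingent control flow of a SIR session might be sketched as follows. This is a simplified sketch: `teach`, `present`, and `time_remaining` stand in for the live interaction, and the initial in-order rehearsal of the first two unknowns is folded into the shuffle loop.

```python
import random

def sir_session(unknown_letters, teach, present, time_remaining):
    """Sketch of one SIR session: introduce each unknown with a model and
    prompt, then shuffle the current set and require every card to be read
    correctly without a prompt before the next unknown is added.

    teach(letter): model the sound and prompt the student to repeat it.
    present(letter): show the card unprompted; True if read correctly.
    time_remaining(): False once the previous IR session's length is exceeded.
    """
    taught = []
    for letter in unknown_letters[:5]:        # no more than 5 unknowns
        if not time_remaining():
            break                             # finish with the current set only
        teach(letter)
        taught.append(letter)
        mastered = False
        while not mastered:
            random.shuffle(taught)
            mastered = True
            for card in taught:
                if not present(card):         # error or >3 sec hesitation
                    teach(card)               # error correction: re-model
                    mastered = False
    return taught
```

For example, with a simulated student who always responds correctly, the session teaches all five unknown letter sounds.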
Extended Analysis. At the completion of the BEA, the TOWRE-2 (Torgensen et
al., 2012) was again administered. Following the TOWRE-2 (Torgensen et al., 2012),
treatment began and used an AB design with the following conditions: (A) baseline and
(B) treatment.
Baseline occurred until a stable trend in data was identified through visual
inspection. During each baseline session, a Test of Letter Sounds probe (Christ et al.,
2013) and an NWF probe (Good & Kaminski, 2011) were administered. NWF probes were
used to evaluate the generalization of letter-sound correspondence skills as students are
provided with whole words and can earn higher scores if whole words are read rather
than individual sounds.
Baseline probes identified consistently unknown letter sounds. The unknown
letter sounds were then separated into two treatment groups of equal difficulty, and either
IR or SIR was randomly assigned to each group. In treatment, both interventions were
administered during a single intervention session with the order of implementation
alternating across sessions. Table 4 illustrates a sample order of intervention and
subsequent assessment during the treatment phase of the extended analysis.
Table 4
Order of Intervention and Assessment Sessions during Extended Analysis Treatment

          | Intervention Order | Assessment Order
          | 1st | 2nd          | 1st                 | 2nd       | 3rd
Session 1 | SIR | IR           | Letter Sounds probe | NWF probe | Flashcards
Session 2 | IR  | SIR          | Letter Sounds probe | NWF probe | Flashcards
Session 3 | SIR | IR           | Letter Sounds probe | NWF probe | Flashcards
Interventions were implemented as previously described. Two unknown letter sounds
were taught during each IR session and 4-5 unknown letter sounds were taught during
each SIR session. In IR, once an unknown sound was correctly identified for three
consecutive sessions, it was considered known and no longer taught during the
intervention. A new unknown letter sound was then introduced. The interventions were
implemented three times per week for up to an additional five weeks (Peterson et al.,
2014; Volpe et al., 2011). Following each intervention session, LSF and LSE were
measured using the Test of Letter Sounds. A NWF (Good & Kaminski, 2011) probe was
then administered following the Test of Letter Sounds (Christ et al., 2013) to measure
generalization of letter-sound correspondence skills. The percentage and number of correct
responses were also calculated using flashcards.
Maintenance. Two weeks after the termination of the intervention, a single Test
of Letter Sounds (Christ et al., 2013) probe was administered to evaluate maintenance of
LSF and LSE skills. A single NWF (Good & Kaminski, 2011) probe was also
administered to determine generalization of letter-sound correspondence skills. A post-test
of the TOWRE-2 (Torgensen et al., 2012) was administered as a final measure of
comparison.
Interobserver agreement and procedural integrity. All sessions during the
BEA and extended treatment were videotaped to obtain interobserver agreement (IOA)
on correct and incorrect responses on Test of Letter Sounds (Christ et al., 2013), NWF
probes (Good & Kaminski, 2011), and number of correct responses. A 1-hour training
was conducted with a graduate student who collected all IOA and integrity data. The
author provided the graduate student with didactic training on the intervention conditions
and repeated practice scoring all measures. Graduate student IOA was assessed at
training and exceeded 90% on all measures.
When completing IOA, the graduate student was given a blank administrator copy
of each probe or a list of letters taught during the session and was asked to mark
incorrectly identified letter sounds via the video recording. An agreement was defined as
both the author and graduate student marking the letter sound as correct or incorrect. IOA
was assessed across 38% of all sessions and calculated by dividing the total agreements
by the total agreements plus disagreements and multiplying by 100. IOA was 98% across
all Test of Letter Sounds probes and NWF probes.
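The point-by-point agreement calculation described above works out as:

```python
def interobserver_agreement(primary, secondary):
    """IOA = agreements / (agreements + disagreements) x 100, where an
    agreement is both scorers marking a letter sound the same way.

    `primary` and `secondary` are parallel lists of correct/incorrect
    judgments for the same probe.
    """
    agreements = sum(1 for a, b in zip(primary, secondary) if a == b)
    return 100 * agreements / len(primary)
```

For example, two scorers who disagree on 1 of 50 letter sounds obtain an IOA of 98%.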
Task analyses were created for each intervention condition conducted during the
BEA and extended treatment. The graduate student assessed procedural integrity on 38% of
sessions using the corresponding task analyses (Appendix A-E). The graduate student
recorded whether or not the author conducted the steps outlined in the task analysis.
Procedural integrity was calculated by dividing the number of steps completed by the
total number of steps in the task analysis and multiplying by 100. Integrity was 99%
across all experimental conditions in the BEA and extended analysis.
Data Analysis
Research Question 1
Can a BEA conducted within a school setting effectively evaluate skill-based and
performance-based letter-sound correspondence interventions in elementary-aged
students? The BEA was conducted using a multi-element design and visual inspection
was used to evaluate the effects of intervention type on each student’s performance. An
effective BEA highlighted differentiation between intervention conditions (reward, IR,
SIR, and, if applicable, IR + reward or SIR + reward) based on the results of LSF and
LSE probes. Visual inspection of BEA data was the primary method used to determine if
differentiation between interventions was present. An intervention condition was deemed
effective if it resulted in a clear difference in performance when compared to baseline
(Daly et al., 1999). If differentiation between intervention conditions was unclear or
inconsistent, the percentage and number of correct responses were used to support the
selection of an effective intervention. Data trends were also evaluated to examine if
improvement in performance occurred over time (Eckert et al., 2000). As it can be
difficult to assert the presence or absence of a trend with only 3 data points, an
examination of the median data point for each condition occurred to see if a consistent
intervention was identified (Noell et al., 2001).
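With only three sessions per condition, the median comparison described above reduces to a few lines. The scores below are hypothetical and used only to illustrate the analysis, not a participant's data.

```python
from statistics import median

# Hypothetical LSF scores from three sessions of each BEA condition
lsf_by_condition = {
    "Baseline": [14, 13, 29],
    "Reward": [15, 19, 14],
    "IR": [13, 27, 25],
    "SIR": [17, 20, 16],
}

# Median per condition; the condition with the highest median is the
# candidate intervention when trends across 3 points are unclear
medians = {cond: median(scores) for cond, scores in lsf_by_condition.items()}
best = max(medians, key=medians.get)
```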
Research Question 2
When comparing IR to SIR, which skill-based intervention is most effective at
improving letter-sound correspondence in elementary-aged students? Visual inspection
was used to identify changes in level, trend, and variability during each intervention
condition in the extended analysis. An examination of LSE was used to determine which
set of unknown letter sounds was mastered more quickly and subsequently maintained
following intervention termination. An intervention condition was deemed as most
effective if more unknown letter sounds were taught and also maintained during that
condition.
CHAPTER 4
RESULTS
Results of the BEA are presented first, followed by the results of the extended analysis.
Use of BEAs to Identify Effective Letter-Sound Correspondence Interventions
Results of the BEA for Izzy, Sally, and Jill are presented in Figures 1, 2 and 3,
respectively. For all figures, the top panel illustrates LSF, the middle panel illustrates
LSE, and the bottom panel illustrates number of accurate flashcard responses.
Izzy (Figure 1).
Letter sound fluency (panel 1). Izzy’s LSF performance was similar across the
first probe for all 4 conditions (varying between 14 and 16 letter sounds). She
demonstrated an increasing trend in her performance when exposed to IR + reward
(13, 27, 25 letter sounds). Her performance during baseline, without any intervention,
also showed an increase across sessions (14, 13, 29 letter sounds). Her performance in the
other two conditions did not increase to this extent. Izzy’s greatest fluency in the Reward
only condition was 19 letter sounds and her greatest fluency with SIR + reward was 20
letter sounds. When looking at her median performance across the three sessions with
each condition, we see that she appeared more successful with IR + reward
(Median = 25 letter sounds) than with the other conditions (Baseline = 14 letter sounds;
Reward = 15 letter sounds; SIR + reward = 17 letter sounds).
Figure 1. Results of Izzy's BEA and flashcard presentation following skill-based sessions. (Top panel: Letter Sound Fluency, correct letter sounds in one minute, across the Baseline, Reward, IR + reward, and SIR + reward conditions over 12 sessions; middle panel: Letter Sound Expression, total correct sounds; bottom panel: correct and incorrect flashcard responses across IR and SIR sessions.)
Letter Sound Expression (panel 2). Again, Izzy’s performance was similar
across the first session of each condition (with a range of 37 to 40 letter sounds). Baseline
(39, 34, 45 letter sounds) and Reward only (37, 35, 43 letter sounds) were the two
conditions with increasing trends. Izzy’s performance in SIR + reward displayed a
decreasing trend (40, 20, 38 letter sounds) across the condition while IR + reward (40, 27,
40 letter sounds) displayed a flat trend and no improvement. When examining the
median performance across each condition, there was no clear differentiation across the
conditions (Baseline = 39 letter sounds; Reward = 37 letter sounds; IR + reward = 40
letter sounds; SIR + reward = 38 letter sounds).
Flashcard Accuracy (panel 3). On average, Izzy accurately produced 1.7 out of
2 letter sounds (or 83% of the letter sounds) that were presented on flashcards during the
IR + reward sessions. During SIR + reward sessions, she consistently identified 4 out of 5
letter sounds presented (or 80% of letter sounds). Izzy correctly identified more unknown
letter sounds in each SIR + reward session than in each IR + reward session.
Summary. Table 5 illustrates the interventions selected to improve Izzy’s
performance on LSF, LSE, and flashcard accuracy based on the analysis used. There was
intervention agreement within LSF, but not across all measures.
Table 5
Izzy’s BEA-Selected Interventions Based on Measure and Analysis

Measure                 | Analysis | Selected Intervention
Letter Sound Fluency    | Trend    | IR + reward
Letter Sound Fluency    | Median   | IR + reward
Letter Sound Expression | Trend    | No differentiation
Letter Sound Expression | Median   | No differentiation
Flashcard Accuracy      | n/a      | SIR + reward
Sally (Figure 2).
Letter sound fluency (panel 1). Sally’s initial performance on LSF suggested
differentiation among the 4 conditions; however, this trend did not hold for the duration
of the BEA. Specifically, Sally’s LSF performance without intervention (Baseline) was
the strongest (18, 27, 27 letter sounds). Her performance with SIR + reward was similar
(19, 22, 27 letter sounds). Sally’s performance during Reward showed little
improvement across the sessions (25, 22, 26 letter sounds) and the trend for IR + reward
decreased (25, 25, 22 letter sounds). Given these data, SIR + reward may be the best
intervention to affect LSF. When examining the median performance across each
condition (Baseline = 27 letter sounds; Reward = 25 letter sounds; IR + reward = 25 letter
sounds; SIR + reward = 22 letter sounds), IR + reward appears to be the strongest
intervention for teaching letter sounds.
Letter sound expression (panel 2). Sally’s performance with LSE demonstrated
even less differentiation than LSF. There was little variability among all four conditions
during the evaluation, with a range of only 4 letter sounds (14-18 letter sounds).
Flashcard accuracy (panel 3). The number of unknown letter sounds presented
to Sally during each SIR + Reward session varied due to the time it took to complete each
session. Regardless of the intervention implemented or total unknown sounds presented,
Sally correctly identified 1 letter sound during each of the first four intervention sessions
(two IR + reward sessions and two SIR + reward sessions).
[Figure 2 appears here. Panel 1 plots Letter Sound Fluency (correct letter sounds in one minute) across 12 sessions for the Baseline, Reward, IR + reward, and SIR + reward conditions; panel 2 plots Letter Sound Expression (total correct sounds) across the same sessions; panel 3 plots correct and incorrect letter sounds presented during each IR and SIR flashcard session.]
Figure 2. Results of Sally’s BEA and flashcard presentation following skill-based sessions.
In the final two intervention sessions, Sally correctly identified 2 out of 2 (or 100% of)
letter sounds presented on flashcards presented during IR + reward and 4 out of 5 (or
80% of) letter sounds presented during SIR + reward. On average, Sally produced 1.3
letter sounds (66% of the letter sounds) presented during IR + reward and 1.7 letter
sounds (46% of the letter sounds) presented during SIR + reward. Across all sessions,
more unknown letters were presented during SIR + reward than IR + reward. Therefore,
Sally identified more individual letters during the SIR + reward condition than IR +
reward condition.
Summary. Table 6 illustrates interventions selected to improve Sally’s
performance on LSF, LSE, and flashcard accuracy based on the analysis used. The trend and
median analyses did not select the same intervention for any single measure. SIR + reward was
the only intervention that appeared effective across measures.
Table 6
Sally’s BEA-Selected Intervention Based on Measure and Analysis
Measure                     Trend                  Median
Letter Sound Fluency        SIR + reward           IR + reward
Letter Sound Expression     No differentiation     No differentiation
Flashcard Accuracy          SIR + reward
Jill (Figure 3).
Letter sound fluency (panel 1). Jill’s performance on the initial 4 sessions
suggested differentiation among the conditions. However, this trend did not hold
throughout the BEA. LSF performance without intervention (Baseline) had the most
dramatic increase (15, 19, 27 letter sounds). Jill’s increasing trend was similar with IR +
reward (19, 20, 28 letter sounds) and Reward only (22, 25, 27 letter sounds). Overall,
three of the four conditions demonstrated an increase in LSF with little differentiation
during the final sessions. SIR + reward was the only experimental condition in which
performance decreased during the final session (12, 25, 23 letter sounds). Given these
data, IR + reward may be the best intervention. When examining the median scores
across conditions (Baseline = 19 letter sounds; Reward = 25 letter sounds; IR + reward =
20 letter sounds; and SIR + reward = 23 letter sounds), SIR + reward appears to be most
effective at improving LSF skills.
Letter sound expression (panel 2). There was little variability and minimal
improvement in LSE across the conditions. Without intervention (Baseline), Jill’s LSE
performance appeared to stabilize (9, 12, 12 letter sounds). A similar performance was
seen during Reward only (14, 14, 13 letter sounds) and SIR + reward (12, 13, 12 letter
sounds). The only condition with an increasing trend was IR + reward (13, 11, 17 letter
sounds). A lack of differentiation was present when using the median (Baseline = 12
letter sounds; Reward = 14 letter sounds; IR + reward = 13 letter sounds; SIR + reward =
12 letter sounds).
Flashcard accuracy (panel 3). During the first IR + reward session, Jill did not
correctly identify any of the unknown sounds that were presented. In each subsequent IR
+ reward session she identified 2 of 2 (or 100% of) the letter sounds presented on
flashcards. Jill’s performance on flashcards during SIR + reward sessions was variable.
[Figure 3 appears here. Panel 1 plots Letter Sound Fluency (correct letter sounds in one minute) across 12 sessions for the Baseline, Reward, IR + reward, and SIR + reward conditions; panel 2 plots Letter Sound Expression (total correct sounds) across the same sessions; panel 3 plots correct and incorrect letter sounds presented during each IR and SIR flashcard session.]
Figure 3. Results of Jill’s BEA and flashcard presentation following skill-based sessions.
During the first and third SIR + reward sessions, Jill identified 3 out of 5 (or 60% of) letter
sounds presented. She identified 5 out of 5 (or 100% of) letter sounds presented during
the second SIR + reward session. On average, Jill accurately produced 1.3 out of 2 letter
sounds (or 67% of the letter sounds) presented on flashcards during the IR + reward
sessions and 3.7 out of 5 letter sounds (or 73.3% of the letter sounds) presented during
the SIR + reward sessions. Jill consistently identified more unknown letter sounds
correctly when using SIR + reward than IR + reward.
Summary. Table 7 illustrates the interventions selected to improve Jill’s
performance on LSF, LSE, and flashcard accuracy based on analysis used. There was no
agreement on intervention within measure, yet there was agreement with SIR + reward
across measures.
Table 7
Jill’s BEA-Selected Intervention Based on Measure and Analysis
Measure                     Trend                  Median
Letter Sound Fluency        IR + reward            SIR + reward
Letter Sound Expression     IR + reward            No differentiation
Flashcard Accuracy          SIR + reward
Extended Analysis Evaluating SIR and IR Interventions
Results of the extended analysis for Izzy, Sally, and Jill are presented in Figures
4, 7, and 10 respectively. For all figures, the top panel illustrates the overall impact of
both interventions on LSF and LSE. The bottom panel illustrates the impact of both
interventions on correct letter sounds (CLS) and whole words read (WWR) on the NWF
probes. Measures were administered following the implementation of both interventions
to determine overall impact of the interventions.
Izzy’s broad performance on measures of intervention.
Letter sound fluency and letter sound expression. Although across baseline there
was a variable but increasing trend, an intervention was implemented to ensure ample
time for learning prior to the end of the school year. Immediately following the
implementation of intervention, there was an increase in LSF, which appeared to continue
with an overall increasing trend. Nonoverlap of all pairs (NAP), "the percentage of data that
improves across phases" (Parker, Vannest, & Davis, 2011, p. 312), was also calculated for
LSF to evaluate the overall effects of intervention (Parker & Vannest, 2009). NAP was
calculated at 0.93, indicating a strong effect size and a high probability that baseline and
treatment data did not overlap. The increased levels of LSF did not
maintain following the termination of intervention.
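As a rough illustration of how NAP can be computed from two phases of scores, the sketch below implements the all-pairs comparison described above; the data are hypothetical, not Izzy's actual scores.

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the share of (baseline, treatment) pairs
    in which the treatment point exceeds the baseline point, counting
    ties as half an improvement."""
    pairs = list(product(baseline, treatment))
    improved = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return improved / len(pairs)

# Hypothetical scores for illustration only.
print(nap([10, 12, 11, 13], [15, 14, 16]))  # complete nonoverlap -> 1.0
```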
Baseline levels of LSE were on an increasing trend and continued to increase
upon the implementation of intervention. Small gains continued during intervention with
the strongest LSE performance during the final intervention session. NAP was calculated
at 0.94 for LSE, indicating a strong effect size and improvement in LSE over baseline.
However, the increased levels of LSE also did not maintain following the termination of
intervention.
Correct letter sounds and whole words read. There was a considerable amount of
CLS variability during baseline (range of 11-32 sounds), but overall performance
appeared to be on an increasing trend. WWR was consistently low during baseline (range
of 0-1 words). After intervention was implemented there was an immediate drop in CLS.
[Figure 4 appears here. The top panel plots correctly produced sounds (LSF and LSE) across extended baseline, intervention, and maintenance sessions; the bottom panel plots correctly produced sounds/words (CLS and WWR) across the same phases.]
Figure 4. Results of Izzy’s extended baseline and treatment performance across LSF, LSE, CLS,
and WWR.
Although performance grew again, numerous baseline points of CLS surpassed
CLS performance during intervention. In fact, CLS during the final intervention session
(17 sounds) was lower than CLS during the initial intervention session (18 sounds).
However, WWR increased across the intervention and Izzy began to spend more time
decoding whole words rather than stating individual letter sounds. Across baseline and
intervention, CLS performance appeared to drop whenever WWR performance increased.
Izzy’s specific performance with SIR and IR interventions. Results of the final
extended baseline session identified 8 unknown letter sounds.
SIR. Four unknown sounds (/d/, /qu/, /e/, /B/) were taught using SIR. Figure 5
illustrates Izzy’s performance on Test of Letter Sound probes and NWF probes for each
letter that was taught using SIR. The Test of Letter Sounds performance illustrated is based
upon LSE data, or reading of the entire sound probe.
/d/ (panel 1). Izzy identified /d/ with 100% accuracy on 5 out of 11 letter sound
baseline sessions. Across all letter sound baseline sessions she accurately identified /d/ on
15 out of 22 (or 68% of) letter sound presentations. Izzy did not improve in
accurately identifying /d/ during intervention; her letter sound performance in baseline
was greater than in intervention. She identified /d/ with 100% accuracy on 1 out of 8 letter
sound intervention sessions. Across all letter sound intervention sessions Izzy identified
/d/ correctly 8 out of 17 (or 47% of) letter sound presentations. Two weeks following the
termination of intervention, Izzy identified /d/ with 50% accuracy on the letter sound
probe.
[Figure 5 appears here. Panels plot percent accuracy for /d/, /qu/, /e/, and /B/ across baseline, intervention, and maintenance sessions, as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).]
Figure 5. Izzy’s results of SIR on targeted unknown letters as measured by Test of Letter Sounds probes
(left column) and NWF probes (right column).
There were fewer opportunities to respond to /d/ on NWF probes that were used
to evaluate generalization as the letter showed up randomly and infrequently. When using
NWF probes during baseline sessions, there were only 7 sessions in which /d/ was
presented. Izzy identified /d/ correctly on 2 out of 14 (or 14% of) letter sound
presentations. On NWF intervention sessions, /d/ was presented in three sessions.
Performance increased across these sessions, and Izzy identified /d/ correctly on 2 out of
4 presentations. Maintenance data were not reported as /d/ was not reached on the
maintenance probe.
/qu/ (panel 2). Izzy identified /qu/ with 100% accuracy on 8 out of 11 letter sound
baseline sessions. Across all letter sound baseline sessions she accurately identified /qu/
on 17 out of 21 (or 81% of) letter sound presentations. During intervention, Izzy’s
performance remained high. She identified /qu/ with 100% accuracy on 7 out of 8
intervention sessions. Across all letter sound intervention sessions Izzy identified /qu/
accurately on 13 out of 15 (or 87% of) letter sound presentations. Two weeks following
the termination of intervention, Izzy identified /qu/ with 100% accuracy on a letter sound
probe. Generalization data were not collected as /qu/ is not present on NWF probes.
/e/ (panel 3). Izzy identified /e/ with 100% accuracy on 8 out of 11 letter sound
baseline sessions. Across all letter sound baseline sessions she accurately identified /e/ on
8 out of 11 (or 73% of) letter sound presentations. During intervention, Izzy’s
performance remained similar on letter sound probes. She identified /e/ with 100%
accuracy on 6 out of 8 intervention sessions. Across all intervention sessions Izzy
identified /e/ accurately on 6 out of 8 (or 75% of) letter sound presentations. Two weeks
following the termination of intervention, Izzy identified /e/ with 100% accuracy.
While Izzy demonstrated strong letter-sound correspondence skills, she appeared
unable to consistently generalize this sound as evidenced by performance on NWF
probes. During baseline, Izzy identified /e/ with 50% accuracy in 1 session, 33%
accuracy in 1 session, and 0% accuracy in the remaining 9 sessions. Across all NWF
baseline sessions, Izzy identified /e/ accurately on 2 out of 27 (or 7% of) letter sound
presentations. During intervention, she identified /e/ with 100% accuracy on the final 2
out of 8 sessions. Across all intervention sessions, Izzy identified /e/ accurately on 4 out
of 16 (or 25% of) letter sound presentations. Two weeks following the termination of
intervention, Izzy identified /e/ with 100% accuracy on a NWF probe.
/B/ (panel 4). Izzy’s performance across baseline was variable. She identified /B/
with 100% accuracy on 1 out of 11 letter sound baseline sessions. Across all letter sound
baseline sessions Izzy accurately identified /B/ on 12 out of 23 (or 52% of) letter sound
presentations. Performance improved immediately following the implementation of
intervention. Izzy identified /B/ with 100% accuracy on 7 out of 8 letter sound
interventions sessions. Across all letter sound intervention sessions, /B/ was accurately
identified on 16 out of 18 (or 89% of) letter sound presentations. Two weeks following
the termination of intervention, Izzy identified /B/ with 100% accuracy on a letter sound
probe. Generalization data were not collected as /B/ is not present on NWF probes.
IR. Four sounds were taught using IR (/g/, /b/, /Y/, /W/). Figure 6 illustrates
Izzy’s performance on Test of Letter Sound probes and NWF probes for each letter that
was taught using IR. Two sounds were taught at a time within a multiple baseline design.
[Figure 6 appears here. Panels plot percent accuracy for /g/, /b/, /Y/, and /W/ across baseline, intervention, and maintenance sessions, as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).]
Figure 6. Izzy’s results of IR on targeted unknown letters as measured by Test of Letter Sounds
probes (left column) and NWF probes (right column).
Sounds were terminated from intervention if they were identified with 100% accuracy for
three consecutive sessions. The first two sounds taught using IR were /g/ and /b/.
/g/ (panel 1). Izzy identified /g/ with 100% accuracy on 8 out of 11 letter sound
baseline sessions. Across all baseline sessions, /g/ was accurately identified on 8 out of
11 (or 73% of) letter sound presentations. Performance during intervention remained
high. Izzy identified /g/ with 100% accuracy in the first three intervention sessions. As a
result, intervention was terminated for that letter sound. Izzy continued to identify /g/
with 100% accuracy during the 6 subsequent maintenance sessions.
During NWF baseline sessions, Izzy had fewer opportunities to respond to /g/.
Overall she responded with 100% accuracy on 4 out of 6 sessions. Across all NWF
baseline probes, Izzy accurately identified /g/ on 7 out of 10 (or 70% of) letter sound
presentations. On NWF intervention probes, /g/ was only presented during two sessions
and identified with 100% accuracy each time. During maintenance /g/ was identified
correctly on 1 out of 2 (or 50%) presentations on NWF probes.
/b/ (panel 2). Izzy identified /b/ with 50% accuracy on 3 out of 11 letter sound
baseline sessions. Across all sessions, Izzy accurately identified /b/ on 3 out of 22 (or
14% of) letter sound presentations. Performance improved during intervention, although it
remained variable. During letter sound intervention, /b/ was identified with
100% accuracy during 2 of the 8 sessions. Across all letter sound intervention sessions,
/b/ was accurately identified on 7 out of 17 (or 41% of) letter sound presentations. As /b/
was not identified with 100% accuracy for three consecutive weeks, it was taught for the
duration of the intervention. Two weeks following the termination of intervention, /b/
was identified with 50% accuracy on a letter sound probe.
Using NWF probes, Izzy did not accurately identify /b/ during baseline. During
intervention, Izzy was able to identify /b/ with 100% accuracy on 2 out of 7 NWF
sessions. Across all intervention sessions, /b/ was accurately identified on 3 out of 11 (or
27% of) letter sound presentations. Maintenance data were not reported as /b/ was not
reached on the maintenance probe.
/Y/ (panel 3). Izzy identified /Y/ with 100% accuracy on 11 out of 14 letter sound
baseline sessions. Across all letter sound baseline sessions, /Y/ was accurately identified
on 25 out of 27 (or 93%) letter sound presentations. Izzy’s performance remained at
100% accuracy for the duration of the intervention as she accurately identified 10 out of
10 letter sound presentations. Izzy identified /Y/ with 100% accuracy in the first three
intervention sessions. As a result, intervention was terminated. Izzy continued to identify
/Y/ correctly for the three subsequent maintenance sessions. Generalization data were not
collected as /Y/ was not present on NWF probes.
/W/ (panel 4). Izzy identified /W/ with 100% accuracy on 7 out of 17 letter sound
baseline sessions. Across all letter sound baseline sessions, /W/ was accurately identified
on 19 out of 35 (or 54%) letter sound presentations. As /W/ was the final sound
introduced, only two intervention sessions occurred. Across those sessions, Izzy
identified /W/ with 100% accuracy on 1 of the 2 sessions. She identified /W/ accurately
on 3 out of 4 (or 75%) letter sound presentations. Two weeks following the termination
of intervention, /W/ was identified with 100% accuracy on a letter sound probe.
Generalization data were not collected as /W/ was not present on NWF probes.
Summary. Table 8 provides summary data for Izzy’s performance on Test of
Letter Sounds probes following intervention. During SIR, four letter sounds were targeted,
with the most substantial gains made with /B/. She was able to maintain high levels of
performance following the termination of intervention on three letter sounds (/qu/, /e/,
/B/). During IR intervention four letter sounds (/g/, /b/, /Y/, /W/) were targeted and Izzy’s
overall performance improved in all four sounds. Izzy made large improvements in /b/
across the intervention and she was able to demonstrate a higher level of performance
during maintenance. Similar trends were present across baseline and intervention for /W/
and evaluating the consistency of responding was not possible due to the small number of
intervention sessions. It appears that IR was also effective at improving individual
letter-sound correspondence skills.
Table 8
Izzy’s Performance on Test of Letter Sounds Probes Following Intervention
Unknown         Baseline                      Intervention                  Maintenance
Letter Sound    Presentations   Accuracy      Presentations   Accuracy      Presentations   Accuracy
SIR
/d/                  22            68%             17            47%              2            50%
/qu/                 21            81%             15            87%              2           100%
/e/                  11            73%              8            75%              1           100%
/B/                  23            52%             18            89%              2           100%
IR
/g/                  11            73%              3           100%              6           100%
/b/                  22            14%             17            41%              2            50%
/Y/                  27            93%             10           100%              2           100%
/W/                  35            54%              4            75%              2           100%

Note. Presentations = number of total sound presentations; Accuracy = percent of accuracy.
Table 9 provides summary data for NWF, or Izzy’s ability to generalize individual
letter sounds to NWF probes. Generalization data were only collected on 2 of the 4
unknown letter sounds taught using SIR. In both /d/ and /e/, Izzy made overall
improvements over baseline, although improvements were not drastic. Izzy demonstrated
growth and improvement with /e/ as she maintained the skill with a higher overall
percentage than during intervention. For IR, generalization data were only measured with
two sounds, /g/ and /b/. On probes of generalization for /b/, Izzy showed signs of
improvement, but not mastery. Growth also occurred for /g/, but did not maintain.
Table 9
Izzy’s Performance on NWF Probes Following Intervention
Unknown         Baseline                      Intervention                  Maintenance
Letter Sound    Presentations   Accuracy      Presentations   Accuracy      Presentations   Accuracy
SIR
/d/                  14            14%              4            50%             --            --
/qu/                 --            --              --            --              --            --
/e/                  27             7%             16            25%              1           100%
/B/                  --            --              --            --              --            --
IR
/g/                  10            70%              3           100%              2            50%
/b/                  10             0%             11            27%             --            --
/Y/                  --            --              --            --              --            --
/W/                  --            --              --            --              --            --

Note. Presentations = number of total sound presentations; Accuracy = percent of accuracy; -- = no data collected.
Sally’s broad performance on measures of intervention.
Letter sound fluency and letter sound expression. Sally’s baseline performance
for LSF was variable, between 16 and 29 sounds, with the slope of the trend line
appearing slightly upward. LSF improved immediately following the implementation of
intervention, yet her performance remained variable, between 29 and 44 sounds.
Although variability was present, Sally met the normative spring fluency benchmark of
42 correct letter sounds on one occasion. Ten days following intervention termination,
Sally maintained her final LSF performance. NAP was calculated for letter sound fluency
at 0.99, indicating a high probability that baseline and treatment data did not overlap,
suggesting an increase in letter sound fluency during intervention when compared to
baseline.
[Figure 7 appears here. The top panel plots correctly produced sounds (LSF and LSE) across extended baseline, intervention, and maintenance sessions; the bottom panel plots correctly produced sounds/words (CLS and WWR) across the same phases.]
Figure 7. Results of Sally’s extended baseline and treatment performance across LSF, LSE, CLS,
and WWR.
When examining LSE, there was an initial decrease in performance following the
implementation of intervention. Yet, across the overall intervention period, there was an
increasing trend in LSE. Even with an increase in performance, letter sound skills were not
maintained following the termination of the intervention. NAP was calculated for letter
sound expression at .89, indicating a fair amount of overlap between baseline and
intervention data points and suggesting that there was only a minimal increase in overall
letter sound expression between baseline and intervention.
Correct letter sounds and whole words read. There was an immediate
improvement in CLS following the implementation of intervention. However,
performance during baseline and intervention frequently overlapped and the final
performance on CLS during intervention (20 correct sounds) was less than the initial
baseline performance on CLS (21 correct letter sounds).
Sally obtained the spring benchmark of 28 CLS during both baseline and
intervention. WWR was consistently low during baseline, with only 1 whole word being
read across all sessions. WWR remained low during intervention, with 2 whole words read
across all sessions.
Sally’s specific performance with SIR and IR interventions.
Results of the final extended baseline session identified 7 unknown letter sounds.
SIR. Four unknown sounds (/qu/, /n/, /y/, /d/) were taught using SIR. Figure 8
illustrates Sally’s performance on Test of Letter Sound probes and NWF probes for each
letter that was taught using SIR. Test of Letter Sound probe performance illustrated is
based upon LSE data, or reading of 52 total letter sounds.
/qu/ (panel 1). Sally was unable to
identify /qu/ accurately during baseline. Sally identified /qu/ with 100% accuracy on 5
out of 10 intervention sessions. Across all intervention sessions she accurately identified
/qu/ on 12 out of 20 (or 60% of) letter presentations. Two weeks following the
termination of intervention, Sally maintained her final intervention performance and
identified /qu/ with 100% accuracy on a Test of Letter Sounds probe. Generalization data were
not collected as /qu/ is not present on NWF probes.
/n/ (panel 2). Sally identified /n/ with 100% accuracy on 6 out of 9 letter sound
baseline sessions. Across all baseline sessions, Sally identified /n/ on 14 out of 18 (or
78% of) letter sound presentations. During intervention Sally correctly identified /n/
100% of the time that it was presented (20 out of 20 presentations). Two weeks following
the termination of intervention, Sally’s performance maintained and she identified /n/
with 100% accuracy. A comparable level of performance was seen when using NWF
probes. Sally accurately identified /n/ on 9 out of 11 (or 82%) letter presentations in NWF
baseline. During intervention, Sally correctly identified /n/ during 100% of presentations.
Again, this level of performance maintained following intervention.
/y/ (panel 3). Sally identified /y/ with 100% accuracy during 2 of 9 letter sound
baseline sessions. Across all baseline sessions, she correctly identified 5 out of 18 (or
28%) letter presentations. She made immediate improvements in intervention and
maintained those improvements for the majority of intervention sessions. She identified
/y/ with 100% accuracy on 8 out of 9 intervention sessions. Overall, she correctly
identified /y/ on 19 out of 20 (or 95%) letter sound presentations.
[Figure 8 appears here. Panels plot percent accuracy for /qu/, /n/, /y/, and /d/ across baseline, intervention, and maintenance sessions, as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).]
Figure 8. Sally’s results of SIR on targeted unknown letters as measured by Test of Letter
Sounds probes (left column) and NWF probes (right column).
Two weeks following the termination of intervention Sally correctly identified /y/
with 100% accuracy. When using NWF probes, Sally had fewer opportunities to respond
to the presentation of /y/. During baseline, she did not identify /y/ correctly during any of
the 8 letter presentations. Performance improved during intervention, but was not
consistent. Across all NWF intervention probes Sally correctly identified /y/ on 6 out of
10 (or 60%) letter sound presentations.
/d/ (panel 4). Sally was unable to identify /d/ accurately during 9 sessions of letter
sound baseline. Performance improved significantly during SIR intervention. Sally
identified /d/ with 100% accuracy on 9 out of 10 sessions. Across all letter sound
intervention sessions, she correctly identified 19 out of 20 (or 95% of) letter presentations. Two
weeks following intervention termination, performance maintained as Sally identified /d/
with 100% accuracy. Sally was unable to identify /d/ accurately during NWF baseline
probes. Opportunities to respond were less frequent during NWF intervention, but Sally
still demonstrated an improvement. Using NWF probes, Sally accurately identified /d/ on
5 out of 6 (or 83% of) letter sound presentations. During NWF maintenance, Sally
identified /d/ with 100% accuracy.
IR. Three sounds were taught using IR (/j/, /x/, /v/). These sounds were identified
as unknown during the baseline of the extended analysis. Figure 9 illustrates Sally’s
performance on Test of Letter Sound probes and NWF probes for each letter that was
taught using IR. Two sounds were taught at a time within a multiple
baseline design. Sounds were terminated from intervention if they were
identified with 100% accuracy for three consecutive sessions. The first two sounds taught
using IR were /j/ and /x/ as the sounds were similar and they were of equal difficulty.
[Figure 9 appears here. Panels plot percent accuracy for /j/, /x/, and /v/ across baseline, intervention, and maintenance sessions, as measured by Test of Letter Sounds probes (left column) and NWF probes (right column).]
Figure 9. Sally’s results of IR on targeted unknown letters as measured by Test of Letter Sounds
probes (left column) and NWF probes (right column).
/j/ (panel 1). Sally was unable to identify /j/ during any of the 9 letter sound
baseline sessions. Three sessions of intervention were implemented before improvement
was displayed. Sally then identified /j/ with 100% accuracy. She continued this level of
performance for three consecutive weeks, so /j/ was discontinued from intervention.
Overall, during intervention Sally identified /j/ accurately on 6 out of 12 (or 50% of)
letter sound presentations. Following termination of the intervention, Sally identified /j/
with 100% accuracy for the 5 remaining maintenance sessions.
Sally was unable to identify /j/ accurately during any NWF baseline sessions. She
had fewer opportunities to respond to /j/ on NWF probes, but she did make progress
during intervention. Overall, Sally identified /j/ accurately on 3 out of 6 (or 50% of) letter
sound presentations. She did not maintain the levels of performance seen on the letter
sound probes. During the five total maintenance sessions following termination of
intervention, Sally accurately identified /j/ on 5 out of 8 (or 80% of) letter sound
presentations.
/x/ (panel 2). Sally was unable to identify /x/ during any letter sound baseline
sessions. Gradual improvement was made across the intervention. Sally identified /x/
with 100% accuracy on 2 out of 10 sessions. Across all intervention sessions, /x/ was
accurately identified on 11 out of 20 (or 55% of) letter sound presentations. Because
Sally never identified /x/ with 100% accuracy for three consecutive sessions, it was
taught for the duration of intervention. Two weeks following the termination of
intervention, Sally identified /x/ with 50% accuracy.
/v/ (panel 3). Sally demonstrated some prior knowledge of /v/ as she identified /v/
with 100% accuracy during 3 of the 15 letter sound baseline sessions. Across all baseline
sessions she correctly identified /v/ on 7 out of 30 (or 23% of) letter sound presentations. As
/v/ was the last letter sound introduced, there were only 4 weeks of intervention provided,
and Sally demonstrated progress in that time. She identified /v/ with 100% accuracy on
the final 2 letter sound intervention sessions. Across all intervention sessions, she
correctly identified 4 out of 8 (or 50% of) letter sound presentations. Sally maintained her 100%
accuracy for /v/ two weeks following the termination of intervention.
Accuracy on NWF baseline was minimal as Sally never identified /v/ with 100%
accuracy. Across all NWF baseline probes, Sally accurately identified /v/ on 5 out of 31
(or 16% of) letter sound presentations. There was little improvement displayed on
NWF probes during intervention. Sally only identified /v/ correctly on 1 out of 10 (or
10% of) letter sound presentations. However, two weeks following the termination of
intervention, Sally identified /v/ with 100% accuracy on 3 out of 3 letter sound
presentations.
Summary. Table 10 provides summary data for Sally’s performance on Test of
Letter Sounds probes following intervention. During SIR, four letter sounds were
targeted (/qu/, /y/, /n/, /d/) and Sally’s overall performance improved on all letter sounds
taught. While there was some prior knowledge evident with /y/ and /n/, improvement still
occurred because her performance became more consistent. For the letter sounds with no
demonstrated prior knowledge (/qu/, /d/), improvement was strong. Most importantly,
improvement was maintained for all letters following the termination of intervention.
During IR three letter sounds were targeted (/j/, /x/, /v/) and overall performance
improved with each letter sound. While Sally's overall intervention accuracy with /j/
appears low, it should be interpreted with caution. After 3 sessions of 0% accuracy for /j/,
Sally jumped to 100% accuracy and remained at this level for 2 additional sessions before
the intervention was terminated. She quickly acquired the letter sound /j/ and continued
to maintain it with high levels of accuracy. With all letter sounds, Sally was able to
maintain levels of performance comparable to intervention following intervention
termination.
Table 10
Sally’s Performance on Test of Letter Sounds Probes Following Intervention
Unknown          Baseline                 Intervention             Maintenance
Letter       Presentations  Accuracy  Presentations  Accuracy  Presentations  Accuracy
Sounds

SIR
/qu/              18            0%          20           60%           2          100%
/y/               18           28%          20           95%           2          100%
/n/               18           78%          20          100%           2          100%
/d/               18            0%          20           95%           2          100%

IR
/j/               18            0%          12           50%          10          100%
/x/               18            0%          20           55%           2           50%
/v/               30           23%           8           50%           2          100%
Table 11 provides summary data for NWF probes, or more specifically, Sally's
ability to generalize individual letter sounds to nonsense words. Generalization data were
collected for 3 of the 4 unknown sounds (/y/, /n/, /d/) taught during SIR. There was
improvement over NWF baseline for all letter sounds taught. The largest gains were seen
in /y/ and /d/, where no generalization was present during baseline. Generalization data
were only available for two letter sounds taught during IR. While /j/ was the only letter
sound for which generalization improved during intervention, high levels of maintenance
were observed for both letter sounds taught using IR, which may suggest instruction
outside of the intervention.
Table 11
Sally’s Performance on NWF Probes Following Intervention
Unknown          Baseline                 Intervention             Maintenance
Letter       Presentations  Accuracy  Presentations  Accuracy  Presentations  Accuracy
Sounds

SIR
/qu/              --           --           --           --           --           --
/y/                8            0%          10           60%          --           --
/n/               11           82%          13          100%           1          100%
/d/               12            0%           6           83%          --           --

IR
/j/               16            0%           6           50%           5           80%
/x/               --           --           --           --           --           --
/v/               31           16%          10           10%           3          100%

Note. Dashes indicate that data were not collected.
Jill’s broad performance on measures of intervention.
Letter sound fluency and letter sound expression. Jill’s LSF score increased
immediately following the implementation of intervention, and while progress was
variable (range of 26-46 letter sounds) there was an overall increasing trend. During four
sessions, Jill’s LSF met the normative spring benchmark of 42 correct letter sounds. NAP
was calculated for LSF at 0.98, indicating a strong effect size and a high probability that
baseline and treatment data did not overlap and suggesting that there was an
improvement over baseline. When examining LSE, there was a small increase following
intervention implementation, but overall progress appeared minimal as demonstrated by a
relatively flat trend line. Regardless, all intervention points exceeded baseline points as
indicated by a strong NAP calculation of 0.98. Maintenance data were not collected due
to student absence.
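The NAP statistic reported above is a simple pairwise comparison of baseline and treatment data points. As a brief illustrative sketch (the session scores below are hypothetical, not the study's actual data), NAP can be computed as follows:

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009).

    Every baseline point is paired with every treatment point.
    A pair scores 1.0 when the treatment point is higher, 0.5 for
    a tie, and 0.0 otherwise; NAP is the mean score over all pairs.
    """
    pairs = list(product(baseline, treatment))
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)

# Hypothetical session scores for illustration (not Jill's data):
baseline_lsf = [20, 22, 21]
treatment_lsf = [26, 30, 35, 42, 46]
print(nap(baseline_lsf, treatment_lsf))  # every treatment point exceeds baseline
```

Under this definition, a NAP of 0.98, as reported for Jill's LSF, means that roughly 98% of all baseline-treatment pairings favored the treatment data point.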
[Figure 10: two line graphs of Jill's correctly produced sounds per session across 20
sessions, spanning extended baseline and intervention phases; the top panel plots LSF and
LSE, and the bottom panel plots CLS and WWR.]

Figure 10. Results of Jill's extended baseline and treatment performance across LSF, LSE,
CLS, and WWR.
Correct letter sounds and whole words read. A pattern similar to LSE
performance was present with Jill’s CLS performance. There was a small, but immediate
increase following intervention implementation. Yet overall improvement was variable
(18-30 letter sounds) and ultimately resulted in a flat trend line. There was little change in
WWR between baseline and intervention. During baseline, Jill identified 1 whole word
correctly across 6 sessions. During intervention, WWR remained between 0 and 2, with Jill
reading whole words during only 3 of the 14 intervention sessions. Maintenance data
were not collected due to student absence.
Jill’s specific performance with SIR and IR interventions. Results of the final
extended baseline session identified 9 unknown letter sounds.
SIR. Five unknown sounds (/o/, /i/, /y/, /b/, /l/) were taught using SIR. Figure 11
illustrates Jill’s performance on Test of Letter Sound probes and NWF probes with each
letter that was taught using SIR. The Test of Letter Sounds probe performance illustrated is
based upon LSE data, or the reading of 52 total letter sounds.
/o/ (panel 1). Jill identified /o/ with 100% accuracy on 3 out of 6 letter sound
baseline sessions. Across all baseline sessions, Jill accurately identified /o/ on 9 out of 12
(or 75% of) letter sound presentations. During intervention, performance slightly
improved. In 9 of the 14 intervention sessions, Jill identified /o/ with 100% accuracy.
Across all intervention sessions, performance improved over baseline with Jill correctly
identifying /o/ on 23 out of 28 (or 82% of) letter sound presentations.
During NWF baseline Jill identified /o/ with 100% accuracy on 3 out of 6
sessions. Overall, during NWF baseline she identified /o/ correctly on 7 out of 12 (or
58% of) letter sound presentations. During NWF intervention sessions, Jill was 100%
accurate on 10 of the 14 sessions. Across all intervention sessions, she accurately
identified /o/ on 28 out of 33 (or 85% of) letter sound presentations.
/i/ (panel 2). Jill identified /i/ with 100% accuracy during 5 of the 6 letter sound baseline
sessions. Across all letter sound baseline sessions, she identified /i/ correctly on 12 out of
13 (or 92% of) letter sound presentations. During letter sound intervention, Jill identified
/i/ with 100% accuracy on 13 out of 14 intervention sessions. Jill had a comparable level
of performance during baseline and intervention on the NWF probes. It appears that Jill
had prior knowledge of /i/ and was able to generalize her letter sound correspondence
skills to individual words.
/y/ (panel 3). Jill identified /y/ with 50% accuracy in 2 of the 6 letter sound
baseline sessions. Across all letter sound baseline sessions, Jill identified /y/ accurately in
2 out of 12 (or 17% of) letter sound presentations. Jill made substantial gains during
intervention, identifying /y/ with 100% accuracy on 12 of the 14 intervention sessions.
Across all intervention sessions, Jill accurately identified /y/ on 26 out of 28 (or 93% of)
letter sound presentations.
On NWF baseline, Jill identified /y/ with 50% accuracy on 1 out of 5 baseline
sessions. Across all NWF baseline, Jill accurately identified /y/ on 1 out of 6 (or 17% of)
letter sound presentations. Improvement was evident on NWF following intervention. Jill
identified /y/ with 100% accuracy on 4 of the 6 intervention sessions. Across all NWF
intervention probes Jill correctly identified /y/ on 6 out of 8 (or 75% of) presentations.
[Figure 11: paired line graphs of percent accuracy across baseline and intervention
sessions for the letter sounds /o/, /i/, /y/, and /b/; Test of Letter Sounds probes appear in
the left column and Nonsense Word Fluency probes in the right column.]

Figure 11. Jill's results of SIR on targeted unknown letters as measured by Test of Letter
Sounds probes (left column) and NWF probes (right column).
[Figure 11 continued: corresponding panels for /l/.]

Figure 11 continued. Jill's results of SIR on targeted unknown letters as measured by Test
of Letter Sounds probes (left column) and NWF probes (right column).
/b/ (panel 4). Jill was unable to accurately identify /b/ during any letter sound
baseline session. Throughout the intervention, Jill's accuracy fluctuated between 50% and
100%. Jill was 100% accurate on 6 of the 14 sessions, 50% accurate on 7 of the 14
sessions, and 0% accurate on 1 session. Overall, she accurately identified /b/ on 19 out of
28 (or 68% of) letter sound presentations. During NWF baseline probes, Jill was unable
to identify /b/ accurately. Jill identified /b/ with 100% accuracy on 1 of the 9 NWF
intervention sessions. Across all NWF probes, Jill accurately identified /b/ on 5 out of 19
(or 26% of) letter presentations. While Jill made improvements in letter sound
correspondence, there was little generalization of her /b/ skills to other contexts.
/l/ (panel 5). Jill identified /l/ with 100% accuracy during 3 of the 6 letter sound
baseline sessions and with 50% accuracy during the remaining 3 letter sound baseline
sessions. Across all baseline sessions, Jill correctly identified /l/ on 9 out of 12 (or 75%
of) letter presentations. During letter sound intervention, Jill identified /l/ with 100%
accuracy on 12 out of 14 sessions. Across all intervention sessions, Jill accurately
identified /l/ on 25 out of 28 (or 89% of) letter presentations.
During NWF baseline, Jill identified /l/ with 100% accuracy on 4 out of 5
sessions. Across all NWF baseline sessions /l/ was accurately identified on 5 out of 7 (or
71% of) letter presentations. During NWF intervention, Jill identified /l/ with 100%
accuracy on 8 out of 10 sessions. Across all NWF intervention sessions, Jill accurately
identified /l/ on 10 of 13 (or 77% of) letter presentations.
IR. Four sounds were taught using IR (/g/, /qu/, /e/, /v/). Figure 12 illustrates
Jill’s performance on Test of Letter Sound probes and NWF probes for each letter that
was taught using IR. IR taught two sounds at a time using a multiple baseline design.
Sounds were terminated from intervention if they were identified with 100% accuracy for
three consecutive sessions. The first two sounds taught using IR were /g/ and /qu/.
/g/ (panel 1). During letter sound baseline, /g/ was identified with 100% accuracy
during 4 out of the 6 sessions. Across all letter sound baseline sessions, /g/ was accurately
identified on 10 out of 12 (or 83% of) letter presentations. During intervention,
performance immediately increased from the final baseline session of 50% accuracy to
100% accuracy. Jill’s performance remained at 100% accuracy for three consecutive
sessions, so /g/ was discontinued from intervention. During letter sound maintenance Jill
identified /g/ accurately on 21 out of 22 (or 95% of) letter presentations.
During NWF baseline, Jill identified /g/ with 100% accuracy during 3 out of 4
sessions. During the intervention, Jill accurately identified /g/ on 2 out of 2 (or
100% of) letter sound presentations on the NWF probes, which were used to measure
generalization of skills. She maintained 100% accuracy for 9 maintenance sessions.
/qu/ (panel 2). Jill was unable to identify /qu/ accurately during letter sound
baseline (0 out of 12 letter sound presentations). During intervention, Jill identified /qu/
with 100% accuracy on 5 of the 15 intervention sessions. Across all letter sound
intervention sessions, Jill identified /qu/ accurately on 17 out of 28 (or 61% of) letter
presentations. For the duration of the intervention, /qu/ was taught since it was not
identified with 100% accuracy for three consecutive sessions. Generalization data were
not collected as /qu/ is not included on NWF probes.
[Figure 12: paired line graphs of percent accuracy across baseline, intervention, and
maintenance phases for the letter sounds /g/, /qu/, /e/, and /v/; Test of Letter Sounds
probes appear in the left column and Nonsense Word Fluency probes in the right column
(no NWF panel appears for /qu/, which is not included on NWF probes).]

Figure 12. Jill's results of IR on targeted unknown letters as measured by Test of Letter
Sounds probes (left column) and NWF probes (right column).
/e/ (panel 3). Jill was unable to identify /e/ accurately during any letter sound
presentations in baseline. Her performance during intervention was on an increasing trend
and Jill identified /e/ with 100% accuracy on 4 of the 11 intervention sessions. Overall,
Jill identified /e/ accurately on 12 out of 22 (or 55% of) letter presentations during
intervention.
Jill was better able to identify /e/ during NWF baseline than letter sound baseline.
During NWF baseline, she identified /e/ with 50% accuracy during two sessions. Across all
NWF baseline sessions, Jill accurately identified 2 out of 20 (or 10% of) letter presentations.
Yet, she showed little improvement on NWF during intervention. Across NWF intervention
sessions, Jill accurately identified /e/ on 3 out of 21 (or 14% of) letter presentations.
/v/ (panel 4). As letters were introduced using a multiple baseline procedure, /v/
remained in baseline for the duration of the study. During letter sound baseline, Jill
accurately identified 31 out of 40 (or 78% of) letter presentations, higher than her
performance with either /qu/ or /e/ during intervention. On NWF probes Jill identified 31
out of 37 (or 83% of) letter sound presentations. Regardless, Jill's performance improved
across the sessions.
Summary. Table 12 provides summary data for Jill's performance on Test of
Letter Sounds probes following intervention. During SIR, five letter sounds (/o/, /i/, /y/,
/b/, /l/) were targeted and Jill’s overall performance improved on all five sounds. She
made the most substantial gains on /y/ and /b/ which had the lowest levels of baseline
performance. Maintenance data were not collected due to student absence. During IR, Jill
was taught four sounds (/g/, /qu/, /e/, /v/). She made substantial improvements on /e/ and
/qu/, letter sounds in which no prior knowledge was demonstrated.
Table 12
Jill’s Performance on Test of Letter Sounds Probes Following Intervention
Unknown          Baseline                 Intervention             Maintenance
Letter       Presentations  Accuracy  Presentations  Accuracy  Presentations  Accuracy
Sounds

SIR
/o/               12           75%          28           82%          --           --
/i/               12           92%          28           96%          --           --
/y/               12           17%          28           93%          --           --
/b/               12            0%          28           68%          --           --
/l/               12           75%          28           89%          --           --

IR
/g/               12           83%           6          100%          22           95%
/qu/              12            0%          28           61%          --           --
/e/               18            0%          22           55%          --           --
/v/               40           78%          --           --           --           --

Note. Dashes indicate that data were not collected.
Table 13 provides summary data for NWF probes, or Jill's ability to generalize individual
letter sounds to nonsense words. Improvement in generalization was demonstrated across all
5 letter sounds that were taught using SIR, although it was not substantial for /i/ and /l/,
which may be due to Jill's prior knowledge and skills. The greatest improvement in
generalization was seen in /o/, /y/, and /b/. Although SIR did not teach decoding skills,
Jill was able to generalize individual letter-sound correspondence skills learned during
intervention to these sounds in other contexts. Generalization data were only collected on
two letter sounds taught during IR: /e/ and /g/. While Jill demonstrated a fair amount of
prior knowledge of /g/, it may be best to consider the impact of IR on generalization as
displayed by /e/.
Table 13
Jill’s Performance on NWF Probes Following Intervention
Unknown          Baseline                 Intervention             Maintenance
Letter       Presentations  Accuracy  Presentations  Accuracy  Presentations  Accuracy
Sounds

SIR
/o/               12           58%          33           85%          --           --
/i/               12           92%          32           94%          --           --
/y/                6           17%           8           75%          --           --
/b/                7            0%          19           26%          --           --
/l/                7           71%          13           77%          --           --

IR
/g/                5           80%           2          100%           9          100%
/qu/              --           --           --           --           --           --
/e/               20           10%          21           14%          --           --
/v/               37           83%          --           --           --           --

Note. Dashes indicate that data were not collected.
TOWRE-2
None of the participants were able to identify any pseudowords or sight words
during the administrations of the TOWRE-2. There was no change in score from the first
administration to the third administration; each student scored zero on this measure on all
three administrations, and no growth on the TOWRE-2 was identified.
CHAPTER 5
DISCUSSION
The results of the current study are organized into four sections. General findings
from each phase of the study are presented in section one. The second section examines
the contributions of this study to current BEA and early literacy intervention literature.
Section three includes possible explanations for the results of the study and future
research. The final section evaluates the limitations of the current study and practical
implications.
Summary of General Findings
The current study was designed to evaluate the effectiveness of a BEA in
identifying letter-sound correspondence interventions for kindergarten students. The
study included three general education kindergarten students and was completed in two
phases. During the first phase, a BEA was conducted to evaluate four experimental
conditions (baseline, reward, IR + reward, and SIR + reward) in improving letter-sound
fluency and letter sound expression. The goal of the first phase was to determine if a
BEA could select a single early literacy intervention for sustained intervention. During
the extended analysis, or the second phase, the two skill-based interventions (IR and SIR)
were used to teach unknown letter sounds. The goal of the second phase was two-fold: 1)
to determine if the results of the BEA matched the results of the extended intervention,
and 2) to compare the effectiveness of the two interventions. Letter-sound correspondence is an
essential foundational skill and students must acquire this skill before gaining more
advanced reading skills (Ehri, 2005; Lonigan et al., 2008).
Use of BEA to Identify Effective Letter-Sound Correspondence Intervention
Izzy. As differentiation across conditions was minimal, data trends and median
data points were also used to evaluate intervention effectiveness, and the intervention with
the most significant upward trend was identified. As a result of data trends and an
examination of median performance, IR + reward was selected as the most effective LSF
intervention. No LSE intervention could be selected due to a lack of overall improvement
and minimal variability, even when examining trends and the median performance. The
final baseline session in both LSF and LSE measures exceeded all other sessions,
suggesting that there was not a strong functional relationship between intervention and
performance in the BEA. Flashcards were also used to further examine intervention
effectiveness but produced results different from those of the BEA. SIR +
reward appeared most effective at teaching letter sounds when evaluated via flashcards.
No consistent intervention was identified across LSF, LSE, or flashcards.
Sally. The greatest LSF performance was observed during baseline. Since
unknown letter sounds were not taught during baseline, intervention conditions were
compared, and SIR + reward was the only other condition in which consistent
improvement was made. However, when examining median performance, IR +
reward was selected as the most promising intervention. Again, no intervention was
identified for LSE due to a lack of variability and improvement across the experimental
conditions. Since there was no clear differentiation across conditions, flashcard
performance was examined. More individual letters were introduced during SIR +
reward, and on average, more sounds were accurately identified across the SIR + reward
sessions. There was no consistent intervention identified across all measures and means
of analysis. Yet, SIR + reward appeared most promising as it was identified across
interventions.
Jill. The most dramatic improvement in LSF was seen during baseline. Aside
from baseline, Jill performed best, albeit similarly, during Reward and IR + reward. While
performance-based interventions are beneficial to fluency tasks, new content is not
acquired or taught. As the goal of this phase was to improve overall letter-sound
correspondence, IR + reward was selected as unknown letter sounds would be taught in
this condition. Yet, when inspecting the median performance on LSF, SIR + reward
appeared to be the most effective intervention.
When examining LSE, the only experimental condition in which an increasing
trend was present was IR + reward, thus suggesting that IR + reward may be more
effective at increasing LSE. An examination of the median performance suggested that
there was no differentiation across conditions. As there was no single intervention
identified, flashcard performance was also evaluated. More unknown letter sounds were
presented during SIR + reward sessions and on average, Jill identified more sounds
during SIR + reward sessions. Jill was the only participant for whom the same intervention
was selected across the BEA measures: IR + reward was selected across both LSF and
LSE, while SIR + reward was selected based on flashcard accuracy and the median of
LSF performance. While there was some consistency demonstrated, a single intervention
could not be selected.
Summary. With all three participants, there was minimal variability across
experimental conditions (Baseline, Reward, IR + reward, SIR + reward) making it
difficult to identify the most effective intervention from the BEA and requiring the
consideration of overall trends. Table 14 illustrates the interventions selected for each
participant based on measure. No meaningful decisions could be made from Izzy's or
Sally’s LSE data. As a result, the BEA was unable to identify the most effective
intervention for ongoing implementation for any student.
Table 14
Interventions Identified as Most Effective During BEA Based on Measure and Analysis
          Letter Sound Fluency          Letter Sound Expression           Flashcard
          Trend          Median         Trend            Median           Accuracy

Izzy      IR + reward    IR + reward    None identified  None identified  SIR + reward
Sally     SIR + reward   IR + reward    None identified  None identified  SIR + reward
Jill      IR + reward    SIR + reward   IR + reward      None identified  SIR + reward

Note. "None identified" indicates that no intervention could be identified as most effective.
There was only one participant, Jill, for whom the same intervention was selected
using both BEA measures of LSF and LSE. However, these results should be interpreted
with caution as IR + reward was selected from three similarly performing conditions
(baseline, reward, IR + reward) since it was the only condition in which additional letter
sounds could be taught. Further, IR + reward was only identified when the trend in
performance was evaluated. One would expect consistency across measures and analysis
if there were strong differentiation. Consistency in flashcard performance was evident
across the participants. All three participants, on average, identified more letter sounds in
SIR + reward conditions than in IR + reward conditions. Potential changes to BEA
measurement, specifically the creation of individualized probes for each student, might
allow for more differentiation or the ability to select an intervention. Individualized probes
could be based on the letter sounds being taught for each specific student and created to
ensure that there would be high content overlap between assessment and intervention
materials, which would increase the sensitivity of the probe (Daly et al., 1997;
Petursdottir et al., 2014).
Extended Analysis Evaluating SIR and IR Interventions
Broad performance on intervention measures. The combined impact of both
SIR and IR resulted in an increase over baseline in LSF and LSE for all three
participants, with the greatest overall impact on LSF. As minimal growth was seen in
LSE, SIR + reward and IR + reward may be potentially better suited to address fluency
(performance) deficits rather than overall expression (skill) deficits.
Generalization of letter-sound correspondence skills was evident in all
participants. Improvement in NWF probes over baseline occurred during each
intervention condition. Sally was the only participant for whom NWF gains were not seen
across every letter sound. These results suggest that gains in letter sounds can generalize
beyond individual letter sound probes to more applied contexts. To more specifically
evaluate the impact of intervention, the individual results of SIR + reward and IR +
reward were examined within each participant.
Izzy. It appears that both SIR + reward and IR + reward were able to improve
letter sound correspondence skills in Izzy. Within SIR, Izzy required 6 intervention
sessions to identify /d/ with 100% accuracy, 1 intervention session to identify /qu/ with
100% accuracy, 1 intervention session to identify /e/ with 100% accuracy, and 1
intervention session to identify /B/ with 100% accuracy. However, when considering
baseline, Izzy made the most substantial improvements with 1 letter sound (/B/). Within
IR, Izzy required 1 intervention session to identify /g/ with 100% accuracy, 5 intervention
sessions to identify /b/ with 100% accuracy, 1 intervention session to identify /Y/ with
100% accuracy, and 2 intervention sessions to identify /W/ with 100% accuracy. When
considering baseline, the largest gains were made with 2 letter sounds (/b/ and /g/). There
was no clear differentiation in performance across the two intervention groups or the time
in which improvements were made, suggesting that both interventions supported
developing letter-sound correspondence skills in Izzy. Maintenance was the same across
SIR + reward and IR + reward with 3 out of 4 sounds being identified with 100%
accuracy in each session. As maintenance of skills is the ultimate goal of any
intervention, SIR + reward and IR + reward appeared effective at developing and
subsequently maintaining letter-sound correspondence skills. Generalization data were
limited and while performance on NWF did improve, the growth was not substantial and
suggests that letter-sound correspondence intervention alone was not sufficient for Izzy to
consistently use her skills in context. Based on these results there was no match in
interventions between the BEA and the extended analysis as the BEA was unable to
identify a single intervention for letter-sound correspondence. At the same time, the
extended analysis suggested that both interventions were effective.
Although generalization data were limited, improvement in CLS was evident
across all unknown letter sounds presented on NWF probes during SIR + reward and IR
+ reward conditions. While growth was not substantial, results suggest that letter-sound
correspondence improvements can generalize to letter sounds in other printed contexts.
Sally. Letter-sound correspondence gains were present in both SIR + reward and
IR + reward conditions. Specifically within SIR, Sally required 4 intervention sessions to
identify /qu/ with 100% accuracy, 1 intervention session to identify /n/ with 100%
accuracy, 1 intervention session to identify /y/ with 100% accuracy, and 1 intervention
session to identify /d/ with 100% accuracy. Within IR, Sally required 4 intervention
sessions to identify /j/ with 100% accuracy, 8 intervention sessions to identify /x/ with
100% accuracy, and 3 intervention sessions to identify /v/ with 100% accuracy. The gains
with SIR appeared to occur at a stronger rate of progress. Further, the gains made with
SIR were maintained at a higher level and generalized more strongly to NWF probes. It
appears that Sally responded better to the SIR intervention in learning letter-sound correspondence.
SIR + reward was identified during the BEA as most effective on LSF and flashcard
presentations. Interestingly, the extended analysis focused on improving LSE, and while
SIR + reward was most effective there, it was not identified during the BEA as being
effective for LSE.
Jill. Letter-sound correspondence skills increased in SIR + reward and IR +
reward intervention conditions. Within SIR, Jill required 2 intervention sessions to
identify /o/ with 100% accuracy, 2 intervention sessions to identify /i/ with 100%
accuracy, 1 intervention session to identify /y/ with 100% accuracy, 3 intervention
sessions to identify /b/ with 100% accuracy, and 2 intervention sessions to identify /l/
with 100% accuracy. Within IR, Jill required 1 intervention session to identify /g/ with
100% accuracy, 5 intervention sessions to identify /qu/ with 100% accuracy, and 6
intervention sessions to identify /e/ with 100% accuracy. When considering the letter
sounds that were at or near 0% accuracy in baseline (/y/, /b/, /qu/, /e/), improvements seen
on letter sound probes were larger and made more quickly following SIR + reward than IR +
reward. Gains in NWF were evident following both interventions, with larger gains
following SIR + reward sessions. It appears that SIR + reward was the more effective
intervention for teaching letter-sound correspondence, and allowed for generalization to
other printed contexts. These results differ from the BEA which indicated that IR +
reward might be the most effective intervention to improve letter-sound correspondence
as measured by the LSF probes.
Contributions of a BEA and Intervention to Letter-Sound Correspondence Skills
The current study suggested that a BEA of early literacy skills is unlikely to result
in differentiation of intervention conditions when published LSF probes are used to
measure intervention effectiveness. These findings align with Petursdottir et al. (2014),
who found that differentiation across intervention conditions was only elucidated when
researcher-created measures were used to evaluate participant performance. Aside from
the lack of differentiation, the quick and efficient nature of the BEA process was
demonstrated in this study, highlighting the potential advantages of this assessment
method (Daly et al., 1997; McComas & Burns, 2009). The BEA in this study was
completed quickly: in three sessions for two of the participants and six sessions for one
participant. Further, the impact of the interventions was simultaneously evaluated on
three different measures. Future research should examine the potential use of a BEA with
early literacy skills when individualized probes are created for each participant.
During the extended analysis, participants made gains in letter-sound
correspondence skills following SIR + reward and IR + reward conditions. Both
interventions appeared successful in teaching unknown content. Results from this study
concur with previous research indicating IR is successful in teaching letter-sound
correspondence (DuBois et al., 2014; Peterson et al., 2014; Volpe et al., 2011) and SIR
can be used to teach fact-based information (Kupzyk et al., 2011). Both SIR and IR
appear to be viable intervention options for teaching unknown fact-based information to
students.
When using flashcards to compare the effectiveness of SIR + reward and IR +
reward during the BEA, SIR + reward was consistently identified as more successful than
IR + reward. Comparable results were seen during the extended analysis, as participants
made the most immediate gains with SIR + reward. The current study supports early SIR
research suggesting that the procedural modifications used in SIR increase
student performance over traditional IR (Kupzyk et al., 2011). As highlighted by
Kupzyk et al. (2011), the number of opportunities to respond to unknown content is the
most critical element in intervention. These results encourage further research in SIR, as
initial studies have indicated this may be a strong alternative to IR.
Possible Explanations of Findings and Suggestions for Future Research
The lack of differentiation across BEA conditions may be attributable to a
variety of factors. As mentioned previously, a potentially large contributing factor may
have been the use of letter-sound fluency probes rather than probes individually created
for each participant. Petursdottir et al. (2014) suggested that published letter-sound
fluency probes are not sensitive enough to detect small changes in performance. Because
the probes were timed, participants may not have had the opportunity to respond to all letter
sounds or to reach the sounds that were taught during intervention, thus reducing the
likelihood that changes in letter-sound correspondence could be detected within the BEA.
In the same vein, previous research has used high content overlap passages to ensure that
instructional and assessment passages contain a high level of the same material so that
progress can be adequately measured (Andersen, Daly, & Young, 2013; Eckert & Ardoin,
2002). Specifically, when evaluating ORF interventions, Andersen et al. (2013) used
passages in which, on average, 86% of the words overlapped from instructional to
assessment material. In the current study, the unknown letter sounds taught during
instructional sessions overlapped with, on average, only 24% of the letter sounds presented
during assessment. In other words, the majority of the letter sounds presented during
assessment did not correspond to sounds taught. Measures used to monitor progress
should have sufficient opportunities for the student to demonstrate the skill being taught
(Hosp & Ardoin, 2008). Potentially, there were not enough opportunities for the
participants to demonstrate any skills they learned during intervention. Future iterations
of this study could include the creation of individual letter sound probes for each student
to more effectively and precisely evaluate the impact of intervention.
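The overlap percentage described above can be computed directly. As a minimal sketch, the letter sets below are hypothetical examples rather than the study's actual probe contents:

```python
# Sketch: percent of assessed letter sounds that were also taught.
# The letter sets below are hypothetical, not the study's actual data.

def content_overlap(taught, assessed):
    """Return the percent of assessed items that also appeared in instruction."""
    taught_set = set(taught)
    matches = sum(1 for sound in assessed if sound in taught_set)
    return 100 * matches / len(assessed)

taught_sounds = ["y", "b", "qu", "e", "g"]
assessment_probe = ["a", "b", "c", "e", "m", "s", "t", "y", "qu", "o",
                    "r", "n", "d", "i", "l", "p", "f", "u", "k", "w"]

# Only 4 of the 20 assessed sounds (b, e, y, qu) were taught here,
# so most of the probe does not reflect instructed content.
print(round(content_overlap(taught_sounds, assessment_probe), 1))  # prints 20.0
```

A low overlap value like this one illustrates why a timed, fixed-content probe may offer few opportunities to demonstrate newly taught sounds.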
The letter sounds taught during each BEA session were based upon the results of
the previously completed baseline probe. For example, the first baseline session
identified the letter sounds to be taught in BEA sessions 3 and 4, while the second
baseline session identified the unknown letter sounds to be taught in BEA sessions 7 and
8. As a result, different letter sounds were taught within each BEA intervention session
and were of variable difficulty. Participants may have required additional exposure to the
intervention with the same sounds before differentiation would have been present.
Future research should consider using the same set of letter sounds across each
experimental condition within the BEA to control for potential variation in letter-sound
difficulty and consider additional exposure to the intervention conditions within the BEA.
Finally, the lack of differentiation in the BEA measures may have been related to
the lack of differentiation between the individual interventions. As indicated by Martens
and Gertz (2009), interventions being tested within a BEA must be distinct, individual
interventions. Both SIR and IR employ flashcards to present unknown letter sounds to the
participants. However, SIR only uses unknown content and does not present additional
content until the student has accurately identified all of the sounds presented. Although
SIR and IR are individual interventions, SIR is a modified version of IR with similar
procedures. Potentially, the modifications to IR were not substantial enough to highlight
differences between the two interventions in a BEA context.
During the extended analysis, students made gains in both intervention
conditions, with the most immediate gains following SIR. However, the differentiation in
intervention performance did not maintain after intervention ended, as it had in
previous research (Kupzyk et al., 2011). Maintenance performance in
both conditions appeared to increase without intervention and neared 100% on the
majority of letter sounds. This may be associated with a number of factors and should be
interpreted with caution. First, maintenance data were only collected for two participants
and during a small number of sessions. During the SIR condition, maintenance data were
only collected on a single occasion. Second, students continued to receive classroom
instruction throughout the study, which may have reinforced the letter sounds being
taught and resulted in an inflated level of maintenance. When conducting academic
research, it is difficult to control for classroom learning. Future research may consider the
collection of additional maintenance data.
The use of unknown content only during SIR considerably increased the number
of opportunities participants had to respond to content when compared to IR. In SIR,
participants had a minimum of 21 opportunities to respond to unknown content. The
breakdown of opportunities to respond was as follows: 6 responses to the first unknown
sound, 6 responses to the second unknown sound, 4 responses to the third unknown
sound, 3 responses to the fourth unknown sound, and 2 responses to the fifth sound.
Additional opportunities to respond in SIR were dictated by participant need rather than a
prescribed sequence. In IR, participants responded 12 times to unknown content (8 times
to the first unknown sound and 4 times to the second unknown sound). Although
corrective feedback was provided, participants did not repeat the sequence after incorrect
responses. As described by Kupzyk et al. (2011), the increased number of
opportunities to respond to unknown content may be associated with the more immediate
improvements observed in SIR.
Limitations and Practical Implications
There are several limitations to this study that warrant additional discussion. First,
the set of probes that were used across the participants varied as updated probes became
available once the study had commenced. Izzy was exposed to probes that had upper- and
lowercase letters, requiring her to read the entire probe, while Jill and Sally were only
required to read half of their probes. As letter sounds were already difficult, one could
hypothesize that Izzy became fatigued from the task, which may have resulted in a
decrease in academic performance when reading the entire probe. The variation in probes
made it difficult to measure generalization, especially for Izzy. Capital letters identified
as unknown were not included on NWF probes, thus significantly reducing Izzy’s
generalization data. It would have been better to have consistency in the probes across
each participant, but Izzy had already started the BEA when the updated probes became
available. Generalization data still remained limited for Sally and Jill as some lower case
letters did not appear on the NWF probes (/qu/, /x/). Future iterations could control the
letter sounds being taught to ensure that all taught letter sounds are represented on the
generalization measure.
As is the case in any study conducted in the schools, it is difficult to account for
and control classroom variables. Sally, for example, appeared to master letter sounds
before they were taught in intervention. While it is always optimal for students to gain
academic skills via classroom instruction, Sally’s classroom instruction makes it difficult
to assert that functional control was in place and gains were due to the intervention.
Future studies should better control the letter sounds taught in intervention to
minimize the interference of classroom instruction.
Unknown letter sounds were selected for intervention based on the final baseline
probe during the extended analysis. As a result, letter sounds were taught in intervention
that may have been identified as known in previous probes. A more thorough
examination of all baseline probes could identify consistently unknown sounds and
ensure that sounds without prior knowledge were the only ones addressed during
intervention.
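The selection procedure described above amounts to taking the intersection of the unknown sets across all baseline probes. As a minimal sketch with hypothetical probe results:

```python
# Sketch: selecting only consistently unknown letter sounds across all
# baseline probes (hypothetical probe results, not the study's data).

baseline_unknowns = [            # unknown sounds identified on each probe
    {"y", "b", "qu", "e", "x"},  # baseline probe 1
    {"y", "qu", "e", "x", "g"},  # baseline probe 2
    {"y", "qu", "e", "x"},       # baseline probe 3 (final probe)
]

# The intersection keeps only sounds unknown on every probe, rather than
# relying on the final baseline probe alone.
consistently_unknown = set.intersection(*baseline_unknowns)
print(sorted(consistently_unknown))  # prints ['e', 'qu', 'x', 'y']
```

Here /b/ and /g/ would be excluded because each was known on at least one probe, ensuring that only sounds without any demonstrated prior knowledge enter intervention.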
There are also practical implications highlighted by this study. The use of BEAs
for academic skills is encouraged as multiple interventions can be quickly and easily
evaluated. However, as this study suggests, individualized probes may be necessary to
detect idiosyncratic results across early academic skills. The requirement of individualized
probes increases the amount of time and resources necessary to complete a BEA within
the school setting, thus reducing its ease of implementation and applicability. The use of
individualized probes increases the likelihood of variability and measurement error if
created without a strong understanding of test development. It is possible that the quick
and efficient BEA that is supported throughout the research (Daly et al., 1997; Daly et al.,
1999; Eckert et al., 2002; McComas et al., 2009) may be limited to specific academic
skills.
While IR and SIR were both successfully used to teach unknown letter sounds,
there are practical limitations to each intervention. As IR uses a ratio of known to
unknown content, a minimum level of prior knowledge is required before the intervention
can be implemented. This can reduce the number of students that can initially use this
intervention. A benefit of SIR is the ability to implement the intervention without prior
knowledge of the content being taught. As a result, SIR can be used with students at any
time and regardless of their current level of knowledge. Yet, SIR could be construed as
more difficult as students are only presented with unknown material, which could
potentially result in quicker levels of student fatigue and reduced effort. In sum, SIR and
IR can both be useful in teaching letter sounds, but come with individual advantages and
disadvantages that need to be considered before implementing them with students.
Summary
Overall, the BEA using published early literacy probes was not able to
differentiate across intervention conditions when measuring LSF or LSE. These findings
suggest that a BEA examining early literacy interventions requires individualized probes
in order to identify the best intervention for a single student. As suggested by Petursdottir
et al. (2014), individualized probes should be created for each student and designed to control
the ratio of known to unknown letter sounds so that students have ample opportunities to
respond to the content being taught. During phase two of the study, all participants made
improvements in letter-sound correspondence skills. When examining individual
interventions, SIR + reward and IR + reward improved letter-sound correspondence.
While SIR + reward was responsible for the most immediate improvements, both
interventions sustained high levels of improvement. SIR and IR interventions appear to
be strong options for teaching unknown letter sounds to students. The results of the
extended analysis did not match the results of the BEA, further suggesting that individual
probes may be necessary in order to effectively use a BEA for early literacy intervention
identification.
Primary limitations of this study include the use of probes that were not sensitive
to the intervention, inconsistent probes across participants, and minimal maintenance
data. The probes used are meant to measure overall growth in letter-sound
correspondence, not growth in specific individual letter sounds. Additional research is
necessary to understand if a BEA can be used to identify early literacy interventions for
students and what the necessary components of an early literacy BEA would include.
Future research should include the use of individualized letter sound probes for each
participant and an increase in maintenance data.
REFERENCES
Andersen, M. N., Daly, E. J., & Young, N. D. (2013). Examination of a one‐trial brief
experimental analysis to identify reading fluency interventions. Psychology in the
Schools, 50(4), 403-414.
Aram, D. (2006). Early literacy interventions: The relative roles of storybook reading,
alphabetic activities, and their combination. Reading and Writing,19 (5), 489-515.
Armbruster, B. B., Lehr, F., Osborn, J., & Adler, C. R. (2001). Put reading first: The
research building blocks of reading instruction: Kindergarten through grade 3.
National Institute for Literacy.
Burns, M. K., & Boice, C. H. (2009). Comparison of the relationship between words
retained and intelligence for three instructional strategies among students with
below-average IQ. School Psychology Review, 38(2), 284-292.
Cabell, S. Q., Justice, L. M., Konold, T. R., & McGinty, A. S. (2011). Profiles of
emergent literacy skills among preschool children who are at risk for academic
difficulties. Early Childhood Research Quarterly, 26(1), 1-14.
Casey, A., & Howe, K. (2002). Best practices in early literacy skills. Best practices in
school psychology IV, 1, 721-735.
Castles, A., Coltheart, M., Wilson, K., Valpied, J., & Wedgwood, J. (2009). The genesis
of reading ability: What helps children learn letter–sound
correspondences?. Journal of Experimental Child Psychology, 104(1), 68-88.
Chall, J. S. (1979). The great debate: Ten years later, with a modest proposal for reading
stages. In L. B. Resnick & P. A. Weaver (Eds.), Theory and practice of early
reading (Vol. 1, pp. 29–55). Hillsdale, NJ: Erlbaum.
Chall, J.S. (1983). The stages of reading development. New York: McGraw-Hill.
Christ, T. J., Arañas, Y. A., Kember, J. M., Kiss, A. J., McCarthy‐Trentman, A.,
Monaghen, B. D., Newell, K. W., Van Norman, E. R., White, M. J. (2014).
Formative Assessment System for Teachers Technical Manual: earlyReading,
CBMReading, aReading, aMath, and earlyMath. Minneapolis, MN: Formative
Assessment System for Teachers.
Christ, T. J., Monaghen, B., White, M. J., Martin, M., Norman, E. V., Pike-Balow, A., &
Clark, G. (2013). FAST Test of Letter Sounds. In Formative Assessment
System for Teachers. Minneapolis, MN.
Clay, M. M. (1966). Emergent reading behaviour (Doctoral dissertation,
ResearchSpace@ Auckland).
Crawford, P.A. (1995). Early literacy: Emerging perspectives. Journal of Research in
Childhood Education, 10(1), 71-86.
Daly III, E. J., Chafouleas, S. M., Persampieri, M., Bonfiglio, C. M., & LaFleur, K.
(2004). Teaching phoneme segmenting and blending as critical early literacy
skills: An experimental analysis of minimal textual repertoires. Journal of
Behavioral Education, 13(3), 165-178.
Daly III, E. J., Johnson, S., & LeClair, C. (2009). An experimental analysis of phoneme
blending and segmenting skills. Journal of Behavioral Education,18(1), 5-19.
Daly III, E. J., Martens, B. K., Hamler, K. R., Dool, E. J., & Eckert, T. L. (1999). A brief
experimental analysis for identifying instructional components needed to improve
oral reading fluency. Journal of Applied Behavior Analysis, 32, 83-94.
Daly III, E. J., Witt, J. C., Martens, B. K., & Dool, E. J. (1997). A model for conducting
a functional analysis of academic performance problems. School Psychology
Review, 26, 554 – 574.
DuBois, M. R., Volpe, R. J., & Hemphill, E. M. (2014). A randomized trial of a
computer-assisted tutoring program targeting letter-sound expression. School
Psychology Review, 43(2), 210.
Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P.,
Pagani, L.S., Feinstein, L., Engel, M., Brooks-Gunn, J., Sexton, H., Duckworth,
K., & Japel, C. (2007). School readiness and later achievement. Developmental
Psychology, 43(6), 1428.
Durkin, D. (1982). Getting reading started. Boston, MA: Allyn and Bacon.
Eckert, T. L., Ardoin, S. P., Daisey, D. M., & Scarola, M. D. (2000). Empirically
evaluating the effectiveness of reading interventions: The use of brief
experimental analysis and single case designs. Psychology in the Schools,37(5),
463-473.
Eckert, T. L., Ardoin, S. P., Daly III, E. J., & Martens, B. K. (2002). Improving oral
reading fluency: An examination of the efficacy of combining skill-based and
performance-based interventions. Journal of Applied Behavior Analysis, 35,
271-281.
Ehri, L. C. (2005). Learning to read words: Theory, findings, and issues. Scientific
Studies of Reading, 9(2), 167-188.
Ehri, L. C., Nunes, S. R., Stahl, S. A., & Willows, D. M. (2001). Systematic phonics
instruction helps students learn to read: Evidence from the National Reading
Panel’s meta-analysis. Review of Educational Research, 71(3), 393-447.
Erickson, K.A. (2000). All children are ready to learn: An emergent versus readiness
perspective in early literacy assessment. Seminars in Speech and Language,
21(3),193-203.
Fiester, L. (2010). Early Warning! Why Reading by the End of Third Grade Matters.
KIDS COUNT Special Report. Annie E. Casey Foundation.
Fuchs, L. S., & Fuchs, D. (2004). Determining adequate yearly progress from
kindergarten through grade 6 with curriculum-based measurement. Assessment for
Effective Intervention, 29(4), 25-37.
Fuchs, D., Fuchs, L. S., Al Otaiba, S., Thompson, A., Yen, L., McMaster, K. N., ... &
Yang, N. J. (2001). K-PALS. Teaching Exceptional Children, 33(4), 76-80.
Good, R. H., & Kaminski, R. A.(2002). Dynamic indicators of basic early literacy
skills (6th ed.). Eugene, OR: Institute for the Development of Educational
Achievement.
Good, R. H., & Kaminski, R. A. (2011). DIBELS next assessment manual. Eugene, OR:
Dynamic Measurement Group.
Good, R. H., Kaminski, R. A., & Dill, S. (2002). DIBELS oral reading fluency. In R.
H. Good & R. A. Kaminski (Eds.), Dynamic Indicators of Basic Early Literacy
Skills (6th ed.). Eugene, OR: Institute for the Development of Educational
Achievement.
Good, R. H., Kaminski, R. A., Dewey, E. N., Wallin, J., Powell-Smith, K. A., & Latimer,
R. J. (2011). DIBELS Next technical manual. Eugene, OR: Dynamic
Measurement Group.
Hammill, D. D. (2004). What we know about correlates of reading. Exceptional
Children, 70(4), 453-469.
Hatcher, P. J., Hulme, C., & Snowling, M. J. (2004). Explicit phoneme training combined
with phonic reading instruction helps young children at risk of reading
failure. Journal of Child Psychology and Psychiatry, 45(2), 338-358.
Holdaway, D. (1979). The foundations of literacy. Sydney: Ashton Scholastic.
Hosp, J. L., & Ardoin, S. P. (2008). Assessment for instructional planning. Assessment
For Effective Intervention, 33(2), 69-77.
Hulme, C., Bowyer-Crane, C., Carroll, J. M., Duff, F. J., & Snowling, M. J. (2012). The
causal role of phoneme awareness and letter-sound knowledge in learning to read:
Combining intervention studies with mediation analyses. Psychological
Science, 23(6), 572-577.
Jones, K.M. & Wickstrom, K.F. (2002). Done in sixty seconds: Further analysis of the
brief assessment model for academic problems. School Psychology Review, 31(4),
554-568.
Joseph, L. M. (2006). Incremental rehearsal: A flashcard drill technique for increasing
retention of reading words. The Reading Teacher, 59(8), 803-807.
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first
through fourth grades. Journal of Educational Psychology, 80(4), 437-447.
Kupzyk, S., Daly, E. J., & Andersen, M. N. (2011). A comparison of two flash-card
methods for improving sight-word reading. Journal of Applied Behavior
Analysis, 44(4), 781-792.
Kutner, M., Greenberg, E., Jin, Y., Boyle, B., Hsu, Y., & Dunleavy, E. (2007).
Literacy in Everyday Life: Results From the 2003 National Assessment of Adult
Literacy (NCES 2007-480). National Center for Education Statistics,
Institute of Education Sciences, U.S. Department of Education. Washington, DC
Lafferty, A. E., Gray, S., & Wilcox, M. J. (2005). Teaching alphabetic knowledge to pre
school children with developmental language delay and with typical language
development. Child Language Teaching and Therapy, 21(3), 263-277.
Lesnick, J., Goerge, R., Smithgall, C., & Gwynne, J. (2010). Reading on grade level in
third grade: How is it related to high school performance and college
enrollment. Chicago: Chapin Hall at the University of Chicago.
Lonigan, C. J., Burgess, S. R., & Anthony, J. L. (2000). Development of emergent
literacy and early reading skills in preschool children: Evidence from a latent-variable
longitudinal study. Developmental Psychology, 36(5), 596.
Lonigan, C. J., Purpura, D. J., Wilson, S. B., Walker, P. M., & Clancy-Menchetti, J.
(2013). Evaluating the components of an emergent literacy intervention for
preschool children at risk for reading difficulties. Journal of Experimental Child
Psychology, 114(1), 111-130.
Lonigan, C. J., Schatschneider, C., & Westberg, L. The National Early Literacy Panel.
(2008). Identification of children’s skills and abilities linked to later
outcomes in reading, writing, and spelling. National Early Literacy Panel (US) &
National Center for Family Literacy (Eds.), Developing early literacy: Report of
the National Early Literacy Panel: Scientific synthesis of early literacy
development and implications for intervention, 55-106.
MacQuarrie, L. L., Tucker, J. A., Burns, M. K., & Hartman, B. (2002). Comparison of
retention rates using traditional, drill sandwich, and incremental rehearsal flash
card methods. School Psychology Review, 31(4), 584-595.
Magit, E. R., & Shinn, M. R. (2002). Administration and scoring for use with AIMSweb.
Eden Prairie, MN: Edformation.
Martens, B. K., & Gertz, L. E. (2009). Brief experimental analysis: A decision tool for
bridging the gap between research and practice. Journal of Behavioral
Education, 18(1), 92-99.
McComas, J. J., & Burns, M. K. (2009). Brief experimental analyses of academic
performance: Introduction to the special series. Journal of Behavioral
Education, 18(1), 1-4.
McComas, J. J., Wagner, D., Chaffin, M. C., Holton, E., McDonnell, M., & Monn, E.
(2009). Prescriptive analysis: Further individualization of hypothesis testing in
brief experimental analysis of reading fluency. Journal of Behavioral
Education,18(1), 56-70.
Morphett, M., & Washburne, C. (1931). When should children begin to read? Elementary
School Journal, 31, 496-503.
National Center for Education Statistics (2011). The Nation’s Report Card: Reading 2011
(NCES 2012-457). Institute of Education Sciences, U.S. Department of
Education, Washington, D.C.
National Center for Education Statistics (2013). The Nation’s Report Card: A First Look:
2013 Mathematics and Reading (NCES 2014-451). Institute of Education
Sciences, U.S. Department of Education, Washington, D.C.
Nist, L., & Joseph, L. M. (2008). Effectiveness and efficiency of flashcard drill
instructional methods on urban first-graders' word recognition, acquisition,
maintenance, and generalization. School Psychology Review, 37(3), 294-308.
Noell, G. H., Freeland, J. T., Witt, J. C., & Gansle, K. A. (2001). Using brief assessments
to identify effective interventions for individual students. Journal of School
Psychology, 39(4), 335-355.
Peterson, M., Brandes, D., Kunkel, A., Wilson, J., Rahn, N. L., Egan, A., & McComas, J.
(2014). Teaching letter sounds to kindergarten English language learners using
incremental rehearsal. Journal of School Psychology, 52(1), 97-107.
Petursdottir, A. L., McMaster, K., McComas, J. J., Bradfield, T., Braganza, V., Koch,
McDonald, J., Rodriguez, R., & Scharf, H. (2009). Brief experimental analysis of
early reading interventions. Journal of School Psychology, 47(4), 215-243.
Piasta, S. B., & Wagner, R. K. (2010a). Developing Early Literacy Skills: A Meta
Analysis of Alphabet Learning and Instruction. Reading Research Quarterly,
45(1), 8-38.
Piasta, S. B., & Wagner, R. K. (2010b). Learning letter names and sounds: Effects of
instruction, letter type, and phonological processing skill. Journal of
Experimental Child Psychology, 105(4), 324-344.
Piasta, S. B., Justice, L. M., McGinty, A. S., & Kaderavek, J. N. (2012). Increasing
young children’s contact with print during shared reading: Longitudinal effects on
literacy achievement. Child Development, 83(3), 810-820.
Scarborough, H. (2001). Connecting early language and literacy to later reading
(dis)abilities: Evidence, theory, and practice. In S.B. Neuman & D.K. Dickinson
(Eds.), Handbook of early literacy research (pp. 97-110). New York: Guilford
Press.
Schatschneider, C., Fletcher, J. M., Francis, D. J., Carlson, C. D., & Foorman, B. R.
(2004). Kindergarten prediction of reading skills: A longitudinal comparative
analysis. Journal of Educational Psychology, 96(2), 265.
Schreder, S. J., Hupp, S. D., Everett, G. E., & Krohn, E. (2012). Targeting reading
fluency through brief experimental analysis and parental intervention over the
summer. Journal of Applied School Psychology, 28(2), 200-220.
Sénéchal, M., LeFevre, J. A., Smith-Chant, B. L., & Colton, K. V. (2001). On refining
theoretical models of emergent literacy: The role of empirical evidence. Journal of
School Psychology, 39(5), 439-460.
Silver, Burdett, & Ginn. (1991). World of Reading. Morristown, NJ: Author.
Snow, C., Burns, S., & Griffin, P. (Eds.) (1998). Preventing reading difficulties in young
children. Washington, DC: National Academies Press.
Spira, E. G., Bracken, S. S., & Fischel, J. E. (2005). Predicting improvement after first
grade reading difficulties: the effects of oral language, emergent literacy, and
behavior skills. Developmental Psychology, 41(1), 225.
Teale, W. H., & Sulzby, E. (1986). Emergent literacy as a perspective for examining how
young children become writers and readers. Emergent literacy: Writing and
reading. Norwood, NJ: Ablex.
Teale, W. H., & Sulzby, E. (1989). Emergent literacy: New perspectives. Emerging
literacy: Young children learn to read and write, 1-15.
Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School
Psychology, 40(1), 7-26.
Vadasy, P. F., & Sanders, E. A. (2008). Code-oriented instruction for kindergarten
students at risk for reading difficulties: a replication and comparison of
instructional groupings. Reading and Writing, 21(9), 929-963.
Volpe, R. J., Burns, M. K., DuBois, M., & Zaslofsky, A. F. (2011). Computer‐assisted
tutoring: Teaching letter sounds to kindergarten students using incremental
rehearsal. Psychology in the Schools, 48(4), 332-342.
Whitehurst, G. J., & Lonigan, C. J. (1998). Child development and emergent
literacy. Child Development, 69(3), 848-872.
Whitehurst, G. J., & Lonigan, C. J. (2001). Emergent literacy: Development from
prereaders to readers. Handbook of early literacy research, 1, 11-29.
APPENDIX A
Historical View of Reading Development
Reading development has been the subject of literature and research for centuries,
but the current review focuses on the last 90 years highlighting historical perspectives
that were integral in paving the way for contemporary beliefs (Teale & Sulzby, 1986).
Beginning in the 1920s and continuing into the 1950s, maturationalist philosophy drove
theories of child development (Crawford, 1995; Durkin, 1982; Teale & Sulzby, 1986). As
described by physician Arnold Gesell, children’s development was a result of biological
maturation. Through a sequence of stages, neural ripening automatically resulted in new
behaviors and allowed children to gain needed knowledge and self-awareness (Durkin,
1982; Teale & Sulzby, 1986). Progression through the developmental stages could not be
hurried, but would occur naturally and accompany biological ripening (Crawford,
1995). With this, the idea of reading readiness was born and a boundary was created
between “real” reading and everything that occurred prior (Whitehurst & Lonigan, 2001).
As the testing movement began to take hold in the United States in the 1930s,
Morphett and Washburne (1931) further explored this concept of reading readiness and
asserted that children could not learn to read until an appropriate mental age of six years
and six months (Durkin, 1982; Teale & Sulzby, 1986). Both Morphett and Washburne
(1931) and maturationalists presented views that advocated waiting until children were
“reading ready” or had obtained the mental age and biological maturity for reading
instruction. As these views became increasing popular, reading readiness tests were
developed throughout the mid-1930s and early-1940s and were used to determine if
children were mature enough to begin reading instruction (Erickson, 2000; Teale &
Sulzby, 1986). Educators began to use reading readiness tests diagnostically as a way to
determine areas of deficit for skill remediation (Durkin, 1982). The continued use of
reading readiness tests into the 1950s marked a shift away from the maturationalist view
of reading toward a developmentalist view of reading, as educators began to believe that
reading readiness was not simply a product of biology or maturity, but could be taught.
Although developmentalists echoed similar beliefs that children must be ready
before they could learn to read, they believed readiness was created by building on
children’s experiences (Crawford, 1995; Durkin, 1982). For much of the late-1950s and
into the early-1960s, the belief remained that reading readiness was influenced by
environmental factors and thus could be manipulated (Teale & Sulzby, 1986). Because of
this, reading readiness programs became widespread and focused on the teaching of
prerequisite reading skills (Durkin, 1982; Teale & Sulzby, 1986). Typically, programs
addressed visual and auditory discrimination and memory, letter names and sounds, and
word recognition (Teale & Sulzby, 1986). The use of reading readiness tests and
programs remained prominent through the 1980s and exemplified the belief that students
must complete preparatory work successfully before reading could be explicitly and
formally taught.
In summary, the reading readiness paradigm present in the 1980s and 1990s
asserted that reading comprises distinct skills that develop in a sequential fashion
(Crawford, 1995; Teale & Sulzby, 1986). First, children must master a set of prerequisite
skills before being considered ready for reading instruction (Crawford, 1995; Erickson, 2000;
Teale & Sulzby, 1986). According to reading readiness supporters, reading does not
develop prior to the onset of formal instruction so activities that occur before school entry
are irrelevant to reading development (Teale & Sulzby, 1986). Chall’s (1979, 1983)
stages of reading development clearly illustrate the reading readiness perspective. Chall
identified six stages of reading development, stage zero through stage five. Each stage develops
in succession with increasingly advanced skills in the subsequent stages. According to
Chall (1983), stage zero is the prereading stage and occurs from birth to six years old.
During this stage, students master prerequisite reading skills; it is not until stage one
(six to seven years old) that reading begins (Chall, 1979, 1983). These stages suggest that
literacy development cannot begin until stage one and highlight the divide that reading
readiness places between “real” reading and prior literacy experiences.
Although reading readiness beliefs dominated reading research throughout the
early part of the century, perspectives started to shift when Marie Clay’s (1966)
dissertation examined the reading behaviors of first-grade students (Crawford, 1995;
Erickson, 2000). Clay is credited as being the first to acknowledge that behaviors
occurring prior to school are important in the development of reading, and she presented
findings that were contradictory to the reading readiness perspective. Specifically, Clay
(1966) acknowledged that students come to school with already-developed reading and
writing knowledge and are able to apply this knowledge within the school setting. She
argued that reading and writing began to develop prior to formal instruction, during early
life. Clay (1966) coined the term emergent literacy to describe these early reading and
writing skills (Crawford, 1995; Joseph, 2006; Teale & Sulzby, 1986).
Clay’s (1966) use of the term emergent emphasized the belief that reading is part
of a developmental continuum (Teale & Sulzby, 1986; 1989). Clay did not believe that
reading began when children met a prescribed maturity level or age, but rather that
literacy development is part of an ongoing process for children with no set start time
(Teale & Sulzby, 1986). Therefore, all early literacy behaviors and interactions should be
considered emergent, as they are related to behaviors and skills that occur later in the
developmental continuum. Further, Clay's use of the term literacy helped to frame
children’s development in a way that included reading, writing, and use of oral language
(Crawford, 1995; Teale & Sulzby, 1986). Because Clay viewed literacy development as a
continuum, she rejected the ideas of prereading and reading readiness.
Although reading readiness advocates were prevalent when Clay’s ideas were
introduced, support for emergent literacy continued to spread. Holdaway (1979)
presented a developmental model of reading and writing through an examination of
behaviors exhibited by children prior to school entry. Echoing Clay (1966), Holdaway
(1979) asserted that using the term pre-reading skills to categorize literacy skills such as
alphabet knowledge and conventions of print de-emphasized the necessity of these early
skills. Holdaway (1979) argued that these skills represent an early manifestation of more
advanced skills and should be described appropriately as emergent reading skills.
Teale and Sulzby (1986) presented strong support and a formal introduction to
emergent literacy when they suggested adopting the term to describe early development
of reading and writing. Teale and Sulzby (1986) argued that new terminology was needed
to represent a paradigm shift in early reading to describe reading as a developmental
continuum, not an all-or-nothing occurrence (Whitehurst & Lonigan, 1998). Further, the term
emergent literacy was meant to distinguish this view from previous readiness views (Sénéchal,
LeFevre, Smith-Chant, & Colton, 2001). Teale and Sulzby (1986) stated explicitly that
“literacy development is the appropriate way to describe what was called [previously]
reading readiness" (p. xviii). Children concurrently develop reading, writing, and
language skills, so literacy is the term needed to encompass all areas (Teale & Sulzby,
1989). Perhaps the most influential contribution from this paradigm describes literacy
development as beginning at birth (Teale & Sulzby, 1986; 1989). Children are exposed to
and use literacy behaviors within their home and school community from birth until
school entry. Through exposure and active engagement children create a knowledge base
that is used once instruction begins (Crawford, 1995). All literacy behaviors that occur
prior to school entry are considered legitimate and important for literacy development
(Whitehurst & Lonigan, 1998). The emergent literacy paradigm is now considered a
contemporary tradition and has resulted in theoretical models (i.e., Whitehurst &
Lonigan, 1998) outlining specific elements of emergent literacy.
APPENDIX B
Baseline Task Analysis (Letter Sound Fluency)
Student:
Session:
IOA Initials:
Instruction (each step scored for Occurrence, +/-)
1) Read directions: "I am going to show you some letters on a page; you will tell me the sound of each letter."
2a) Administer sample items: "This is /f/. Now your turn. What is the sound of this letter?"
2b) Point to or indicate the second sample letter: "What is the sound of this letter?" (Student answers independently.)
3) Repeat sample items if not correctly identified.
4) Read standardized directions: "Here are some more letter sounds for you to read. When I say begin, say the sound of each letter. Read across the page and then go to the next line. Try to say each letter sound. If there is a letter sound you don't know, I will tell it to you. Put your finger under the first letter. Ready, begin."
5) Set the timer for one minute.
6) Provide letter sounds if the student doesn't know or pauses for 3 seconds.
APPENDIX C
Baseline Task Analysis (Nonsense Word Fluency)
Student:
Session:
IOA Initials:
Instruction (each step scored for Occurrence, +/-)
1) Present the sample page.
2a) Administer sample directions: "We are going to read some make-believe words. This word is sog; the sounds are /s/ /o/ /g/. I can say the sounds of the word, /s/ /o/ /g/, or I can read the whole word, sog."
2b) Administer sample items: "Your turn. Read this make-believe word. If you don't know the whole word, tell me any sounds you know."
*3a) Praise if correct and present the whole probe, OR
*3b) Repeat sample items if not correctly identified.
4) Remove sample items and present the probe.
5) Read standardized directions: "Here are some more make-believe words. When I say, 'Begin,' read each word. If you can't read the whole word, tell me any sounds you know. Put your finger under the first word. Ready, begin."
6) Set the timer for one minute.
7) Administer the probe until the timer sounds.
*Either 3a or 3b will occur, not both.
APPENDIX D
Reward Task Analysis
Student:
Session:
IOA Initials:
Instruction (each step scored for Occurrence, +/-)
1. Reward contingency told to student (i.e., "You now need to get 54 letter sounds right to earn a prize.")
2. Test of Letter Sounds probe administered (one minute).
3. Student told results of Test of Letter Sounds probe.
4a. Student meets contingency; prize awarded.
4b. Student does not meet contingency; prize not awarded.
APPENDIX E
Incremental Rehearsal + Reward Task Analysis
Student:
Session:
IOA Initials:
Unknown letter sounds:
Known letter sounds:
Instruction (each step scored for Occurrence, +/-; an Error Correction column, which includes reminder, prompt, and accurate response from student, is scored where applicable and marked N/A otherwise)
1. First unknown letter sound (U1) presented: "This letter makes the [letter sound] sound."
2. Student prompted: "What sound does it make?"
3. Student response.
4. First known letter sound (K1) presented: "What sound?"
5. Student response.
6. U1 presented: "What sound?"
7. Student response.
8. K1 presented: "What sound?"
9. Student response.
10. K2 presented: "What sound?"
11. Student response.
12. U1 presented: "What sound?"
13. Student response.
14. K1 presented: "What sound?"
15. Student response.
16. K2 presented: "What sound?"
17. Student response.
18. K3 presented: "What sound?"
19. Student response.
20. U1 presented: "What sound?"
21. Student response.
22. K1 presented: "What sound?"
23. Student response.
24. K2 presented: "What sound?"
25. Student response.
26. K3 presented: "What sound?"
27. Student response.
28. K4 presented: "What sound?"
29. Student response.
30. Second unknown letter sound (U2) presented: "This letter makes the [letter sound] sound."
31. Student prompted: "What sound does it make?"
32. Student response.
33. K1 presented: "What sound?"
34. Student response.
35. U2 presented: "What sound?"
36. Student response.
37. K1 presented: "What sound?"
38. Student response.
39. K2 presented: "What sound?"
40. Student response.
41. U2 presented: "What sound?"
42. Student response.
43. K1 presented: "What sound?"
44. Student response.
45. K2 presented: "What sound?"
46. Student response.
47. K3 presented: "What sound?"
48. Student response.
49. U2 presented: "What sound?"
50. Student response.
51. K1 presented: "What sound?"
52. Student response.
53. K2 presented: "What sound?"
54. Student response.
55. K3 presented: "What sound?"
56. Student response.
57. K4 presented: "What sound?"
58. Student response.
59. Reward contingency told to student (i.e., "You now need to get 54 letter sounds right to earn a prize.")
60. Test of Letter Sounds probe administered.
61. Student told results of Test of Letter Sounds probe.
62a. Student meets contingency; prize awarded.
62b. Student does not meet contingency; prize not awarded.
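For readers who wish to script the flashcard order in this task analysis, the interleaving pattern can be expressed compactly. This is a minimal illustrative sketch, not part of the study's materials; the function name and the ("teach"/"probe", letter) tuple format are assumptions introduced here.

```python
def incremental_rehearsal(unknowns, knowns):
    """Build the card-presentation order used in the incremental
    rehearsal steps above: each unknown sound is modeled once, then
    rehearsed within an expanding run of known sounds (K1; K1, K2;
    K1, K2, K3), ending with a pass through all four knowns."""
    sequence = []
    for u in unknowns:
        sequence.append(("teach", u))  # "This letter makes the ... sound."
        # Expanding interleave: K1, U; K1, K2, U; K1, K2, K3, U; then all knowns.
        for i in range(1, len(knowns)):
            sequence.extend(("probe", k) for k in knowns[:i])
            sequence.append(("probe", u))
        sequence.extend(("probe", k) for k in knowns)
    return sequence

# Hypothetical letters: one unknown sound, four known sounds.
order = incremental_rehearsal(["m"], ["a", "t", "p", "n"])
```

With one unknown and four knowns, this reproduces the 13 probe presentations of steps 4 through 28 above, preceded by the single teaching trial.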
APPENDIX F
Systematic Incremental Rehearsal + Reward Task Analysis
Student:
Session:
IOA Initials:
Unknown letter sounds:
Known letter sounds:
Instruction (each step scored for Occurrence, +/-; an Error Correction column, which includes reminder, prompt, and accurate response from student, is scored where applicable and marked N/A otherwise)
1. First unknown letter sound (U1) presented: "This letter makes the [letter sound] sound."
2. Student prompted: "What sound does it make?"
3. Student response.
4. Second unknown letter sound (U2) presented: "This letter makes the [letter sound] sound."
5. Student prompted: "What sound does it make?"
6. U1 presented: "This letter makes the [letter sound] sound."
7. Student prompted: "What sound does it make?"
8. Student response.
9. U2 presented: "This letter makes the [letter sound] sound."
10. Student prompted: "What sound does it make?"
11. Student response.
12. Cards shuffled.
13. Unknown letter sound card presented without prompt.
14. Student response.
15. Unknown letter sound card presented without prompt.
16. Student response.
17. Cards shuffled and presented again.
18. Third unknown letter sound (U3) presented: "This letter makes the [letter sound] sound."
19. Student prompted: "What sound does it make?"
20. Student response.
21. Cards shuffled.
22. Unknown letter sound card presented without prompt.
23. Student response.
24. Unknown letter sound card presented without prompt.
25. Student response.
26. Unknown letter sound card presented without prompt.
27. Student response.
28. Fourth unknown letter sound (U4) presented: "This letter makes the [letter sound] sound."
29. Student prompted: "What sound does it make?"
30. Student response.
31. Cards shuffled.
32. Unknown letter sound card presented without prompt.
33. Student response.
34. Unknown letter sound card presented without prompt.
35. Student response.
36. Unknown letter sound card presented without prompt.
37. Student response.
38. Unknown letter sound card presented without prompt.
39. Student response.
40. Fifth unknown letter sound (U5) presented: "This letter makes the [letter sound] sound."
41. Student prompted: "What sound does it make?"
42. Student response.
43. Cards shuffled.
44. Unknown letter sound card presented without prompt.
45. Student response.
46. Unknown letter sound card presented without prompt.
47. Student response.
48. Unknown letter sound card presented without prompt.
49. Student response.
50. Unknown letter sound card presented without prompt.
51. Student response.
52. Unknown letter sound card presented without prompt.
53. Reward contingency told to student (i.e., "You now need to get 54 letter sounds right to earn a prize.")
54. Test of Letter Sounds probe administered.
55. Student told results of Test of Letter Sounds probe.
56a. Student meets contingency; prize awarded.
56b. Student does not meet contingency; prize not awarded.
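The systematic variant above can likewise be sketched in code. This is an illustrative assumption-laden sketch, not the study's materials: the function name and tuple format are invented here, and the repeat presentation at step 17 is omitted for simplicity.

```python
import random

def systematic_incremental_rehearsal(unknowns, seed=None):
    """Build the card order for the systematic incremental rehearsal
    steps above: only unknown sounds are used. The first two unknowns
    are each modeled twice; each later unknown is modeled once, after
    which the deck of all unknowns introduced so far is shuffled and
    each card is probed without a verbal prompt."""
    rng = random.Random(seed)  # seed allows a reproducible shuffle
    sequence = []
    for u in unknowns[:2] * 2:  # U1, U2, U1, U2 (steps 1-11)
        sequence.append(("teach", u))
    deck = list(unknowns[:2])
    rng.shuffle(deck)
    sequence.extend(("probe", c) for c in deck)  # steps 12-16
    for u in unknowns[2:]:  # U3, U4, U5, ...
        sequence.append(("teach", u))
        deck.append(u)
        rng.shuffle(deck)
        sequence.extend(("probe", c) for c in deck)
    return sequence

# Hypothetical set of five unknown letter sounds.
cards = systematic_incremental_rehearsal(["m", "s", "f", "b", "l"], seed=0)
```

With five unknowns, this yields the seven teaching trials and the 2 + 3 + 4 + 5 unprompted probes described in steps 1 through 52.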