Modeling Polymorphemic Word Recognition: Exploring Differences Among Children With Early-Emerging and Late-Emerging Word Reading Difficulty

Journal of Learning Disabilities, 1–27
© Hammill Institute on Disabilities 2014
DOI: 10.1177/0022219414554229
Devin M. Kearns, PhD1, Laura M. Steacy, MEd2, Donald L. Compton, PhD2, Jennifer K. Gilbert, PhD3, Amanda P. Goodwin, PhD2, Eunsoo Cho, PhD2, Esther R. Lindstrom, MEd2, and Alyson A. Collins, MEd2

1 University of Connecticut, Storrs, USA
2 Vanderbilt University, Nashville, TN, USA
3 Metropolitan Nashville Public Schools, TN, USA

Corresponding Author:
Devin M. Kearns, University of Connecticut, 249 Glenbrook Road, Unit 3064, Storrs, CT 06269, USA.
Email: [email protected]
Abstract
Comprehensive models of derived polymorphemic word recognition skill in developing readers, with an emphasis on
children with reading difficulty (RD), have not been developed. The purpose of the present study was to model individual
differences in polymorphemic word recognition ability at the item level among 5th-grade children (N = 173) oversampled
for children with RD using item-response crossed random-effects models. We distinguish between two subtypes of RD
children with word recognition problems, those with early-emerging RD and late-emerging RD. An extensive set of
predictors representing item-specific knowledge, child-level characteristics, and word-level characteristics were used to
predict item-level variance in polymorphemic word recognition. Results indicate that item-specific root word recognition
and word familiarity; child-level RD status, morphological awareness, and orthographic choice; word-level frequency and
root word family size; and the interactions of morphological awareness with RD status and of root word recognition with root transparency predicted individual differences in polymorphemic word recognition item performance. Results are interpreted within a multisource individual difference model of polymorphemic word recognition skill spanning item-specific, child-level, and word-level knowledge.
Keywords
reading, cognitive aspects; reading, individual difference predictors of; reading
An essential development in learning to read is the acquisition of automatic word recognition skills (i.e., the ability to
pronounce written words) that are impenetrable to factors
such as knowledge and expectation (Perfetti, 1992;
Stanovich, 1991). This allows the developing reader to recognize words, while utilizing little processing capacity, and
provides the comprehension processes with the raw materials required to operate efficiently (Perfetti, 1985).
Disruptions in the acquisition of word recognition skills can
significantly compromise reading comprehension development in children (Perfetti & Hart, 2001; Perfetti & Stafura,
2014). As a result, difficulty in the acquisition of context-free word recognition skill is one of the most reliable indicators of reading difficulties (RDs; Compton, Miller,
Elleman, & Steacy, 2014; Lovett et al., 1994; Stanovich,
1986, 1991; Torgesen, 2000; Vellutino, 1979).
The study of word recognition skill and supporting
subprocesses in typically developing (TD) children and
children with RD constitutes one of the largest literatures in
the fields of educational and cognitive psychology (e.g.,
Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2001;
Vellutino, Fletcher, Snowling, & Scanlon, 2004). However,
this literature has been dominated by the study of monosyllabic word recognition with little attention paid to polysyllabic word recognition skill (Roberts, Christo, & Shefelbine,
2011). Perry, Ziegler, and Zorzi (2010) argued that a focus on modeling monosyllabic word recognition is understandable due to the complicated nature of polysyllabic word recognition.
Yet it is unclear whether the behavioral effects reported
for monosyllabic words generalize reliably to polysyllabic
words. Researchers have long suggested that reading polysyllabic words requires processing of syllables (Muncer &
Knight, 2012; Prinzmetal, Treiman, & Rho, 1986; Spoehr &
Smith, 1973) or syllable-like units (e.g., Taft, 1979, 1992,
2001; but see Adams, 1981; Seidenberg, 1987). In naming
studies, syllables affect adults’ reading behavior. Jared and
Seidenberg (1990) found that low-frequency words with
more syllables had longer response times. Regression analyses of naming mega-studies have shown similar effects,
even when controlling for possible confounds (e.g., Chetail,
Balota, Treiman, & Content, in press; Muncer, Knight, &
Adams, 2014; Yap & Balota, 2009). Others have shown syllable effects in lexical decision in mega-study analyses
(Muncer & Knight, 2012; New, Ferrand, Pallier, &
Brysbaert, 2006). In addition, Fitzsimmons and Drieghe
(2011) showed that readers were less likely to skip disyllabic than monosyllabic words in an analysis of eye-tracking data. Thus, it is unlikely that readers process polysyllabic
words in precisely the same way that they process monosyllabic words.
Furthermore, for developing readers, polysyllabic word
recognition appears to be a crucial skill as these words carry
much of the content-specific semantic information needed
to comprehend informational texts (Bryant, Ugel,
Thompson, & Hamff, 1999). In addition, developing readers encounter these words quite often, and the proportion of polysyllabic words increases throughout elementary school. As Figure 1 shows, the
number of polysyllabic words children encounter increases
dramatically in the middle elementary grades, increasing by
more than 19,000 in third, fourth, and fifth grade over each
previous year.
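The estimate plotted in Figure 1 (its caption, below, gives the formula) is simply the frequency-weighted proportion of polysyllabic words in each grade-specific corpus multiplied by the estimated number of words children read in that grade. A minimal R sketch of that computation follows; the data objects and column names (zeno_grade, syllables, freq, ar_words_read) are hypothetical stand-ins, not the authors' actual files:

```r
# Hedged sketch of the Figure 1 estimate, treating words with two or more
# syllables as polysyllabic.
poly_ar_words <- function(zeno_grade, ar_words_read) {
  # zeno_grade: data frame for one grade with columns syllables and freq
  # ar_words_read: estimated number of words a child reads in that grade
  is_poly <- zeno_grade$syllables >= 2
  prop_poly <- sum(zeno_grade$freq[is_poly]) / sum(zeno_grade$freq)
  prop_poly * ar_words_read  # estimated polysyllabic words encountered
}
```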
Important differences exist between monosyllabic and
polysyllabic words requiring developing readers to bring to
bear more advanced knowledge of orthography, phonology,
and morphology on the problem of recognizing polysyllabic words. For instance, when confronted with an unfamiliar polysyllabic word, developing readers must decide
where to place syllable boundaries (Perry et al., 2010), how
to place stress and reduce vowels (Chomsky, 1970; Ševa,
Monaghan, & Arciuli, 2009), which grapheme combinations constitute a pronounceable unit along with the corresponding phonological representation (e.g., SI = /ʒ/ [vision];
Berninger, 1994), and how to handle the ambiguity of vowel
letter pronunciations (e.g., i in linen versus minor; Venezky,
1999). These complex decisions associated with polysyllabic words pose important challenges to the development
of advanced word recognition skills in TD readers and children with RD.
Figure 1. Estimated number of polysyllabic words children encounter in each grade, from first through eighth. The estimate of the proportion of polysyllabic words is from the Zeno, Ivens, Millard, and Duvvuri (1995) corpus of words, based on the number of syllables in each word in each grade-specific database, weighted by their frequency of occurrence. The estimate of child reading is from Renaissance Learning's (2014) estimate of the number of words children read, as tracked by its Accelerated Reader program. This likely underestimates the number of polysyllabic words children encounter. The equation is as follows:

\[
\frac{\sum_{i=1}^{PW_k} \mathrm{PolyWord}_{ik} \times \mathrm{Freq}_{ik}}{\sum_{i=1}^{AW_k} \mathrm{AllWords}_{ik} \times \mathrm{Freq}_{ik}} \times \mathrm{ARWords}_k = \mathrm{PolyARWords}_k,
\]

where PW_k represents the total number of polysyllabic words in the Zeno et al. (1995) database, AW_k represents the total number of words in the database, i represents a given word, and k represents the grade for which the value is calculated.

In addition, polysyllabic words often contain multiple morphemes, with research suggesting that in both adults
(e.g., Nagy, Anderson, Schommer, Scott, & Stallman, 1989)
and developing readers (Carlisle & Stone, 2005; Nagy,
Berninger, & Abbott, 2006; Nagy, Berninger, Abbott,
Vaughn, & Vermeulen, 2003) morphemes function as perceptual units that influence word recognition. However,
children’s ability to identify and use morphological units in
polymorphemic words likely depends initially on the transparency of the morpheme (Carlisle & Stone, 2005;
Schreuder & Baayen, 1995), consistency of the relationships between the orthographic and phonological units of
the morphological units (Chateau & Jared, 2003; Jared &
Seidenberg, 1990; Yap & Balota, 2009), and the frequency
of the base morphemes that make up the word (Carlisle &
Katz, 2006; Nagy, Anderson, Schommer, Scott, & Stallman,
1989).
For derived polymorphemic words (i.e., base words with
one or more prefixes or suffixes), studies have shown that
both child and word characteristics affect word recognition,
with children higher in morphological awareness (MA)
skill generally able to take greater advantage of morphological structure in recognizing words (Carlisle, 2000;
Verhoeven, Schreuder, & Baayan, 2003) and words with
high-frequency phonologically transparent base morphemes
being easier for children to recognize (e.g., Carlisle, 2000;
Singson, Mahony, & Mann, 2000). A derived polymorphemic word is phonologically transparent when the pronunciation of the base word remains intact in the derived form (e.g., culture in cultural). The importance of morphological
structure in word recognition and vocabulary development
should not be underestimated, as Nagy et al. (1989) estimated that about 60% of the unfamiliar words encountered
by students above 4th grade are polymorphemic and sufficiently transparent in structure and meaning so that a reader
might be able to read and infer the meaning of the word in
context.
Several studies have observed a developmental shift in
children’s ability to use morphological units, particularly
those lacking phonological transparency, to help recognize
polymorphemic words (Carlisle, 2000; Singson et al.,
2000). Both the Carlisle (2000) and Singson et al. (2000)
studies reported that older elementary students (Grades 5
and 6, respectively) were significantly better than younger
elementary students (Grade 3) in reading derived polymorphemic words, with older students being noticeably better
at reading words that lacked phonological transparency
(such as natural). Results suggest that younger developing
readers have difficulty negotiating phonological shifts from base to derived words and thus are less likely to take advantage of morphological structure when the base word lacks phonological transparency.
Similar to younger TD readers, children with RD have
been shown to have difficulties recognizing the morphemic
structure of polymorphemic words that are not phonologically transparent (Carlisle, Stone, & Katz, 2001; Champion,
1997; Leong, 1989; McCutchen, Logan, & Biangardi-Orpe,
2009; Windsor, 2000). However, there are some data to suggest that children with RD can benefit from transparent
morphemic structure in recognizing words (Elbro &
Arnbak, 1996). Carlisle and Stone (2005) speculate that this
might be because recognition of high-frequency base words
and affixes assists children with RD in decoding long and
seemingly unfamiliar words.
To date, comprehensive models of derived polysyllabic-polymorphemic word recognition (hereafter, polymorphemic word recognition) skill in developing readers, with an
emphasis on children with RD, have not been developed. In
particular, models have not combined item-level, child-level, and word-level predictors that allow the complex
nature of these effects to be modeled simultaneously. The
purpose of the present exploratory study was to model individual differences in polymorphemic word recognition ability among 5th-grade children oversampled for children with
RD. We further distinguish between two subtypes of RD
children with word recognition problems, those with early-emerging RD (ERD) and late-emerging RD (LERD). We
use item-response crossed random-effects models (Bates,
Maechler, & Bolker, 2013) to explain variability in children’s polymorphemic word recognition ability at the item
level using a comprehensive set of item-, child-, and word-level predictors (described below). These models allow us to model child effects, word effects, and child-by-word interactions simultaneously; to determine whether variance in word recognition ability is related to child or word factors; and to estimate how much of the variance due to child- and word-level predictors can be explained. Item-response crossed random-effects models also allow us to consider how children's
item-specific knowledge relates to their polymorphemic
word recognition ability (e.g., the probability of a child
reading natural correctly given nature was read correctly).
Because the models involve item-level responses, we can
examine whether general child-level and word-level characteristics matter after accounting for child item-specific
knowledge. In this study we examine predictors representing item-specific child-level knowledge (isolated root word
recognition skill and word familiarity), general child-level
characteristics (RD status, phonological processing, orthographic knowledge, rapid automatized naming (RAN),
semantic knowledge, MA, sentence repetition, working
memory, and attention rating), general word-level characteristics (word frequency, orthographic Levenshtein distance [OLD], root word family frequency, root word
frequency, suffix frequency, and root word transparency),
and various interactions (RD status by MA, by phonological processing skill, and by root word family frequency, along with root word recognition skill by root word transparency) as
predictors of individual differences in polymorphemic word
recognition. We provide a brief explanation and rationale
for including each predictor in our models.
Item-Specific Child Effects
We examine the effects of two measures of item-specific
child-level knowledge—root word recognition and word
familiarity. We expect, consonant with the results of
Goodwin and colleagues (Goodwin, Gilbert, & Cho, 2013;
Goodwin, Gilbert, Cho, & Kearns, 2014) and Kearns (in
press), that children’s ability to read root words will be an
important predictor of their polymorphemic word recognition ability. Clearly, root words share considerable orthographic and phonological overlap with derived words,
which makes them especially salient predictors. Our second measure of item-specific knowledge was children's subjective ratings of familiarity with the polymorphemic words that
make up the recognition task. Gernsbacher (1984) showed
that self-reported familiarity explained adults’ naming
latencies for low-frequency words. Similarly, Connine,
Mullennix, Shernoff, and Yelen (1990) observed consistent
effects of familiarity on naming latency across experiments manipulating familiarity and frequency, whereas frequency
effects were sometimes absent. As mentioned previously,
our models allow us to examine whether general child-level
and word-level characteristics matter after accounting for
these important item-specific knowledge measures.
Child-Level Effects
We included multiple child-level language measures that
have previously been shown to be strongly associated with
word reading development. We incorporated two measures
tapping phonological processing skill—phonological
awareness (PA) and RAN. Both PA (P. G. Bowers, 1995;
Georgiou, Parrila, & Kirby, 2006; Kirby, Parrila, & Pfeiffer,
2003) and RAN (Compton, 2000; Kirby, Georgiou,
Martinussen, & Parrila, 2010; Norton & Wolf, 2012;
Steacy, Kirby, Parrila, & Compton, 2014; Wolf, Bowers, &
Biddle, 2000) have been shown to account for independent
and unique variance for concurrent and future word recognition achievement. We also include MA, a metalinguistic
skill related to morphological knowledge that has been
shown to predict significant variance in word reading, even
after accounting for vocabulary knowledge and phonological awareness skill (e.g., Carlisle & Nomanbhoy, 1993;
Deacon & Kirby, 2004; Kearns, in press; Mahony, Singson,
& Mann, 2000). Evidence suggests that MA skill plays an
important role in children’s use of morphemic units in
polymorphemic word recognition (Carlisle & Katz, 2006;
Carlisle & Stone, 2005; Nagy et al., 1989). Vocabulary
knowledge was also included in the model as studies indicate a potentially important role for semantic knowledge in
word recognition (Nation & Snowling, 2004; Ricketts, Nation, & Bishop, 2007; Taylor, Plunkett, & Nation,
2011). We included two additional language measures,
sentence repetition and working memory skill, in the models as potential nuisance variables representing important
language and cognition skills.
A measure of child-level orthographic knowledge skill
was included in the model. Measures of orthographic
knowledge have been shown to be significant predictors of
word recognition development above and beyond the effects
of phonological awareness and semantic knowledge
(Cunningham, Perry, & Stanovich, 2001). In the present
study, we use a measure of orthographic choice (OC) to
examine whether children’s ability to make OCs relates to
their ability to recognize polymorphemic words. We also
included a measure tapping child attention (i.e., teacher rating) as it has been linked to reading performance and
response to intervention (Miller et al., in press; Torgesen
et al., 2001).
Finally, we consider reading classification (i.e., ERD,
LERD, and TD) as a child-level predictor in the models.
There is a growing body of literature to suggest that some
students who exhibit typical reading skill growth in the
early years develop difficulty with reading later in elementary school as the demands of reading increase (Catts,
Compton, Tomblin, & Bridges, 2012; Leach, Scarborough,
& Rescorla, 2003; Lipka, Lesaux, & Siegel, 2006). Research
suggests that these students with LERD can be classified
into three distinct groups: (a) late-emerging word reading
difficulties (LERD-W), (b) late-emerging comprehension
difficulties (LERD-C), and (c) late-emerging comprehension and word recognition difficulties (LERD-CW). Lipka
et al. (2006) estimate that the RDs of between 36% and 46% of students who have RD in Grade 5 can be attributed to LERD. Of these students with LERD, nearly half exhibit some kind of difficulty at the word level, either independent of or in conjunction with comprehension difficulties (Catts et al., 2012). One contributor to these unexpected
RDs may be the increasing demands placed on the reader at
the word level; as students are expected to read more complex text, they are faced with polysyllabic words that require
advanced skills in phonological decoding, orthographic
processing, semantics, and derivational morphology (Catts
et al., 2012; Leach et al., 2003). In the present study, we
consider the characteristics of children with LERD, children with ERD, and TD children and their relationship with
polymorphemic word recognition.
Word-Level Effects
In addition to child-level characteristics, the features of the
words themselves may also relate to polymorphemic word
recognition. Word frequency effects play a role in most
studies of polysyllabic word recognition (e.g., Jared &
Seidenberg, 1990; Yap & Balota, 2009). Further, studies
have shown that frequency effects are moderated by the
consistency of the polysyllabic words, such that the effect
of frequency is only present for low-frequency, low-consistency words (Chateau & Jared, 2003; Jared & Seidenberg,
1990). We also included OLD, a measure of orthographic
neighborhood size, which has been shown to predict performance on both lexical decision and word recognition tasks
(Yarkoni, Balota, & Yap, 2008). For derived words, research
has shown that the frequency of a word’s morphological
features may be important. Carlisle and Katz (2006)
observed that a factor combining information about the size
of a word’s morphological family (e.g., the number of
words derived from nature) and the root word’s frequency
predicted fourth- and sixth-grade readers’ ability to pronounce derived words. Studies by Reichle and Perfetti
(2003) and Schreuder and Baayen (1997) support the idea
that the size of the root word family affects word recognition. Suffix frequency describes how often a given suffix
occurs in the lexicon, and evidence from Arciuli, Monaghan,
and Ševa (2010) and Jarmulowicz (2002) suggests that word
reading may be more accurate for words with more frequent
suffixes. Finally, Goodwin et al. (2013) found evidence that
seventh and eighth graders’ ability to pronounce root words
correctly interacted with transparency, such that reading a
root word correctly related to derived word reading accuracy only when the word was morphologically transparent.
In this study, we examined whether these features related to
polymorphemic word recognition ability for children with
LERD, children with ERD, and TD children.
Research Questions
In the present study we ask two related research questions.
The first investigates the relative roles of item-specific knowledge (root word recognition ability and subjective familiarity); child-level skills thought to be important for word recognition acquisition (reader RD status, PA, MA, orthographic knowledge, RAN, and vocabulary, also controlling for sentence repetition, working memory, and attention rating); and word-level characteristics (word and root frequency, root word family size, and morphological transparency) as predictors of individual differences in children's polymorphemic word recognition.
We hypothesized that being able to recognize the root
word accurately (i.e., item-specific knowledge) would substantially increase the likelihood that the polymorphemic
word would be recognized correctly. We also anticipated
that familiarity (i.e., the presence of a phonological representation in the lexicon) with the target polymorphemic word would have predictive power separate from root word reading by
allowing lexical information to aid phonological recoding.
Among general child-level indices of orthographic, phonological, rapid naming, semantic, and morphological skill,
we expected PA and MA to be most important. In this
diverse sample, children with RD have poor phonological
skills, but they are likely to rely on these skills in the same
way as TD children (for a detailed discussion see Metsala,
Stanovich, & Brown, 1998). In addition, polymorphemic
word recognition requires subtle phonological adjustments
after recoding, and these adjustments are likely easier for
children with good phonological skills. We anticipated that
MA would play an important role since the words were
derived and children would likely apply their knowledge of
morpho-syntactic relationships, particularly for words they
were less familiar with.
The second research question examines how children
with different reading profiles (defined based on RD status:
ERD, LERD, and TD) compare in their ability to recognize
polymorphemic words. Children with LERD may differ
from children with ERD and TD children in the skills that
relate to their ability to recognize polymorphemic words.
We anticipated an order effect, with the ERD group being
least accurate in reading polymorphemic words, followed by the LERD and then the TD group. We consider two child-level
skills expected to differ between children with LERD, children with ERD, and TD groups. We anticipated that the
effect of phonological awareness would vary across groups,
with a greater impact of PA for children in the ERD group.
Our hypothesis is that the LERD group has relatively intact
PA skills that allowed typical word reading development
during early reading instruction. We also expected the effect
of MA to differ across groups. We predicted that children
with LERD might exhibit specific difficulties using morphological knowledge to aid in recognition of polymorphemic words. Our hypothesis regarding the differential effect
of MA across RD groups’ polymorphemic word recognition
skill is consistent with speculation by Catts et al. (2012) and
Leach et al. (2003) that more complex words require
advanced morphological knowledge that may not be available to the LERD group.
Methods
Participants
Participants were drawn from a multiyear cohort longitudinal study examining response-to-intervention decision rules
(see Compton et al., 2010) and prevention efficacy (see
Gilbert et al., 2013) in first-grade children. For this study
children were assessed from the end of first grade through
fourth grade on measures of word identification and comprehension. At the end of fourth grade, single indicator hidden Markov models were fit separately for the four time
points representing word reading and reading comprehension development. These models are considered a first-order Markov process where the transition matrices are
specified to be equal over time (i.e., measurement invariance across time; Langeheine & van de Pol, 2002). Hidden
Markov models are a form of latent class analysis known as
latent transition analysis (LTA), where class indicators (categorical variables indicating RD and TD groups) are measured over time and individuals are allowed to transition
between latent classes. LTA addresses questions concerning
prevalence of discrete states and incidence of transitions
between states and produces parameter estimates corresponding to the proportion of individuals in each latent
class initially, as well as the probability of individuals
changing classes with time. LTA models were generated
using mixture-modeling routines contained in Mplus 5.0
(Muthén & Muthén, 1998–2012). Model estimation was
carried out using a maximum likelihood estimator with
robust standard errors. Detailed discussions of LTA can be
found in Collins and Wugalter (1992) and Reboussin,
Reboussin, Liang, and Anthony (1998). A cut point of the
25th percentile, based on national norms, was used at each
time point to represent RD in word reading and reading
comprehension.

Table 1. Total Sample of Fourth-Grade Students, Fifth-Grade Students Sampled, and Counts of the Various Reading Classes Derived From Latent Transitional Analysis as a Function of Cohort.

Reading class   Cohort 1: Fourth    Cohort 2: Fourth    Cohort 3: Fourth    Total sample: Fourth
                grade/fifth grade   grade/fifth grade   grade/fifth grade   grade/fifth grade
TD              172/38              64/30               86/41               322/109
ERD-W           1/1                 1/0                 1/0                 3/1
ERD-C           28/11               23/10               13/5                64/26
ERD-CW          15/8                10/4                10/6                35/18
LERD-W          10/6                6/3                 8/6                 24/15
LERD-C          36/27               35/12               30/17               101/56
LERD-CW         23/16               18/4                16/10               57/30
Total           285/107             157/63              164/85              606/235

Note. TD = typically developing; ERD-W = early-emerging word reading difficulties; ERD-C = early-emerging comprehension difficulties; ERD-CW = early-emerging comprehension and word recognition difficulties; LERD-W = late-emerging word reading difficulties; LERD-C = late-emerging comprehension difficulties; LERD-CW = late-emerging comprehension and word recognition difficulties. Bolded fifth-grade numbers represent the sample used in the present study.

Model fit was estimated with the
likelihood ratio chi-square. The likelihood ratio compares
the observed response proportions with the response proportions predicted by the model (Kaplan, 2008). As with
most structural equation modeling–based models, the null
hypothesis for chi-square model tests is that the specified
model holds for the given population, and therefore failing to reject the null hypothesis implies that the model is plausible.
All models across cohorts were found to fit the data
adequately.
The output of interest from each LTA model (i.e., word
reading and reading comprehension) was the assignment of
each child to either RD or TD classes as a function of time.
In this study LERD membership was defined as a child who
transitioned from an initial classification of TD to RD over
time; ERD as a child who was assigned to the RD class at the
end of first grade and remained in the class across time; and
TD as a child who was assigned to the TD class at the end of
first grade and remained in the class over time. (A very small
number of children in the sample transitioned from RD to
TD over time, but this group was not included in this study.)
In the case of word reading, LTA allowed the identification
of classes representing TD, early-emerging word reading
difficulties (ERD-W), and LERD-W and for reading comprehension classes representing TD, early-emerging comprehension difficulties (ERD-C), and LERD-C. Results
from the two LTA models were combined to further identify
early-emerging comprehension and word recognition difficulties (ERD-CW) and LERD-CW classes. Thus, seven
latent classes were identified: TD, ERD-W, ERD-C,
ERD-CW, LERD-W, LERD-C, and LERD-CW. Counts for
the various reading classes identified through LTA in fourth
grade as a function of cohort are displayed in Table 1. We
then selected from the larger fourth-grade sample a subsample of children to be assessed in the fall of fifth grade. The
parents of the target children consented to their participation
in three 1-hr testing sessions measuring reading, language,
knowledge, executive function, and attention. Our sampling
strategy attempted to consent all children with ERD and
LERD in a given cohort and then to randomly select TD
children to assess. Table 1 provides the number of children
who were consented and administered the fifth-grade battery. Since this study specifically targeted word-reading
skills, we selected only ERD and LERD classes in which
word-reading deficits were present: ERD-W (n = 1),
ERD-CW (n = 18), LERD-W (n = 15), and LERD-CW (n =
30) along with TD children (n = 109). The word only and
the combined word and comprehension deficit groups were
merged for the ERD (n = 19) and LERD (n = 45) groups.
Descriptive statistics for the full sample and a subsample
that received the word familiarity rating are provided in
Table 2. The sampling plan for each cohort is described below.
Cohort 1 (Compton et al., 2010). Participants in the first
cohort were selected from 56 first-grade classrooms in 14
schools within an urban district located in the southeastern region
of the United States. Seven study schools were Title I institutions. We assessed every formally consented child (n =
712) with three 1-min study identification measures: word
identification fluency (WIF) screen, rapid letter naming
(RLN), and rapid letter sound. With WIF screen, children
are presented with a single page of 50 high-frequency words
randomly sampled from 100 high-frequency words from
the Dolch preprimer, primer, and first-grade-level lists
(Fuchs, Fuchs, & Compton, 2004). They have 1 min to read
words. With RLN and rapid letter sound, the speed at which
children name an array of the 26 letters and the sounds of
the letters is measured. For all three measures, scores were
prorated if a child named all items in less than 1 min. We
used these data to divide the 712 children into high-, average-, and low-performing groups with the use of latent class
analysis and then randomly selected study children from
each group (for details see Compton et al., 2010). We oversampled low-performing children to increase the number of
struggling readers in the prediction models.

Table 2. Demographic Statistics for Children in Study.

                        Full sample analysis (N = 173)    Familiarity analysis (n = 103)
Variable                n      %       M (SD)             n      %       M (SD)
Age (years)                            10.77 (0.45)                      10.77 (0.42)
Gender
  Female                93     53.76                      56     54.37
  Male                  80     46.24                      47     45.63
Grade
  3                     1      0.58
  4                     9      5.20                       6      5.83
  5                     163    94.22                      97     94.17
Group
  ERD                   19     10.98                      10     9.71
  LERD                  45     26.01                      22     21.36
  TD                    109    63.01                      71     68.93
Race
  African American      84     48.55                      50     48.54
  Asian                 7      4.05                       4      3.88
  Caucasian             65     37.57                      38     36.89
  Hispanic              5      2.89                       3      2.91
  Kurdish               5      2.89                       2      1.94
  Biracial              5      2.89                       5      4.85
  Other                 2      1.16                       1      0.97

Note. ERD = early-emerging reading difficulty; LERD = late-emerging reading difficulty; TD = typically developing.

We included
485 children: 310 in the low, 83 in the average, and 92 in the high study-entry group. Follow-up testing was performed
at the end of first through fourth grade. At follow-up in the
spring of fourth grade, 200 of the original 485 children
(41% of the sample) had moved from the district and were
unavailable for assessment.
Cohorts 2 and 3 (Gilbert et al., 2013). The sampling procedures for Cohorts 2 and 3 were identical and are therefore
combined here. Initially we asked first-grade teachers to
identify the lowest half of their class in terms of reading
skill. Children in Cohort 2 were drawn from nine schools (5
Title I) in 37 first-grade classrooms and children in Cohort
3 from nine schools (5 Title I) and 32 first-grade classrooms
within an urban district located in the southeastern region of the
United States. We screened 628 of the identified and consented students with three 1-min measures: two WIF lists
and an RLN. Scores were prorated if a student named all
items in less than 1 min. To identify an initial pool of students potentially at elevated risk for poor reading outcomes
we applied latent class analysis (Nylund, Asparouhov, &
Muthén, 2007) to the three screening measures. The purpose of such an analysis was to obtain model-based latent
(unobserved) categories of students who are performing
similarly on the three screening measures. Models were
developed and evaluated using Mplus Version 6 (Muthén &
Muthén, 1998–2010). A clear category of at-risk students
was identified for Cohort 2 (n = 223) and Cohort 3 (n = 214). Students not populating the at-risk category
were excluded from further follow-up. A portion of the at-risk first-grade children were randomly assigned to 14 weeks of small-group tutoring or a business-as-usual control
group (for details see Gilbert et al., 2013). Follow-up testing was performed at the end of first through fourth grade.
At follow-up in the spring of fourth grade, 66 of the original
223 children (30% of the sample) in Cohort 2 and 50 of the
original 214 children (23% of the sample) in Cohort 3 had
moved from the district and were unavailable for assessment. A chi-square test was performed to examine the relationship between first-grade tutoring (treatment and control)
and fourth-grade reading class (ERD, LERD, and TD).
Results indicate no relationship between first-grade treatment and reading class assignment in fourth grade, χ2(2,
N = 172) = 0.28, p = .868.
Measures
LTA measures (Grades 1–4)
Word identification. Word identification was measured
with the Word Identification subtest from the Woodcock
Reading Mastery Tests–Revised/Normative Update (Woodcock, 1998). For this task, children were asked to read
words aloud one at a time. The test was not timed, but children were encouraged to move to the next item after 5 s
of silence. Correct pronunciations were counted as correct,
and the total score was the sum of correct items. Basal and
ceiling rules were applied. The examiner’s manual reports
the split half reliability for fifth-grade students as .91
(Woodcock, 1998).
Passage comprehension. General comprehension was measured with the Passage Comprehension subtest from the
Woodcock Reading Mastery Tests–Revised/Normative Update
(Woodcock, 1998). For this test, children are asked to
silently read one to two sentence prompts in which a single
word has been removed. Children were asked to provide the
omitted word. Basal and ceiling rules were applied. Split-half reliability, provided by the Technical Manual, is .91 for
9-year-olds and .89 for 10-year-olds (Woodcock, 1998).
Child measures (Grade 5)
Attention (SWAN). On the SWAN (Swanson et al., 2006),
teachers answered nine questions regarding children’s
attention. We created a composite SWAN attention score by
taking the mean of the nine ratings for each child. Since we
had more SWAN data for fourth-grade data collection than
for fifth, we used the fourth-grade ratings rather than the
fifth. The coefficient alpha for the SWAN is .97.
Recalling sentences. We administered the Recalling Sentences subtest of the Clinical Evaluation of Language Fundamentals–Fourth Edition (Semel, Wiig, & Secord, 2003).
The measure relies on syntax and phonological memory.
Instructions to the child were to listen carefully and repeat
exactly what the examiner said. Items are 32 single sentences in order of least to most syntactically complex.
Answers were assigned a score of 3 for no errors, 2 for a
single error, 1 for two or three errors, or 0 for four or more
errors. Errors were mispronunciations, omissions, substitutions, additions, repetitions, and self-corrections, although
leeway was given for local dialect per the instructions found
in the test manual. The examiner stopped administration
after five consecutive scores of 0 or when all items had been
administered. The total score was the sum of item scores.
Interrater agreement on scoring was computed for at least
20% of each examiner’s sessions and was found to be .97.
The test manual provides a coefficient alpha of .89 for children age 10 and 11.
MA. We created an MA composite by taking the mean
of each child’s scores on four different MA tests for three
reasons. First, recent reviews of morphological research
highlight measurement problems with MA measures (P. N.
Bowers, Kirby, & Deacon, 2010; Carlisle, 2010; Carlisle
& Goodwin, 2013). Second, combining measures addresses
the multifaceted nature of MA. Third, inclusion of multiple
measures increases reliability (Shadish, Cook, & Camp-
bell, 2002). Three of the four measures were suffix choice
tests (Nagy et al., 2003). The examiner read the items and
answer choices aloud to the students individually, a procedure designed to separate MA from reading skill. For
the first test, the child saw 25 incomplete sentences and
chose the derivational form of the word that completed
the sentence correctly (e.g., Did you hear the [directs,
directions, directing, directed]?). For the second test, the
examiner presented the child with a pseudo-derived word
(e.g., dogless) and four sentences using it. The child was
asked to choose the sentence in which it made sense (e.g.,
When he got a new puppy, he was no longer dogless.).
The test had five items. For the third test, the examiner
presented the child with four nonword options containing grammatical information (e.g., jittling, jittles, jittled,
jittle) and asked her to choose the one that fit a sentence.
This test had 14 items. The fourth measure in the composite was the morphological relatedness test (adapted from
Derwing, 1976, as used in Nagy et al., 2003). After two
example items, test administrators read 12 pairs of words
aloud (while students had visual access to the items) and
asked students to judge whether one word (e.g., quickly)
comes from another word (e.g., quick). Foils were pairs of
orthographically but not semantically related words (e.g.,
mother, moth). The sample specific coefficient alpha for
these collective tasks is .87, which is consistent with a
review of the literature that suggests these tasks are reliable (nonword: α = .73, Lesaux & Kieffer, 2010; combined
real word and nonword: α = .77, Ramirez, Chen, Geva, &
Kiefer, 2010; relatedness combined with multiple tasks: α
= .92, Goodwin et al., 2013; α = .79, McCutchen, Green,
& Abbott, 2008).
OC. The OC task in this study was a shortened version of
the one used by Olson, Kliegl, Davidson, and Foltz (1985).
Whereas the original test had 80 items, the version we used
had only 40 (only the odd-numbered items from the original test were retained). Children were presented two sheets
of paper, each containing two columns of test items. They
were asked to circle the real word in each pair. Each item
comprised the correctly spelled word and a pseudohomophone foil (e.g., rain and rane). Children completed and
received feedback on 4 practice items prior to beginning the
test. The total score was the sum of the correct items. Coefficient alpha for our sample was .76.
PA. PA was measured with the Elision subtest of the
Comprehensive Test of Phonological Processing (Wagner,
Torgesen, & Rashotte, 1999). In this test, children were presented a word, asked to repeat the word, and then asked
to say the word without a specified syllable for the first
3 items and without a specified phoneme for the remaining 17 items. Items were ordered by increasing difficulty,
and the examiner discontinued administration after 3
consecutive incorrect items. In addition to the 20 test items
(for 5 of which examiners provided performance feedback),
6 practice items were administered. The total score was the
sum of correct items. Coefficient alpha provided by the
manual for age 10 is .91 and age 11 is .86.
RAN. RAN was assessed using the Rapid Letter Naming subtest of the Comprehensive Test of Phonological Processing (Wagner et al., 1999). Two versions of the test were
given. On both, six letters were randomly printed in four
rows of nine letters. After ensuring each child could identify
the letters, she was told to name the letters as fast as possible. The total score was the number of seconds it took the
child to name the letters on both tests. Test-retest reliability
is .72 for children of ages 8 to 17 years per the test manual.
Working memory. On the Working Memory Test Battery
for Children, Listening Recall subtest (Gathercole & Pickering, 2000), children were asked to state whether series of
sentences were true or false and remember the final word
of each sentence. For example, the examiner said, “Lions
have four legs,” and the child responded, “True.” Then the
examiner said, “Pineapples play football,” and the child
responded, “False, legs, football.” Children’s scores were
based on their ability to recall the words in the correct order.
The examiner administered the items in spans of 6 items,
beginning with one-sentence items. The examiner discontinued testing if the child reached the ceiling of 3 or more
errors within a span. At the beginning of testing, the child
responded to 2 one-sentence practice items and received
feedback. Then, the child responded to the 6 items in the
one sentence span. If the child answered 4 or more items
correctly, the child responded to 2 two-sentence practice
items and received feedback. Then, the child responded to
the 6 items in the two-sentence span. Testing continued up
to six-sentence spans without further practice or until the
child reached the ceiling. The sample-specific coefficient
alpha for this task was .85.
Reading group. Children were classified into one of three
reading groups based on the LTA analyses (see above): ERD
(ERD-W and ERD-CW), LERD (LERD-W and LERD-CW), and TD children. To contrast group performance, two dummy codes were created, one comparing the ERD group to the LERD group (designated as ERD) and the other comparing the TD group to the LERD group (designated as TD).
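As an aside on how such contrasts are typically set up, a hedged R sketch of the two dummy codes (using a hypothetical group factor and data frame name) might look like this:

```r
# LERD serves as the reference group, so it receives 0 on both dummy codes.
poly_data$erd <- as.integer(poly_data$group == "ERD")  # ERD vs. LERD contrast
poly_data$td  <- as.integer(poly_data$group == "TD")   # TD vs. LERD contrast
```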
Missing data. Four children had missing data on some
measures. Two children were missing a single item on the
SWAN, and their composite scores were computed for eight
items. One child was missing SWAN data for fourth and
fifth grades, so we substituted the mean score for his RD
subgroup (LERD). Another child was missing data for the
OC task, so we substituted the mean OC score for his RD
subgroup (LERD).
Word measures
Frequency. Word frequency for the items on the polymorphemic word recognition list was coded using the Educator’s Word Frequency Guide (Zeno et al., 1995). This
corpus contains more than 60,000 samples of text derived
from multiple sources ranging from textbooks to popular
fiction literature. The chosen index for this study was the
standard frequency index (SFI). Breland’s (1996) formula
for SFI is as follows: SFI = 10 × (log10U + 4). U represents
a word’s type frequency per million tokens, adjusted for the
dispersion across content areas.
OLD. OLD was obtained from The English Lexicon
Project database (Balota et al., 2007). OLD is determined
by calculating the mean Levenshtein distance between the
word and its 20 closest neighbors, meaning the minimum
number of substitutions, insertions, or deletions required to
get from the given word to the 20 words with the greatest
amount of orthographic overlap (Yarkoni et al., 2008).
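For readers unfamiliar with the metric, OLD can be approximated with base R's Levenshtein distance function; this is a sketch based on the description above, not The English Lexicon Project's implementation, and the lexicon argument is a hypothetical word list:

```r
# Mean Levenshtein distance from a word to its 20 closest neighbors (OLD20).
old20 <- function(word, lexicon, n = 20) {
  others <- setdiff(lexicon, word)
  d <- as.vector(adist(word, others))       # base-R edit distances
  mean(sort(d)[seq_len(min(n, length(d)))]) # average over the n nearest words
}
# old20("natural", elp_word_list)  # hypothetical lexicon
```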
Root word family size. To determine the root word family size, we identified all of the cases in which our polymorphemic derived word’s root word was present in other
words. We used the roots given for each derived word in
the CELEX lemma database (Baayen, Piepenbrock, & van
Rijn, 1993). We then linked these to the word form database
and counted all unique word forms containing that root,
including inflected and derived forms.
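A minimal sketch of this family-size count, assuming a hypothetical long-format table linking CELEX word forms to their root lemmas (the column names are ours, not CELEX field names):

```r
# Count unique word forms (inflected and derived) containing a given root.
root_family_size <- function(root, wordform_table) {
  # wordform_table: data frame with columns word_form and root_lemma
  length(unique(wordform_table$word_form[wordform_table$root_lemma == root]))
}
# root_family_size("nature", celex_wordforms)  # e.g., natural, naturally, ...
```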
Root word frequency. Root frequency was obtained from
the SFI for the root word, as determined by the Educator’s
Word Frequency Guide (Zeno et al., 1995).
Suffix frequency. CELEX (Baayen et al., 1993) reports the
roots and affixes for 52,447 English lemmas along with the
lemmas’ frequencies. The frequencies for the lemmas containing each suffix were summed to create a token-based suffix
frequency. For example, the suffix -ly occurred in 3,101 lemmas, and the sum of these lemmas’ frequencies was 237,503.
The log of the token suffix frequency was taken to normalize
the distribution. Thus, the suffix frequency for -ly was 12.38.
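A quick arithmetic check of the -ly example suggests the reported value reflects the natural logarithm of the summed lemma frequencies:

```r
token_suffix_freq <- 237503  # sum of frequencies for the 3,101 -ly lemmas
log(token_suffix_freq)       # natural log = 12.378, reported as 12.38
log10(token_suffix_freq)     # base-10 log = 5.376, which does not match
```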
Transparency. To determine the morphological transparency of each polymorphemic word, we examined whether
there was a shift in pronunciation or spelling when a suffix
was added to create a derived word (Carlisle, 2000). Words
were coded as containing (1) phonological and orthographic shifts (e.g., peace to pacify), a phonological shift
(e.g., confess /kənˈfɛs/ to confession /kənˈfɛʃən/), or an
orthographic shift (e.g., judge to judgment) or (0) no shifts
(i.e., transparent words; e.g., classic to classical).
Child-by-item measures (Grade 5)
Polymorphemic word recognition. Polymorphemic word
recognition was assessed with a 30-item experimenter-
created list of words. A subset of words used by Carlisle
and Katz (2006) served as a basis for the present word list.
All words were morphologically complex derived words
(e.g., intensity), containing a root word (e.g., intense) plus a
suffix (e.g., -ity). Students were presented the list of words
and asked to read the words aloud one at a time. Correct
pronunciations were scored 1, and incorrect pronunciations
were scored 0. Coefficient alpha for our sample was .94.
Interrater agreement for item scoring was .95. Data for one
word, bucketful, were not available for OLD frequency in
The English Lexicon Project database, so this word was not
included in analyses. Therefore, the full analysis was conducted with 29 items (for a list see Appendix A).
Familiarity. Familiarity with the items on the polymorphemic word list was a measure of item-specific
knowledge. We considered familiarity a proxy for lexical knowledge in the form of a semantic representation,
phonological representation, or both. We asked students
to judge their familiarity with each word. To do this, the
examiner presented a word orally and asked the child to
respond “yes” if she had ever heard the word, “no” if she
had never heard the word, or “not sure” if she was unsure.
A list of 60 words was presented to the child, 30 target
polymorphemic words and 30 rare polymorphemic words.
We operationalized rare words as those with a minimum
age of acquisition of 600 (where 700 is the index maximum; Gilhooly & Logie, 1980), a maximum written frequency of 1 (Kučera & Francis, 1967), and a maximum
verbal frequency of 10 (Brown, 1984). These indices were
obtained from the MRC Psycholinguistic Database (Wilson, 1988). Each word’s familiarity was coded 1 if the child
had ever heard the word and 0 if the child had never heard
the word or was unsure. The foil items were not used in
the analysis. Coefficient alpha for all items was .83 for our
sample. Only children in the second and third cohorts rated
the words’ familiarity. A fraction of these children did not
rate familiarity for the word convention, so we dropped this
word from the familiarity data set. As a result, we analyzed
the data first without the familiarity data (the full analysis,
reflecting the inclusion of all 173 children and 29 words)
and then with the familiarity data (the familiarity analysis,
reflecting the use of the familiarity ratings, available for
103 children and 28 words).
Root word recognition. Each word on the experimenter-created polymorphemic word list comprised a root word
and a suffix. In a separate testing session several days after
the children read the polymorphemic words, they were
asked to read each of the root words associated with the
polymorphemic words. Each item was scored 1 for a correct
pronunciation or 0 for an incorrect pronunciation. Coefficient alpha for our sample was .94. Interrater agreement for
item scoring was .98.
Procedure
Test examiners were graduate research assistants who had
been trained on tests until procedures were implemented
with 90% fidelity. Most students were tested in three 1-hr
sessions, although a minority were tested in two 1.5-hr sessions or in one 3-hr session. All tests were given individually, audio-recorded for reliability/fidelity purposes, and
scored by the original examiner. Children received small
school-related prizes or a $5 gift card for participating in
each testing session. All tests were double-scored and double-entered; discrepancies were resolved by a third examiner. Average fidelity of implementation procedures
exceeded 94% for all tests. Study data were entered and
managed using REDCap electronic data capture tools
(Harris et al., 2009).
Analyses
We used a series of item-response crossed random effects
models (Wilson & De Boeck, 2004), also called explanatory crossed random effects response models (De Boeck,
2008; Janssen, Schepers, & Peres, 2004). We explained
item-level polymorphemic word recognition variability in
terms of person abilities and item difficulties with (a) the
root reading and familiarity item-specific covariates, (b)
child measures of reading and reading-related skills, and (c)
word characteristics. We implemented this approach using
Laplace approximation available through the lmer function
(Bates et al., 2013) from the lme4 library in R (R
Development Core Team, 2012).
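To make the specification concrete, the following is a hedged sketch of how such a crossed random-effects logistic model can be fit with current lme4 (glmer is the binomial-family counterpart of the lmer call cited above); the data frame and column names are hypothetical, not the study's variable names:

```r
library(lme4)

# poly_data: one row per child-by-word response (long format)
#   correct   : 1 = derived word read correctly, 0 = incorrect
#   child_id  : child identifier (crossed random effect)
#   word_id   : word identifier (crossed random effect)
#   erd, td   : dummy codes contrasting ERD and TD with the LERD reference
#   root_read : item-specific covariate (root word read correctly)

# Unconditional model with crossed person and item random effects (Model 0)
m0 <- glmer(correct ~ 1 + (1 | child_id) + (1 | word_id),
            data = poly_data, family = binomial, nAGQ = 1)  # Laplace approximation

# Model 1 adds the a priori group contrasts; Model 2 adds the item-specific covariate
m1 <- glmer(correct ~ erd + td + (1 | child_id) + (1 | word_id),
            data = poly_data, family = binomial)
m2 <- update(m1, . ~ . + root_read)

anova(m0, m1, m2)  # likelihood-ratio comparisons of nested models
```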
Data analyses were conducted twice. First, we conducted
the full analysis with 173 children and 29 words. Second,
we conducted the familiarity analysis for 103 children and
28 words for which we had complete familiarity-rating
data. Before conducting analyses, the child and word
covariates were grand-mean centered. Variables with very
different scales can cause convergence failure, so three variables with large standard deviations—RAN, recalling sentences, and vocabulary—were rescaled by dividing each by
10. This affects neither the results nor the interpretation.
We first fit an unconditional model (Model 0). Here, we
added a person-specific random effect (r_010j) and an item-specific random effect (r_020i) because we expected random
variation related to each of these variables. Equation 1
shows the structure of Model 0 using Raudenbush and
Bryk’s (2002) multilevel form:
\[
\text{Level-1 } \left(\text{Responses}_{ji}\right): \quad \operatorname{logit}\left(p_{ji}\right) = \lambda_{0ji}
\]

\[
\text{Level-2 } \left(\text{Person}_{j} \ \& \ \text{Item}_{i}\right): \quad \lambda_{0ji} = \gamma_{000} + r_{010j} + r_{020i}, \qquad (1)
\]

\[
r_{010j} \sim N\left(0, \sigma^{2}_{u010}\right), \qquad r_{020i} \sim N\left(0, \sigma^{2}_{u020}\right),
\]
where p_ji is the probability of a correct response from person j on item i, λ_0ji is the logit of the probability of a correct polymorphemic word response from an average person (child) on an average item (word), γ_000 is the intercept representing the mean logit of a correct response, r_010j is the person random effect, and r_020i is the item random effect. Because the outcome is binary, p_ji was assumed to follow
the Bernoulli distribution. Random effects were assumed to
be normally distributed. The unconditional model estimated
the variability associated with persons and items, allowing
us to determine how well the subsequent models explained
this variability.
We next created Model 1, containing only fixed effects
for ERD (γ_010) and TD (γ_020) with LERD as the reference
group. This was the base model for all subsequent analyses
because the groups were established a priori, and group differences would naturally account for considerable variability. With this model and all subsequent models, we also
needed to establish the correct random effects structure.
This procedure is described in Appendix B.
Using Model 1 as a base, we built a series of models.
Model 2 was the item-specific model, containing the item-specific covariates, root word recognition for the full analysis (λ_1), and the familiarity parameter (λ_2) for the
familiarity analysis. Familiarity models are distinguished
from the full analysis models by the addition of an “F” to
the model number. For example, Model 2F referred to the
item-specific model including root word recognition and
word familiarity covariates. Model 3 represents the child
and word covariate model, which included the child reading-related skills (γ_030–γ_0100) and the word characteristics (γ_001–γ_006). Model 4 was the combined model that
included both the item-specific covariates and the child and
word covariates. This combined model allowed us to answer
Research Question 1 and to understand whether children
rely primarily on word-specific knowledge or whether they
use this knowledge in combination with other reading-related skills. Last, Model 5 was an interaction model
including interactions designed to answer Research
Question 2. For the full analysis, we interacted ERD and
TD with MA (γ_0110, γ_0120), PA (γ_0130, γ_0140), and root word family size (γ_013, γ_023) and interacted item-specific root reading with morphological transparency (γ_106). For the familiarity analysis, we also added an interaction between item-specific root reading and item-specific familiarity (λ_3), an interaction of familiarity and transparency (γ_206), and the three-way interaction of item-specific root reading, item-specific familiarity, and transparency (γ_306).
We examined the practical significance of each predictor by calculating the probability of a correct response following this formula:

\[
p_{ji} = \frac{1}{1 + \exp\left(-\left(\lambda_{0} + \gamma_{v}\right)\right)},
\]

where v is the variable of interest. Given that the LERD group was the
reference category and that root word recognition and
familiarity values were not centered, predicted probabilities
represent the probability of correct polymorphemic word
reading response for an average item with average scores
for all word characteristics and for an average child who
was in the LERD group, did not read the root word correctly, and had average scores on all other predictors. In the
familiarity analysis, the probabilities are for cases where the
average child in the LERD group was unfamiliar with the
word. We calculated the reduction in variance using 95%
plausible values ranges and by calculating the reduction in
variance directly. The calculation methods are given in
Appendix C.
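For example, the conversion from logits to predicted probabilities is the inverse-logit transformation; the parameter values below are hypothetical and only illustrate the calculation described above:

```r
inv_logit <- function(x) 1 / (1 + exp(-x))
lambda_0 <- -0.25  # hypothetical intercept logit for the reference (LERD) case
gamma_v  <-  0.80  # hypothetical effect of the predictor of interest
inv_logit(lambda_0)            # baseline probability of a correct response (about .44)
inv_logit(lambda_0 + gamma_v)  # probability when the predictor applies (about .63)
```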
Results
Demographic data on participants from the full analysis
sample (N = 173) and the subsample of children completing
the familiarity task (n = 103) are presented in Table 2. For
the full sample analysis, there was a greater percentage of females than males, and 10 children had been retained a grade. The sample represents the demographics of the local district in terms of the
percentage African American (48.55% sample; 47% district)
and Caucasian (37.57% sample; 35% district) children. The
sample has a lower percentage of Hispanic children compared to the district (2.89% sample; 16% district) due to the
initial sampling requirement across the three cohorts that
children enrolled in first-grade English language learner
instruction be eliminated from the sample. The familiarity
subsample had similar demographic attributes.
Table 3 displays child-level item-specific performance
disaggregated by reading group for the full analysis sample
and the familiarity sample. Across all children and words in
the full analysis sample (N = 5,017), the proportion of correct responses for polymorphemic word recognition was
.61, and the average proportion of correct root word
responses was .80. Children in the TD group recognized far
more polymorphemic words and root words than children in
the LERD and ERD groups, and children in the LERD group
recognized more than those in the ERD group. The item-level correlation between polymorphemic word recognition and root word recognition was r = .41. This item-level correlation might appear low given that both are measures of word reading, but it reflects that item-level scores function quite differently from traditional aggregate word knowledge scores (i.e., zero-order correlations based on list performance), as Nation and Cocksey (2009) observed. For the familiarity analysis, the patterns were nearly identical to those observed for the full analysis. For the additional familiarity data, children said they were familiar with the words in 67% of cases, with TD children reporting greater familiarity than children with LERD and children with LERD reporting greater familiarity than children with ERD.
The correlation between item-level familiarity and polymorphemic word recognition was .27, and the correlation between item-level familiarity and root word recognition was .20. Familiarity thus had much lower correlations with the two word recognition measures than they had with each other (r = .42), likely reflecting that familiarity is a proxy for phonological and semantic lexical knowledge but not orthographic knowledge.

Table 3. Item-Specific Variables in Analyses.

                                        TD          ERD        LERD        All         Correlations (all children)
Variable                                                                                  1       2
Full analysis                           n = 3,161   n = 551    n = 1,305   N = 5,017
  1 Polymorphemic word recognition      .77         .17        .40         .61            —
  2 Root word recognition               .91         .35        .69         .80            .41     —
Familiarity analysis                    n = 1,988   n = 280    n = 616     N = 2,884
  1 Polymorphemic word recognition      .74         .06        .39         .60            —
  2 Root word recognition               .93         .23        .70         .81            .42     —
  3 Familiarity                         .70         .49        .65         .67            .27     .20

Note. TD = typically developing; ERD = early-emerging reading difficulty; LERD = late-emerging reading difficulty.
For child-level characteristics (see Table 4), results show
a clear ordering effect across reading groups, with the TD
group scoring better than the LERD group followed by the
ERD group. Mean comparisons among the TD, ERD, and LERD groups (using Bonferroni post hoc comparisons) indicated that the TD group differed significantly from both the LERD and ERD groups, performing better on all measures in the full analysis and familiarity samples. The LERD group also performed significantly better than the ERD group on all measures in both analyses, except recalling sentences, vocabulary, PA, and working memory.
Table 5 shows correlations for the child-level variables.
As predicted, the child-level correlations between polymorphemic word recognition and root word recognition were
much higher than the item-level correlations (child-level:
r = .90; item-level: r = .41). In the familiarity data, there
were similarly large differences between polymorphemic
word recognition and familiarity correlations at the child
and item levels (child: r = .49; item: r = .27). For a parallel
finding, see Nation, Angell, and Castles (2007). Correlations
between polymorphemic word recognition and the child-level predictors were moderate to high, ranging in magnitude from .33 to .75 in the full analysis sample and from .35
to .77 in the familiarity sample. Relations among the various child-level predictors ranged in magnitude from .27 to
.68 for the full sample and from .23 to .71 for the familiarity
sample.
Descriptive statistics for word characteristics are presented in Table 6 for the full analysis and familiarity samples. Results for the full analysis sample show that the mean
word frequency was 45.09 and the mean OLD was 2.74,
meaning that, on average, nearly three single-character edits separated a polymorphemic word from its 20 nearest orthographic neighbors.
Mean root word family size was 24.48, indicating that the
average root word had about 25 words in its root word family. Suffix frequency was 10.82, referring to the log of a
count of all words containing those suffixes. Mean transparency was .59, with 17 transparent words and 12 words with
shifts. The means were quite similar for the familiarity analysis, as that analysis contained only one fewer word. In terms
of correlations (see Table 7), frequency significantly correlated with suffix frequency (full: r = .40; familiarity: r =
.38), meaning that higher frequency words tended to have
more frequent suffixes. Frequency also correlated with
transparency (full: r = −.46; familiarity: r = −.48), meaning
that higher frequency words tended to be less transparent.
Root word frequency and root word family size were also strongly correlated (full: r = .43; familiarity: r = .58), meaning that more frequent root words in our sample tended to have more morphologically related words. Given that the
correlations among the word variables, like the child variables, were not high enough to cause collinearity concerns,
statistical analysis with the theorized predictor set was
appropriate.
Full Analysis
First, we fit the unconditional model (Model 0) containing
only person and item random effects. The intercept estimate
was λ0 = 0.71, corresponding to a predicted probability of a correct polymorphemic word recognition response of .67 for the average child and the average item. Variability around that average was evident for both children (σ²r010 = 4.235) and items (σ²r001 = 1.860).

The group model (Model 1) with ERD (γ010 = −2.167) and TD (γ020 = 2.498) fixed effects and a random item slope for TD (σ²r003 = 0.144) improved model fit over the unconditional model as expected, Δχ²(3) = 164.61, p < .0001. The random item slope with respect to TD suggests there was greater variability in word difficulty for TD children than for other children. This is predictable, given that the TD group had more varied ability than LERD and ERD groups.

Table 4. Child-Level Variable Performance Delineated by Reading Group.

Full analysis (TD n = 109; ERD n = 19; LERD n = 45; all children N = 173)

Variable     Score type   Min   Max   TD M (SD)        ERD M (SD)       LERD M (SD)      All M (SD)       Group comparisons (a)
Attention    Mean         1     7     4.64 (1.25)      2.31 (0.71)      3.80 (1.18)      4.16 (1.40)      TD > LERD > ERD
RS           Raw          23    92    60.37 (11.84)    44.79 (10.26)    50.27 (13.15)    56.03 (13.34)    TD > LERD = ERD
MA           Mean         5     13    10.85 (1.42)     7.04 (1.79)      8.64 (1.56)      9.85 (2.02)      TD > LERD > ERD
PA           Raw          1     20    13.87 (4.48)     6.58 (3.92)      9.22 (3.70)      11.86 (5.02)     TD > LERD = ERD
OC           Raw          21    40    36.17 (2.37)     26.79 (4.42)     33.39 (3.07)     34.41 (4.08)     TD > LERD > ERD
RAN          Time         21    80    35.46 (8.11)     48.68 (15.82)    41.44 (9.63)     38.47 (10.53)    TD > LERD > ERD
Vocabulary   Raw          103   195   158.49 (17.28)   130.21 (21.69)   138.89 (19.06)   150.28 (21.23)   TD > LERD = ERD
WM           Raw          0     22    12.78 (3.40)     8.84 (4.55)      10.87 (3.62)     11.85 (3.82)     TD > LERD = ERD

Familiarity analysis (TD n = 71; ERD n = 10; LERD n = 22; all children N = 103)

Variable     Score type   Min   Max   TD M (SD)        ERD M (SD)       LERD M (SD)      All M (SD)       Group comparisons (a)
Attention    Mean         1     7     4.46 (1.12)      2.30 (0.68)      3.76 (1.12)      4.10 (1.26)      TD > LERD > ERD
RS           Raw          23    83    59.48 (11.85)    42.50 (10.30)    49.59 (14.19)    55.72 (13.50)    TD > LERD = ERD
MA           Mean         5     13    10.68 (1.31)     6.22 (1.20)      8.35 (1.51)      9.75 (2.01)      TD > LERD > ERD
PA           Raw          1     20    13.48 (4.42)     5.20 (2.49)      9.00 (3.82)      11.72 (4.99)     TD > LERD = ERD
OC           Raw          21    40    36.03 (2.24)     25.30 (3.95)     33.73 (2.83)     34.50 (4.07)     TD > LERD > ERD
RAN          Time         21    80    35.49 (8.38)     51.50 (15.39)    41.50 (8.93)     38.33 (10.50)    TD > LERD > ERD
Vocabulary   Raw          103   193   157.32 (17.59)   123.90 (16.70)   134.91 (17.36)   149.29 (21.25)   TD > LERD = ERD
WM           Raw          0     22    12.76 (3.79)     8.50 (5.10)      10.14 (3.83)     11.79 (4.18)     TD > LERD = ERD

Note. Min = minimum; Max = maximum; TD = typically developing; ERD = early-emerging reading difficulty; LERD = late-emerging reading difficulty; RS = recalling sentences; MA = morphological awareness; PA = phonological awareness; OC = orthographic choice; RAN = rapid automatized naming; WM = working memory. Attention and MA minimum and maximum scores have been rounded to the nearest whole number. For RAN, higher scores indicate longer naming times and thus poorer performance.
(a) Mean comparisons based on ANOVA (p < .05). Full analysis df: TD – LERD = 1, 152; ERD – LERD = 1, 62. Familiarity analysis df: TD – LERD = 1, 92; ERD – LERD = 1, 31.

Table 5. Zero-Order Correlations Between Child Variables.

Variable                                    1         2         3         4         5         6         7         8         9         10        11
1. Polymorphemic word recognition (total)   —         .90***    .50***    .44***    .74***    .59***    .77***    −.46***   .61***    .35***    .49***
2. Root word recognition (total)            .90***    —         .50***    .46***    .71***    .60***    .81***    −.56***   .61***    .39***    .48***
3. Attention (SWAN Rating Scale)            .52***    .51***    —         .36***    .48***    .29**     .53***    −.27**    .45***    .30**     .24*
4. RS (CELF4)                               .43***    .44***    .40***    —         .51***    .39***    .29**     −.31**    .47***    .55***    .28***
5. MA (morphological composite)             .72***    .70***    .53***    .55***    —         .49***    .53***    −.33***   .71***    .37***    .35***
6. PA (CTOPP Elision)                       .60***    .58***    .29***    .37***    .49***    —         .42***    −.37***   .55***    .42***    .31**
7. OC (Olson OC)                            .75***    .79***    .54***    .28***    .52***    .40***    —         −.43***   .43***    .31**     .40***
8. RAN (CTOPP RLN)                          −.51***   −.58***   −.27***   −.30***   −.38***   −.38***   −.43***   —         −.33***   −.31**    −.31***
9. Vocabulary (PPVT4)                       .55***    .56***    .44***    .53***    .68***    .46***    .37***    −.28***   —         .33***    .38***
10. WM (WMTB LR)                            .33***    .37***    .33***    .54***    .38***    .34***    .31***    −.33***   .37***    —         .23
11. Familiarity                             —         —         —         —         —         —         —         —         —         —         —

Note. RS = recalling sentences; CELF4 = Clinical Evaluation of Language Fundamentals (fourth edition); MA = morphological awareness; PA = phonological awareness; CTOPP = Comprehensive Test of Phonological Processing; OC = orthographic choice; RAN = rapid automatized naming; RLN = rapid letter naming; PPVT4 = Peabody Picture Vocabulary Test (fourth edition); WM = working memory; WMTB-LR = Working Memory Test Battery for Children, Listening Recall. Correlations for the full sample analysis (N = 173) appear below the diagonal. Correlations for the familiarity analysis (n = 103) appear above the diagonal.
*p < .05. **p < .01. ***p < .001.

The Model 1 person random variance was 1.474;
this was the variance against which the subsequent models were compared. The 95% plausible values range for persons was .04 to .86, indicating that, for the average word in the sample (r001 = 0), 95% of the probabilities of a child with LERD getting the word correct would fall within this range. The random item variance was 1.323, providing the base item variance for subsequent comparisons. The 95% plausible values range for items was .04 to .88. This indicated that 95% of the probabilities of a correct word response would fall within this range for an average child (r010 = 0) in the LERD group. These ranges indicated considerable person and item variability not related to the group classifications. Thus, we proceeded to answer our research questions.

Table 6. Descriptive Statistics for Words in Analyses.

                                      Full sample analysis (N = 29)    Familiarity analysis (n = 28)
Variable                              M        (SD)                    M        (SD)
Frequency                             45.09    (8.88)                  44.81    (8.91)
OLD                                   2.74     (0.36)                  2.76     (0.36)
Root word family size (log, type)     2.73     (0.97)                  2.72     (0.99)
Root word frequency                   54.37    (8.99)                  55.56    (6.37)
Suffix frequency (log, token)         10.82    (1.14)                  10.77    (1.14)
Transparency                          .59      (.50)                   .61      (.50)

Note. OLD = orthographic Levenshtein distance. Standard frequency index data were taken from The Educator's Word Frequency Guide (Zeno, Ivens, Millard, & Duvvuri, 1995).

Table 7. Correlations Between Word Variables.

Variable                            1        2        3        4        5        6
1 Frequency (SFI)                   —        −.04†    −.02†    .24†     .38*     −.44*
2 OLD                               −.03†    —        −.14†    .15†     .17†     .01†
3 Root word family size             −.03†    −.14†    —        .58**    −.20†    .31†
4 Root word frequency (SFI)         .04†     .03†     .43*     —        −.27†    .29†
5 Suffix frequency (log, token)     .40*     .19†     −.20†    −.33†    —        −.20†
6 Transparency                      −.46*    −.01†    .31†     .36†     −.23†    —

Note. SFI = standard frequency index; OLD = orthographic Levenshtein distance. Correlations for the full sample analysis (N = 29) appear below the diagonal. Correlations for the familiarity analysis (n = 28) appear above the diagonal.
*p < .05. **p < .01. †p > .05.

Research Question 1. The first research question concerned the degree to which the item-specific, child-reading-related skill, and word characteristic covariates predicted polymorphemic word recognition ability. We examined this question with an item-specific model (Model 2), a child and word covariate model (Model 3), and a model that combined the two (Model 4). Results of Models 2, 3, and 4 are shown in Table 8.

Item-specific model (Model 2). The version of this model that best fit the data included random item variability due to item-specific root word recognition (σ²r101 = 0.479) in addition to the random item variability due to TD (σ²r003 = 0.142). The intercept (λ0 = −1.437) indicated that the average child in the LERD group had a predicted probability of a correct response of .19 when reading the root word incorrectly. The model explained 20% of the person variance and 32% of the item variance in the group model, and the 95% plausible values ranges were .03 to .68 for both persons and items, reflecting the similar magnitude of person and item variance (σ²r010 = 1.182; σ²r001 = 1.189). The significant root word recognition effect, λ1 = 1.087, meant that if a child with LERD read the root word correctly, the predicted probability of a correct response would be .41. The effects of group were also significant and reflected that the probability of accurate polymorphemic word recognition was higher for children who were TD than for children with LERD and was lower for children with ERD than for children with LERD, even when accounting for item-specific root word recognition. Specifically, when the root word was not recognized correctly, the probabilities for correct polymorphemic word reading were .70, .19, and .04 for TD, LERD, and ERD groups, respectively.

Child and word covariate model (Model 3). The best fitting child and word covariate model added random effects
Table 8. Full Sample Fixed Effects Estimates (Top) and Variance-Covariance Estimates (Bottom).

[Table 8 reports, for Model 2 (item-specific), Model 3 (person and item), Model 4 (combined), and Model 5 (interaction): fixed effects estimates (Est.), standard errors (SE), and z values for the intercept, group, item-specific, child, word, and interaction covariates; random effects standard deviations for the person and item intercepts and slopes; deviance (parameters); and the percentage reduction in person and item variance.]

Note. Est. = estimate; ERD = early-emerging reading difficulty; TD = typically developing; RS = recalling sentences; MA = morphological awareness; PA = phonological awareness; OC = orthographic choice; RAN = rapid automatized naming; WM = working memory; OLD = orthographic Levenshtein distance. Late-emerging reading difficulty group acted as the referent group for the ERD and TD comparison.
*p < .05.
for MA on items (σ²r003 = 0.019, correlated with the item intercept, r = .20), vocabulary on items (σ²r004 = 0.014), and transparency on persons (σ²r020 = 0.168). The intercept (λ0 = 0.229) indicated a mean probability of a correct response of .56 for the average child with LERD, all other things being equal. In terms of explanatory power, Model 3 reduced both the person and item variance from the group model by 62%.
For child-reading-related skills, we found significant effects for four covariates. Below, we list probabilities for each covariate for an average child in the LERD group who did not recognize the root attempting an average nontransparent polymorphemic word. The MA effect (γ050 = 0.320) indicated that an MA score 1 standard deviation greater than the sample mean—about two additional questions on the MA measures—would increase the probability of a correct response to .71, while an MA score 1 standard deviation less than the mean would decrease the probability to .40. The phonological awareness effect (γ060 = 0.077) indicated that a PA score 1 standard deviation greater than the sample mean would increase the probability of a correct response to .65, while a PA score 1 standard deviation less than the mean would decrease the probability to .46. For OC (γ070 = 0.222), the effect meant that an OC score 1 standard deviation less than the sample mean would mean a .34 probability of a correct response, while an OC score 1 standard deviation greater than the mean would relate to a .76 probability of a correct response. Finally, the RAN coefficient (γ080 = −0.238) indicated that a RAN time 1 standard deviation faster than average (about 28 s) related to a .62 probability of a correct response, compared with .49 for a RAN time 1 standard deviation slower than average (about 49 s). For word characteristics, the effect of frequency was significant, γ001 = 0.101, indicating that the probability of a correct response for an average child with LERD on a word with frequency 1 standard deviation greater than the sample mean—and all else being equal—was .75, as opposed to a probability of .34 for a word 1 standard deviation less than the sample mean. The root word family size effect, γ003 = 0.531, related to a probability of .68 if the log root word family size was 1 standard deviation greater than the sample mean and .43 if 1 standard deviation less than the mean. For transparency, γ006 = 1.991, an average word with a transparent root word had a .74 probability of being read correctly by an average child with LERD, while a word with a shift had a .28 probability of being read correctly.
Combined model (Model 4). Model 4 combined the effects from the prior two models. Model 4 reduced both person and item variance in the group model by 68%. All of the effects from Model 2 and Model 3 remained significant, although the ERD effect (γ010 = 0.017) was no longer significantly different from 0. The only other noteworthy difference was that the effect of transparency (γ006 = 1.362) reflected a probability of a correct response of .49 for transparent words and .21 for those with shifts, smaller than the effect found in the child and word covariate model. This difference makes sense: once item-specific root word recognition is in the model, it absorbs part of the transparency effect, because children tend to read derived words correctly when their relations to the specific root word are transparent. Thus, item-specific knowledge is perhaps more powerful than simple transparency.
Research Question 2. The second research question concerned differences among children with LERD, children with ERD, and TD children in the effects of MA, phonological awareness, and root word family size. In addition, it considered the potential interaction of item-specific root word recognition and transparency (see Table 8, Model 5). The interaction model reduced both person and item variability by 72%. Two significant interaction effects were observed when the interactions were added to the combined model. We found a significant interaction between ERD status and MA, γ0110 = 0.526. We graphed the interaction, depicted in Figure 2, to examine how these variables related. The figure shows the predicted probabilities for children with ERD and LERD with MA scores (number of correct answers) below the mean (M = 9.85). The children in the ERD group appear to have a different relationship between MA and polymorphemic word recognition than children with LERD: children in the ERD group appeared to profit more from MA skill than children in the LERD group. We also observed an interaction between item-specific root word recognition and transparency, γ106 = 0.960. Graphing this interaction (see Figure 3) suggests that the impact of correct root word recognition on polymorphemic word recognition accuracy is greater for transparent words than for words with shifts.
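Plots like Figures 2 and 3 can be produced from a fitted model of the kind sketched earlier by predicting population-level probabilities over a grid of covariate values. The object m5 and the variable names below are the placeholders from that earlier sketch, and the grid assumes centered predictors (so 0 stands in for the sample mean); none of this reproduces the authors' actual plotting code.

```r
# Predicted probability of a correct response across MA for ERD vs. LERD (reference)
newdat <- expand.grid(
  MA = seq(-2, 2, by = 0.5),   # MA in SD units around the mean (assumed centering)
  ERD = c(0, 1), TD = 0,
  root_read = 0, PA = 0, OC = 0, RAN = 0, vocab = 0,
  attention = 0, RS = 0, WM = 0,
  frequency = 0, OLD = 0, family_size = 0, root_freq = 0,
  suffix_freq = 0, transparency = 0
)
# re.form = NA gives population-level predictions (random effects set to zero)
newdat$p <- predict(m5, newdata = newdat, type = "response", re.form = NA)

# One line per group, probability of a correct response on the y-axis
with(newdat, interaction.plot(MA, ERD, p, ylab = "P(correct)", xlab = "MA"))
```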
Familiarity Analysis
For the familiarity analysis, we replicated the full sample Models 0 through 5 but added familiarity as an item-specific covariate (see Table 9). The unconditional model had an intercept estimate of λ0 = 0.598, corresponding to a predicted probability of a correct polymorphemic word recognition response of .65 across persons and items. Variability was evident for both persons (σ²r010 = 3.534) and items (σ²r001 = 2.007). The group model (Model 1F; all familiarity model numbers end with F) with ERD (γ010 = −3.131) and TD (γ020 = 2.265) fixed effects and a random item slope on TD fit better than the unconditional model, Δχ²(3) = 138.61, p < .0001, and better than Model 1F with fixed effects only, Δχ²(1) = 3.726, p = .053. This was considered significant because χ²(1) tests of random variances cannot have values less than 0, and thus p values for these tests should be halved (Bates, 2010). The Model 1F person random effect variance was 0.774, and the item random effect variance was 1.812. These variances indicated that enough
variability was present in the familiarity analysis to proceed to answering the research questions.

Figure 2. The interaction between the type of reader and children’s morphological awareness score and their impact on the probability of a correct polymorphemic word recognition response.

Figure 3. The interaction of item-specific root word recognition and morphological transparency and their impact on the probability of a correct polymorphemic word recognition response.
Research Question 1. Models 2F, 3F, and 4F contained item-specific familiarity along with item-specific root word recognition and child-level and word-level covariates. In the item-specific model (Model 2F), the intercept (λ0 = −1.921) indicated a probability of .13 for a correct response when the root word was read incorrectly and the child was unfamiliar with the word. The familiarity effect (λ2 = 0.839) indicated that being familiar with the word would increase this probability to .25. There was also a random person effect for familiarity (σ²r210 = 0.677), indicating that the impact of familiarity on the probability of a correct response varied across children for a given word.
In the child and word covariate model (Model 3F), the random effects structure was somewhat different from that for the main analysis. Only TD and vocabulary had significant random item variability. The significant predictors were the same, except that a root word frequency effect was present in the familiarity analysis that did not appear in the full analysis, and there was no longer a significant RAN effect. The combined model explained 79% of the person variance and 71% of the item variance in the group model, Model 1F. The intercept (λ0 = −1.285) indicated a mean probability of a correct polymorphemic word recognition response of .22 for the average child in the LERD group who read the root incorrectly and was unfamiliar with the word.
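For the familiarity models, the only change to the earlier model sketch is the addition of an item-specific familiarity indicator (and, in Model 5F, its interactions). As before, the object and variable names are placeholders rather than the authors' code; the random-effects structure loosely mirrors the one described above, with a familiarity slope across children.

```r
# Model 2F: item-specific model with root word recognition and familiarity,
# allowing the effect of familiarity to vary across children
m2f <- glmer(correct ~ ERD + TD + root_read + familiar +
               (1 + familiar | child) + (1 + TD | item),
             data = dat_fam, family = binomial)
summary(m2f)
```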
Table 9. Familiarity Sample Fixed Effects Estimates (Top) and Variance-Covariance Estimates (Bottom).

[Table 9 reports, for Model 2F (item-specific), Model 3F (person and item), Model 4F (combined), and Model 5F (interaction): fixed effects estimates (Est.), standard errors (SE), and z values for the intercept, group, item-specific, child, word, and interaction covariates; random effects standard deviations for the person and item intercepts and slopes; deviance (parameters); and the percentage reduction in person and item variance.]

Note. Est. = estimate; ERD = early-emerging reading difficulty; TD = typically developing; RS = recalling sentences; MA = morphological awareness; PA = phonological awareness; OC = orthographic choice; RAN = rapid automatized naming; WM = working memory; OLD = orthographic Levenshtein distance.
*p < .05. ****p < .10.

Research Question 2. To determine the effects of the interactions, the 10 hypothesized interactions were included. Due to decreased sample size, and therefore decreased power,
we interpret the interaction effects with caution and consider them to be exploratory. The pattern of interactions was different from that in the full model. There was no interaction between ERD and morphological awareness, γ0110 = 0.282, p = .44, in contrast to the full analysis. The effect is in the same direction as the original effect but with a smaller magnitude and a larger standard error. We ran an analysis using the full analysis main effects and interactions but with the familiarity data set to determine whether this would affect the results. The coefficient for the interaction was similar, γ0110 = 0.270, suggesting that the difference reflects the characteristics of the familiarity child subsample. In addition, the much larger standard error for the interaction in the familiarity analysis suggests limited power to detect effects because of the reduced sample size. Therefore, we do not consider this result to contradict the ERD-by-MA interaction in the main analysis.
There was also no interaction between root word recognition and transparency (γ106 = 0.657, p = .21). ERD status did, however, interact with root word family size. The effect suggested that children in the ERD group performed similarly regardless of root word family size, while children in the LERD group performed better on words with large root word families than on words with small ones. There was a marginally significant interaction of root word recognition and familiarity, indicating a synergistic effect: the combined effect of recognizing the root word and being familiar with the word was greater than the effects of the two on their own. This is an interesting effect, but one of questionable reliability.
Discussion
In this study we have argued that the development of polymorphemic word recognition constitutes an important academic competence that allows access to content-specific
semantic information needed to comprehend texts that are
encountered in later elementary grades and beyond (Bryant et al., 1999; Nagy et al., 1989). We maintain that comprehensive models of polymorphemic word recognition are needed to assess the importance of item-specific, child-level, and word-level variables as predictors of item variance. Furthermore, we assert that models such as the one
developed in this study are necessary to begin the search for
potentially malleable factors that can improve the ability of
children, with particular attention to those with RD, to recognize polymorphemic words. We interpret results of the
study within a multisource individual difference model of
polymorphemic word recognition skill spanning item-specific, child-level, and word-level knowledge. Such a perspective allows for a wide range of attributes to
simultaneously affect the likelihood that a given child will
recognize a particular word. Overall, results suggest that
there are multiple sources that explain polymorphemic
word recognition variance including item-specific knowledge, child-level characteristics, and word-level characteristics. Although the design of this study does not allow for
causal inferences, allowing word and child attributes to
compete for variance in the same model provides an opportunity to consider new, and possibly untested, approaches to
effectively teach polymorphemic recognition skills to struggling readers.
Item-Specific Predictors of Polymorphemic Word
Recognition
In this study we considered the effects of two child-level
item-specific predictors on polymorphemic word recognition—root word recognition and word familiarity. We
found that children’s familiarity with a word, considered a
proxy for lexical knowledge (either in the form of semantic or phonological representations, or both), and their
ability to recognize the root word in isolation significantly
predicted individual variation in polymorphemic word
recognition response accuracy. The fact that the correct
reading of the root word contributed to accurate identification of the polymorphemic word is consistent with previous findings in the literature (Goodwin et al., 2013;
Goodwin et al., 2014; Kearns, in press). The link between
root word and polymorphemic word recognition accuracies is readily apparent and speaks to the relevance of the
base morpheme as an important perceptual unit that influences word recognition (see Carlisle & Stone, 2005; Nagy
et al., 2003; Nagy et al., 2006).
The fact that word familiarity acted as a significant
predictor of polymorphemic word recognition, above and
beyond general child-level vocabulary and item-specific
root word reading, is interesting. Nation and Cocksey
(2009) have reported that item-level familiarity was a significant predictor of word reading in young developing
readers, with the association being stronger when words
contained irregular spelling-sound correspondences.
Furthermore, results indicated that deeper semantic
knowledge of a word did not predict word-reading success above and beyond familiarity with the phonological
form. In considering word familiarity as a proxy for the
existence of an intact phonological representation, our
model results for polymorphemic word recognition are
similar to those of Nation and Cocksey in pointing to the
importance of lexical phonology on word recognition.
However, Taylor et al. (2011) have reported that word
learning of an artificial orthography in adults was
enhanced by preexposure to item definitions but not item
lexical phonology. Furthermore, this semantic benefit
was specific to items containing low-frequency, inconsistent vowels.
So while it is not clear what the overall effects of item
familiarity versus item definition are on word recognition
skills and whether these effects vary by age and task, our
results certainly support a growing literature implicating the
role of item-specific lexical knowledge on word reading
skill. For instance, connectionist models of word recognition (see Harm & Seidenberg, 2004; Plaut, McClelland,
Seidenberg, & Patterson, 1996) have shown that the addition of a semantic processor (represented as item-specific
knowledge) to a model containing phonological and orthographic processors improves both nonword and exception
word recognition. Ricketts et al. (2007) also found that
item-specific vocabulary knowledge accounted for unique
variance in exception word reading in developing readers.
Furthermore, having item-specific vocabulary knowledge
for a word has been shown to be a significant predictor of
orthographic learning within a self-teaching model of reading development (Wang, Nickels, Nation, & Castles, 2013).
Keenan and Betjemann (2008) have speculated that itemspecific semantic activation may help to “fill voids” in phonological-orthographic processing in individuals with poor
mappings, such as children with RD (p. 193). We interpret
our results as supporting a developmental word-reading
model in which orthography-to-phonology pathways become at least partially dependent on lexical input, with this
influence increasing as words become more orthographically and morphologically complex (e.g., polymorphemic
words). We argue that further exploration of the link
between item-specific lexical knowledge and word recognition within the context of training is warranted, with specific attention paid to the underlying orthographic and
morphological structure of polymorphemic words.
Child- and Word-Level Predictors of
Polymorphemic Word Recognition
Our results suggest that there are multiple predictors of
polymorphemic word recognition, at both the child and the
word level. Because results from the full and familiarity samples were similar, we focus our discussion of general child- and word-level predictors on the full sample analysis interaction model (Model 5). Model results indicate that, after controlling for isolated root word recognition accuracy, RD status and general child-level cognitive performance on tests measuring morphological and orthographic knowledge were significant predictors of individual differences in item-level polymorphemic word recognition. In terms of RD status, children with LERD were less accurate than the TD group but more accurate than children with ERD. This ordering effect is consistent with speculation that the word-reading deficits of children with LERD may initially be less severe than those of children with ERD and further supports the idea that LERD in word reading may arise as orthographic and morphological demands increase with the need to recognize multisyllabic words (see Catts et al., 2012; Leach et al., 2003).
The finding that MA accounts for variance in polymorphemic word recognition is consistent with other studies that have found MA to be significantly related to word-reading outcomes (Kearns, in press; Carlisle & Katz, 2006;
Carlisle & Stone, 2005; Deacon & Kirby, 2004; Goodwin et
al., 2013; Kirby et al., 2012; Mahony et al., 2000;
McCutchen et al., 2009; Nagy et al., 1989). We found it
remarkable that MA continued to be a significant predictor
of polymorphemic word recognition even when controlling
for root word recognition, root word frequency, and suffix
frequency. As expected, students who demonstrated a
greater understanding and awareness of derivational suffixes on the MA tasks also performed well on polymorphemic word recognition.
Orthographic coding was a second child-level cognitive
process that predicted significant variance in polymorphemic word recognition. Our measure of orthographic coding
(i.e., OC task) required children to choose the correct spelling of a target word when presented with the word (take) and a
pseudohomophone foil (taik; Olson, Forsberg, Wise, &
Rack, 1994; Olson, Wise, Conners, Rack, & Fulker, 1989).
Evidence suggests that OC measures a skill distinct from
phonological decoding, text exposure, and other reading-related skills and that the task is one of the best measures of
orthographic coding skill (e.g., Cunningham et al., 2001;
Hagiliassis, Pratt, & Johnston, 2006). That the orthographic
coding measure remained significant while in the presence
of root word recognition, MA, PA, and RAN suggests that
there may be unique orthographic demands associated with
polymorphemic word recognition (see Catts et al., 2012).
However, it is difficult to directly infer the relationship
between OC and polymorphemic word recognition beyond
the idea that OC taps general orthographic processing skill
and polymorphemic words are orthographically complex
due to increased letter length and the presence of multiple
syllables.
In the interaction model (Model 5) we identified two significant predictors of polymorphemic word recognition performance at the word level: frequency and root word family
frequency. These findings are consistent with other findings
of a significant contribution of word frequency (Yap &
Balota, 2009) and root word family frequency (Carlisle &
Katz, 2006). While we did not identify a main effect of
transparency as previously reported by Goodwin et al.
(2013), we found a significant interaction between children’s root word recognition and root word transparency.
This interaction (see Figure 3) suggests that there is a
greater benefit to reading the root word correctly when the
word is morphologically transparent than when the word
involves a morphological shift. This effect is certainly logical, but it has not been shown in other studies.
Polymorphemic Word Recognition in Students
with ERD and LERD
The value of distinguishing between students who were TD
and those with early- and late-identified RDs was of particular interest in this study. The use of item-based random
effects models allowed us to probe for interactions between
child- and word-level characteristics. While we did not find
the anticipated relationship between PA and reading group,
we did identify the predicted interaction between MA and
RD status. This interaction (see Figure 2) indicates that, all
else being equal, the effect of MA on polymorphemic word
reading is stronger for children with ERD than children
with LERD. So while children with ERD perform lower on
the MA task overall, their relative position on the MA distribution has a stronger relationship with polymorphemic
word recognition compared to children who have LERD.
This finding suggests that children with LERD may have
specific difficulties exploiting morphological knowledge to
aid in recognition of polymorphemic words. This is consistent with thoughts by Catts et al. (2012) and Leach et al.
(2003) that more complex words require advanced morphological knowledge that may not be available to children
with LERD.
An alternative interpretation is that children with LERD
have MA but simply do not use it to read words. One source
of evidence supporting this idea is that the performance of
children in the LERD group on the pseudo-derived word
(i.e., dogless) task correlated strongly with their Peabody
Picture Vocabulary Test (fourth edition) vocabulary scores,
r(45) = .53, p < .001, but had no predictable correlation with
word-reading performance, r(45) = .07, p = .65. For the
overall morphological composite, the correlation with the
Peabody Picture Vocabulary Test (fourth edition) was .60
and with polymorphemic word reading just .35. By contrast, children in the ERD group had a morphological-vocabulary correlation of .63 and a morphological-word
reading correlation of .82. Thus, children with LERD may
simply not use their morphological skills for word reading
tasks, even if these skills are strong.
Looking beyond the correlational design used in the study, it may be that children with LERD
would benefit from explicit training in identifying and using
morphemes when decoding unfamiliar polysyllabic words
and in particular polymorphemic words. This is consonant
with the results of Goodwin and Ahn’s (2010) meta-analysis, which suggested that children with learning disabilities,
reading disabilities, and RD obtained benefits from morphological training. It also aligns with Goodwin and Ahn’s
(2013) meta-analysis, which suggested morphological
training improved decoding (see also Reed, 2008). Before such programs can be advocated, however, research designs are needed that allow causal inferences about the effects of morphological training on polymorphemic word recognition in children with LERD. Thus we cautiously
suggest that the significant RD status by MA finding may
have important implications for earlier identification and
intervention for students who will develop LERD.
Conclusion
In conclusion, this study has identified multiple sources
affecting individual differences in polymorphemic word
recognition among fifth-grade children oversampled for
RD. Results indicate that item-specific root word recognition and word familiarity; child-level RD status, MA, and
OC; word-level frequency and root word family frequency;
and the interactions between MA and RD status and root
word recognition and root transparency predicted individual differences in polymorphemic word recognition item
performance. Results point to the importance of meaning, in terms of both semantics and morphological knowledge, as a predictor of individual differences in polymorphemic word recognition. Thus we conclude that morphological processes do appear to be very important in polymorphemic word recognition. This is especially noteworthy in that we found differential effects of MA on polymorphemic word recognition between children with ERD and LERD. Results suggest that further work examining the role of MA is warranted for earlier identification of and intervention for students who will develop LERD. In
addition, there are other features of polymorphemic words
(e.g., stress assignment, Clin, Wade-Woolley, & Heggie,
2009; variability in vowel pronunciation, Chomsky, 1970;
Elbro, de Jong, Houter, & Nielsen, 2012) that should be
addressed in future studies. With additional research, we
hope that progress can be made to prevent and remediate
word RDs in late-elementary-age children.
Appendix A
List of Polymorphemic Words
intensity
convention
oddity
entirely
flowery
majority
beastly
confusion
finality
masterful
idealize
workable
precision
organist
magician
natural
fearsome
secretive
dependence
confession
security
bucketful
movement
agility
preventive
heavenly
odorous
classical
showy
cultural
Appendix B

Method for Establishing the Random Effects Structure

The steps to establish the random effects structure were (1) adding fixed effects together, (2) adding random slopes with respect to each fixed effect, (3) permitting correlations between random slopes and intercepts, and (4) estimating a model based on the iterations of Steps 1 through 3.

To illustrate the steps, we describe how we did this for Model 1. In Step 1, we included only the typically developing (TD) and early-emerging reading difficulty (ERD) fixed effects, the person random effect, and the item random effect, as follows: Logit(pji) = λ0 + (γ010)ERD + (γ020)TD + r010 + r001. In Step 2, we permitted random effect slopes with respect to the fixed effects. For ERD and TD, as with all child covariates, we examined whether the effect varied randomly across items. We did this separately for the two covariates. For TD, the model was this: Logit(pji) = λ0 + γ010ERD + (γ020 + r002)TD + r010 + r001. For ERD, it was this: Logit(pji) = γ000 + (γ010 + r005)ERD + γ020TD + r010 + r001. The intercept and slope random effects (e.g., r001 and r002 for the TD model) were assumed to have zero covariance. After adding each slope, we conducted a likelihood ratio test to determine whether the random slope improved model fit or whether the simpler model without the effect adequately represented the data. The likelihood ratio test statistic is the difference in deviance between the previously tested simpler model (H0) and a more complex one (Ha). The reference distribution is the χ²(v) distribution, where v represents the difference in degrees of freedom between H0 and Ha. When the more parsimonious model (H0) fit worse, the more complex model (Ha) was considered to fit the data better.

Step 3 applied only when we rejected the null hypothesis in Step 2. If the model with the random effect slope fit better, we dropped the assumption of zero covariance between the random effect intercept and slope. For the TD model, Equation 2 shows the covariance structure of the random item effects:

(r001, r002)′ ~ MN((0, 0)′, Σ), where Σ has diagonal elements σ²r001 and σ²r002 and off-diagonal elements σr001,r002.    (2)

We compared the Step 3 model to the Step 2 model using the likelihood ratio test and the same null hypothesis that the more parsimonious model best fit the data. Step 4 involved merely combining the results of the iterations of Step 2 and Step 3. The Step 4 model was assumed to have the best fit. This multistep procedure follows Bates’s (2011) recommendations and helps ensure the final model provides the best fit for the data.
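For readers who want to see how this stepwise comparison can be carried out in practice, the sketch below uses R's lme4 package, which the study cites. The data set and variable names (dat, correct, child, item, and the 0/1 indicators ERD and TD) are placeholders rather than the authors' code, and the halved p value follows the recommendation for tests of random variances described in the Results.

```r
library(lme4)

# Step 1: fixed effects for group plus crossed person and item intercepts
s1 <- glmer(correct ~ ERD + TD + (1 | child) + (1 | item),
            data = dat, family = binomial)

# Step 2: add a random item slope for TD, assuming zero intercept-slope covariance
# ("||" keeps the random effects uncorrelated; TD must be a numeric 0/1 indicator)
s2 <- glmer(correct ~ ERD + TD + (1 | child) + (1 + TD || item),
            data = dat, family = binomial)
lrt <- anova(s1, s2)                  # likelihood ratio test
0.5 * lrt[2, "Pr(>Chisq)"]            # halve the p value for a variance test

# Step 3: if Step 2 improves fit, allow the intercept-slope covariance
s3 <- glmer(correct ~ ERD + TD + (1 | child) + (1 + TD | item),
            data = dat, family = binomial)
anova(s2, s3)
```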
Appendix C

Methods for Calculating Explanatory Power

The first method was to calculate 95% plausible values ranges. For persons, we used the formula 1 / (1 + exp[−(γ000 + {2 × σ²r010j})]) for the upper bound and 1 / (1 + exp[−(γ000 − {2 × σ²r010j})]) for the lower bound. There was an equivalent formula for items. The range is an index of remaining variability. For persons, the range indicates the range within which 95% of children’s responses would fall, for a given item. As covariates explain variability, the range becomes smaller, so the range allows us to evaluate the explanatory power of the model.

The second method was to calculate the reduction in child and item variance from the base model containing only the typically developing and early-emerging reading difficulty covariates. The formula was (σ²r010(Base model) − σ²r010(Model n)) / σ²r010(Base model) for persons, where n represents the model to which the base model was compared.
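As a numerical illustration of these two calculations, the short R sketch below computes a plausible values range and a proportional variance reduction. The intercept is a placeholder value; the range is formed as the intercept plus or minus two standard deviations of the person random intercept (one common convention, and an assumption about the intended scale of the σ term above); the two variance values are the Model 1 and Model 2 person variances reported in the Results, which reproduce the approximately 20% reduction described there.

```r
inv_logit <- function(eta) 1 / (1 + exp(-eta))   # same as plogis()

# 95% plausible values range for persons (gamma000 is a placeholder intercept)
gamma000  <- -0.75
sd_person <- sqrt(1.474)        # SD of the person random intercept (Model 1 variance)
c(lower = inv_logit(gamma000 - 2 * sd_person),
  upper = inv_logit(gamma000 + 2 * sd_person))

# Proportional reduction in person variance from the group model to Model 2
var_base    <- 1.474            # person variance, group model (Model 1)
var_model_n <- 1.182            # person variance, Model 2
(var_base - var_model_n) / var_base   # about .20, i.e., a 20% reduction
```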
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: This
research was supported in part by Grants R324G060036 and
R305A100034 from the Institute of Education Sciences (IES) in
the U.S. Department of Education and by Core Grant HD15052
from the Eunice Kennedy Shriver National Institute of Child
Health & Human Development (NICHD), all to Vanderbilt
University. The content is solely the responsibility of the authors
and does not necessarily represent the official view of IES or
NICHD.
References
Adams, M. J. (1981). What good is orthographic redundancy?
In O. J. L. Tzeng & H. Singer (Eds.), Perception of print:
Reading research in experimental psychology (pp. 197–221).
Hillsdale, NJ: Erlbaum.
Arciuli, J., Monaghan, P., & Ševa, N. (2010). Learning to assign
lexical stress during reading aloud: Corpus, behavioral,
and computational investigations. Journal of Memory and
Language, 63, 180–196.
Baayen, R. H., Piepenbrock, R., & van Rijn, H. (1993). The
CELEX lexical database [CD-ROM]. Philadelphia, PA:
Linguistic Data Consortium.
Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A.,
Kessler, B., Loftis, B., & Treiman, R. (2007). The English
Lexicon Project. Behavior Research Methods, 39, 445–459.
Bates, D. (2010). lme4: Mixed-effects modeling with R. New York,
NY: Springer.
Bates, D. (2011, June 22). [r-sig-ME] Correlated random effects
in lmer and false convergence. Retrieved from https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q2/003921.html
Bates, D., Maechler, M., & Bolker, B. (2013). lme4: Linear mixed-effects models using S4 classes (R package version 0.999999-0). Retrieved from http://cran.r-project.org/web/packages/lme4/index.html
Berninger, V. W. (1994). The varieties of orthographic knowledge I: Theoretical and developmental issues. Dordrecht, the
Netherlands: Kluwer Academic.
Bowers, P. G. (1995). Tracing symbol naming speed’s unique
contributions to reading disabilities over time. Reading and
Writing, 7(2), 189–216.
Bowers, P. N., Kirby, J. R., & Deacon, S. H. (2010). The effects
of morphological instruction on literacy skills: A systematic
review of the literature. Review of Educational Research,
80(2), 144–179.
Breland, H. M. (1996). Word frequency and word difficulty: A
comparison of counts in four corpora. Psychological Science,
7, 96–99.
Brown, G. D. A. (1984). A frequency count of 190,000 words
in the London-Lund Corpus of English Conversation.
Behavior Research Methods, Instrumentation, & Computers,
16, 502–532.
Bryant, D. P., Ugel, N., Thompson, S., & Hamff, A. (1999).
Instructional strategies for content-area reading instruction.
Intervention in School and Clinic, 34, 293–302.
Carlisle, J. F. (2000). Awareness of the structure and meaning of
morphologically complex words: Impact on reading. Reading
and Writing, 12(3/4), 169–190.
Carlisle, J. F. (2010). Effects of instruction in morphological
awareness on literacy achievement: An integrative review.
Reading Research Quarterly, 45(4), 464–487.
Carlisle, J. F., & Goodwin, A. P. (2013). Morphemes matter: How
morphological knowledge contributes to reading and writing.
In C. A. Stone, E. R. Silliman, B. J. Ehren, & K. Apel (Eds.),
Handbook of language and literacy: Development and disorders (pp. 265–282). New York, NY: Guilford.
Carlisle, J. F., & Katz, L. A. (2006). Effects of word and morpheme familiarity on reading of derived words. Reading and
Writing, 19, 669–693.
Carlisle, J. F., & Nomanbhoy, D. M. (1993). Phonological
and morphological awareness in first graders. Applied
Psycholinguistics, 14(2), 177–195.
Carlisle, J. F., & Stone, C. A. (2005). Exploring the role of morphemes in word reading. Reading Research Quarterly, 40(4),
428–449.
Carlisle, J. F., Stone, C. A., & Katz, L. A. (2001). The effects of
phonological transparency on reading derived words. Annals
of Dyslexia, 51, 249–274.
Catts, H. W., Compton, D. L., Tomblin, J. B., & Bridges, M. S.
(2012). Prevalence and nature of late-emerging poor readers.
Journal of Educational Psychology, 104, 166–181.
Champion, A. (1997). Knowledge of suffixed words: A comparison of reading disabled and nondisabled readers. Annals of
Dyslexia, 47, 29–55.
Chateau, D., & Jared, D. (2003). Spelling-sound consistency
effects in disyllabic word naming. Journal of Memory and
Language, 48(2), 255–280.
Chetail, F., Balota, D., Treiman, R., & Content, A. (in press).
What can megastudies tell us about the orthographic structure of English words? Quarterly Journal of Experimental
Psychology.
Chomsky, C. (1970). Reading, writing, and phonology. Harvard
Educational Review, 40, 287–309.
Clin, E., Wade-Woolley, L., & Heggie, L. (2009). Prosodic sensitivity and morphological awareness in children’s reading.
Journal of Experimental Child Psychology, 104, 197–213.
Collins, L. M., & Wugalter, S. E. (1992). Latent class models
for stage-sequential dynamic latent variables. Multivariate
Behavioral Research, 27, 131–157.
Compton, D. L. (2000). Modeling the growth of decoding skills
in first-grade children. Scientific Studies of Reading, 4(3),
219–259.
Compton, D. L., Fuchs, D., Fuchs, L. S., Bouton, B., Gilbert, J.
K., Barquero, L. A., & Crouch, R. C. (2010). Selecting at-risk
readers in first grade for early intervention: Eliminating false
positives and exploring the promise of a two-stage screening
process. Journal of Educational Psychology, 102, 327–340.
Compton, D. L., Miller, A. C., Elleman, A. M., & Steacy, L. M.
(2014). Have we forsaken reading theory in the name of
“quick fix” interventions for children with reading disability?
Scientific Studies of Reading, 18(1), 55–73.
Connine, C. M., Mullennix, J., Shernoff, E., & Yelen, J. (1990).
Word familiarity and frequency in visual and auditory word
recognition. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 16(6), 1084–1096.
Cunningham, A. E., Perry, K. E., & Stanovich, K. E. (2001).
Converging evidence for the concept of orthographic processing. Reading and Writing: An Interdisciplinary Journal, 14,
549–568.
De Boeck, P. (2008). Random item IRT models. Psychometrika,
73, 533–559.
Deacon, S. H., & Kirby, J. R. (2004). Morphological awareness:
Just “more phonological”? The roles of morphological and
phonological awareness in reading development. Applied
Psycholinguistics, 25(2), 223–238.
Derwing, B. (1976). Morpheme recognition and the learning
of rules for derivational morphology. Canadian Journal of
Linguistics, 21, 38–66.
Elbro, C., & Arnbak, E. (1996). The role of morpheme recognition
and morphological awareness in dyslexia. Annals of Dyslexia,
46, 209–240.
Elbro, C., de Jong, P. F., Houter, D., & Nielsen, A. M. (2012).
From spelling pronunciation to lexical access: A second step
in word decoding? Scientific Studies of Reading, 16, 341–359.
Fitzsimmons, G., & Drieghe, D. (2011). The influence of number
of syllables on word skipping during reading. Psychonomic
Bulletin & Review, 18, 736–741.
Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring
early reading development in first grade: Word identification
fluency versus nonsense word fluency. Exceptional Children,
71(1), 7–21.
Georgiou, G. K., Parrila, R., & Kirby, J. (2006). Rapid naming
speed components and early reading acquisition. Scientific
Studies of Reading, 10(2), 199–220.
Gathercole, S. E., & Pickering, S. J. (2000). Working memory deficits in children with low achievements in the national curriculum at 7 years of age. British Journal of Educational Psychology, 70, 177–194.
Gernsbacher, M. A. (1984). Resolving 20 years of inconsistent interactions between lexical familiarity and orthography, concreteness, and polysemy. Journal of Experimental Psychology: General, 113(2), 256–281.
Gilbert, J. K., Compton, D. L., Fuchs, D., Fuchs, L. S., Bouton, B.,
Barquero, L. A., & Cho, E. (2013). Efficacy of a first-grade
responsiveness-to-intervention prevention model for struggling readers. Reading Research Quarterly, 48, 135–154.
Gilhooly, K. J., & Logie, R. H. (1980). Age-of-acquisition, imagery, concreteness, familiarity, and ambiguity measures for
1,944 words. Behavior Research Methods & Instrumentation,
12, 395–427.
Goodwin, A. P., & Ahn, S. (2010). A meta-analysis of morphological interventions: Effects on literacy achievement of children
with literacy difficulties. Annals of Dyslexia, 60(2), 183–208.
Goodwin, A. P., & Ahn, S. (2013). A meta-analysis of morphological interventions in English: Effects on literacy outcomes
for school-age children. Scientific Studies of Reading, 17(4),
257–285.
Goodwin, A. P., Gilbert, J. K., & Cho, S. (2013). Morphological
contributions to adolescent word reading: An item response
approach. Reading Research Quarterly, 48, 39–60.
Goodwin, A. P., Gilbert, J. K., Cho, S., & Kearns, D. M. (2014).
Probing lexical representations: Simultaneous modeling of
word and reader contributions to multidimensional lexical
representations. Journal of Educational Psychology, 106,
448–468.
Hagiliassis, N., Pratt, C., & Johnston, M. (2006). Orthographic
and phonological processes in reading. Reading and Writing,
19, 235–263.
Harm, M. W., & Seidenberg, M. S. (2004). Computing the
meanings of words in reading: Cooperative division of labor
between visual and phonological processes. Psychological
Review, 111, 662–720. doi:10.1037/0033-295X.111.3.662
Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42, 377–381.
Janssen, R., Schepers, J., & Peres, D. (2004). Models with item
and item group predictors. In P. De Boeck & M. Wilson
(Eds.), Explanatory item response models: A generalized linear and nonlinear approach (pp. 189–212). New York, NY:
Springer.
Jared, D., & Seidenberg, M. S. (1990). Naming multisyllabic words. Journal of Experimental Psychology: Human
Perception and Performance, 16, 92–105.
Jarmulowicz, L. D. (2002). English derivational suffix frequency
and children’s stress judgments. Brain and Language, 81,
192–204.
Kaplan, D. (2008). An overview of Markov chain methods for
the study of stage-sequential developmental processes.
Developmental Psychology, 44, 457–467.
Kearns, D. M. (in press). How elementary-age children read
polysyllabic polymorphemic words. Journal of Educational
Psychology.
Keenan, J. M., & Betjemann, R. S. (2008). Comprehension of single words: The role of semantics in word identification and
reading disability. In E. Grigorenko (Ed.), Single-word reading: Behavioral and biological perspectives (pp. 191–209).
Mahwah, NJ: Lawrence Erlbaum.
Kirby, J. R., Deacon, S. H., Bowers, P. N., Izenberg, L., Wade-Woolley, L., & Parrila, R. (2012). Children’s morphological awareness and reading ability. Reading and Writing, 25,
389–410.
Kirby, J. R., Georgiou, G. K., Martinussen, R., & Parrila, R.
(2010). Naming speed and reading: From prediction to
instruction. Reading Research Quarterly, 45(3), 341–362.
Kirby, J. R., Parrila, R. K., & Pfeiffer, S. L. (2003). Naming speed
and phonological awareness as predictors of reading development. Journal of Educational Psychology, 95, 453–464.
Kučera, H., & Francis, W. N. (1967). Computational analysis of
present-day American English. Sudbury, UK: Dartmouth.
Langeheine, R., & van de Pol, F. (2002). Latent Markov chains. In
J. A. Hagenaars & A. L. McCutcheon (Eds.), Applied latent
class analysis (pp. 304–341). Cambridge, UK: Cambridge
University Press.
Leach, J. M., Scarborough, H. S., & Rescorla, L. (2003). Late-emerging reading disabilities. Journal of Educational
Psychology, 95, 211–224.
Leong, C. K. (1989). Productive knowledge of derivational rules
in poor readers. Annals of Dyslexia, 39, 94–115.
Lesaux, N. K., & Kieffer, M. J. (2010). Exploring sources of reading comprehension difficulties among language minority
learners and their classmates in early adolescence. American
Educational Research Journal, 47, 596–632.
Lipka, O., Lesaux, N. K., & Siegel, L. S. (2006). Retrospective
analyses of the reading development of grade 4 students with
reading disabilities: Risk status and profiles over 5 years.
Journal of Learning Disabilities, 39, 364–378.
Lovett, M. W., Borden, S. L., DeLuca, T., Lacerenza, L., Benson,
N. J., & Brackstone, D. (1994). Testing the core deficits of
developmental dyslexia: Evidence of transfer of learning
after phonologically- and strategy-based reading training programs. Developmental Psychology, 30, 805–822.
Mahony, D., Singson, M., & Mann, V. (2000). Reading ability and
sensitivity to morphological relations. Reading and Writing,
12, 191–218.
McCutchen, D., Green, L., & Abbott, R. D. (2008). Children’s
morphological knowledge: Links to literacy. Reading
Psychology, 29, 289–314.
McCutchen, D., Logan, B., & Biangardi-Orpe, U. (2009). Making
meaning: Children’s sensitivity to morphological information during word reading. Reading Research Quarterly, 44,
360–376.
Metsala, J. L., Stanovich, K. E., & Brown, G. D. A. (1998).
Regularity effects and the phonological deficit model of
reading disabilities: A meta-analytic review. Journal of
Educational Psychology, 90, 279–293.
Miller, A. C., Fuchs, D., Fuchs, L., Compton, D. L., Kearns, D.,
Zhang, W., & Peterson, D. (in press). Behavioral attention:
A longitudinal study of whether and how it influences the
development of word reading and reading comprehension
among at-risk readers. Journal of Research on Educational
Effectiveness.
Muncer, S. J., & Knight, D. C. (2012). The bigram trough hypothesis and the syllable number effect in lexical decision.
Quarterly Journal of Experimental Psychology, 65, 2221–
2230.
Muncer, S. J., Knight, D., & Adams, J. W. (2014). Bigram frequency, number of syllables and morphemes and their
effects on lexical decision and word naming. Journal of
Psycholinguistic Research, 43, 241–254.
Muthén, L. K., & Muthén, B. O. (1998–2010). MPLUS user's guide (6th ed.). Los Angeles, CA: Muthén & Muthén.
Muthén, L. K., & Muthén, B. O. (1998–2012). MPLUS user's guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
Nagy, W., Anderson, R. C., Schommer, M., Scott, J. A., &
Stallman, A. C. (1989). Morphological families in the internal
lexicon. Reading Research Quarterly, 24(3), 262–282.
Nagy, W., Berninger, V. W., & Abbott, R. D. (2006). Contributions
of morphology beyond phonology to literacy outcomes of
upper elementary and middle-school students. Journal of
Educational Psychology, 98(1), 134–147.
Nagy, W., Berninger, V., Abbott, R., Vaughan, K., & Vermeulen,
K. (2003). Relationship of morphology and other language
skills to literacy skills in at-risk second-grade readers and at-risk fourth-grade writers. Journal of Educational Psychology,
95(4), 730–742.
Nation, K., Angell, P., & Castles, A. (2007). Orthographic learning via self-teaching in children learning to read English:
Effects of exposure, durability, and context. Journal of
Experimental Child Psychology, 96, 71–84. doi:10.1016/j.jecp.2006.06.004
Nation, K., & Cocksey, J. (2009). The relationship between knowing a word and reading it aloud in children’s word reading
development. Journal of Experimental Child Psychology,
103, 296–308.
Nation, K., & Snowling, M. J. (2004). Beyond phonological
skills: Broader language skills contribute to the development
of reading. Journal of Research in Reading, 27(4), 342–356.
New, B., Ferrand, L., Pallier, C., & Brysbaert, M. (2006).
Reexamining the word length effect in visual word recognition: New evidence from the English Lexicon Project.
Psychonomic Bulletin & Review, 13, 45–52.
Norton, E. S., & Wolf, M. (2012). Rapid automatized naming
(RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of
Psychology, 63, 427–452.
Nylund, K. L., Asparouhov, T., & Muthén, B. O. (2007).
Deciding on the number of classes in latent class analysis and
growth mixture modeling: A Monte Carlo simulation study.
Structural Equation Modeling, 14(4), 535–569.
Olson, R., Forsberg, H., Wise, B., & Rack, J. (1994). Measurement
of word recognition, orthographic, and phonological skills. In
G. R. Lyon (Ed.), Frames of reference for the assessment of
learning disabilities: New views on measurement issues (pp.
243–278). Baltimore, MD: Brookes.
Olson, R., Kliegl, R., Davidson, B., & Foltz, G. (1985). Individual
and developmental differences in reading disability. In T.
Waller (Ed.), Reading research: Advances in theory and
practice (Vol. 4, pp. 1–64). London, UK: Academic Press.
Olson, R., Wise, B., Conners, F., Rack, J., & Fulker, D. (1989).
Specific deficits in component reading and language skills:
Genetic and environmental influences. Journal of Learning
Disabilities, 22, 339–348.
Perfetti, C., & Stafura, J. (2014). Word knowledge in a theory of
reading comprehension. Scientific Studies of Reading, 18(1),
22–37.
Perfetti, C. A. (1985). Reading ability. New York, NY: Oxford
University Press.
Perfetti, C. A. (1992). The representation problem in reading
acquisition. In P. B. Gough, L. C. Ehri, & R. Treiman (Eds.),
Reading acquisition (pp. 145–174). Hillsdale, NJ: Lawrence
Erlbaum.
Perfetti, C. A., & Hart, L. (2001). The lexical basis of comprehension skill. In D. S. Gorfein (Ed.), On the consequences of
meaning selection: Perspectives on resolving lexical ambiguity (pp. 67–86). Washington, DC: American Psychological
Association.
Perry, C., Ziegler, J. C., & Zorzi, M. (2010). Beyond single syllables: Large-scale modeling of reading aloud with the connectionist dual process (CDP++) model. Cognitive Psychology,
61(2), 106–151.
Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson,
K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains.
Psychological Review, 103(1), 56–115.
Prinzmetal, W., Treiman, R., & Rho, S. H. (1986). How to see a
reading unit. Journal of Memory and Language, 25, 461–475.
R Development Core Team. (2012). R: A language and environment for statistical computing. Vienna, Austria: R Foundation
for Statistical Computing. Retrieved from http://www.R-project.org/
Ramirez, G., Chen, X., Geva, E., & Kiefer, H. (2010).
Morphological awareness in Spanish-speaking English
language learners: Within and cross-language effects on word
reading. Reading and Writing, 23, 337–358.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear
models: Applications and data analysis methods (2nd ed.).
London, UK: Sage.
Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., &
Seidenberg, M. S. (2001). How psychological science informs
the teaching of reading. Psychological Science in the Public
Interest, 2(2), 31–74.
Reboussin, B. A., Reboussin, D. M., Liang, K. L., & Anthony,
J. C. (1998). Latent transition modeling of progression of
health-risk behavior. Multivariate Behavioral Research, 33,
457–478.
Reed, D. K. (2008). A synthesis of morphology interventions
and effects on reading outcomes for students in grades
K-12. Learning Disabilities Research & Practice, 23(1),
36–49.
Reichle, E. D., & Perfetti, C. A. (2003). Morphology in word
identification: A word-experience model that accounts for
morpheme frequency effects. Scientific Studies of Reading,
7, 219–237.
Renaissance Learning. (2014). What kids are reading: The book-reading habits of students in American schools. Retrieved from http://doc.renlearn.com/KMNet/R004101202GH426A.pdf
Ricketts, J., Nation, K., & Bishop, D. V. M. (2007). Vocabulary
is important for some, but not all reading skills. Scientific
Studies of Reading, 11(3), 235–257.
Roberts, T. A., Christo, C., & Shefelbine, J. A. (2011). Word
recognition. In M. L. Kamil, P. D. Pearson, E. B. Moje, &
P. P. Afflerbach (Eds.), Handbook of reading research
(pp. 229–258). New York, NY: Routledge.
Schreuder, R., & Baayen, R. H. (1995). Modeling morphological processing. In L. B. Feldman (Ed.), Morphological
aspects of language processing (pp. 131–154). Hillsdale,
NJ: Erlbaum.
Schreuder, R., & Baayen, R. H. (1997). How complex simplex words can be. Journal of Memory and Language, 37,
118–139.
Seidenberg, M. S. (1987). Sublexical structures in visual word
recognition: Access units or orthographic redundancy? In M.
Coltheart (Ed.), Attention and performance XII: The psychology of reading (pp. 245–263). Hillsdale, NJ: Erlbaum.
Semel, E., Wiig, E. H., & Secord, W. A. (2003). Clinical evaluation of language fundamentals (4th ed.). Bloomington, MN:
Pearson.
Ševa, N., Monaghan, P., & Arciuli, J. (2009). Stressing what is
important: Orthographic cues and lexical stress assignment.
Journal of Neurolinguistics, 22(3), 237–249.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002).
Experimental and quasi-experimental designs for generalized
causal inference. Boston, MA: Houghton Mifflin.
Singson, M., Mahony, D., & Mann, V. (2000). The relation between
reading ability and morphological skills: Evidence from derivational suffixes. Reading and Writing, 12(3/4), 219–252.
Spoehr, K. T., & Smith, E. E. (1973). The role of syllables in perceptual processing. Cognitive Psychology, 5, 71–89.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360–406.
Stanovich, K. E. (1991). Word recognition: Changing perspectives. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P.
D. Pearson (Eds.), Handbook of reading research (Vol. 2,
pp. 418–452). New York, NY: Longman.
Steacy, L. M., Kirby, J. R., Parrila, R., & Compton, D. L. (2014).
Early identification and the double deficit hypothesis: An
examination of group stability over time. Scientific Studies of
Reading, 18, 255–273.
Swanson, J., Schuck, S., Mann, M., Carlson, C., Hartman, K.,
Sergeant, J., & McCleary, R. (2006). Categorical and dimensional definitions and evaluations of symptoms of ADHD: The
SNAP and SWAN rating scales. Retrieved from www.adhd.net
Taft, M. (1979). Lexical access via an orthographic code: The
basic orthographic syllabic structure (BOSS). Journal of
Verbal Learning and Verbal Behavior, 18, 21–39.
Taft, M. (1992). The body of the BOSS: Subsyllabic units in the lexical processing of polysyllabic words. Journal of Experimental
Psychology: Human Perception and Performance, 18, 1004–
1014. doi:10.1037/0096-1523.18.4.1004
Taft, M. (2001). Processing of orthographic structure by adults of
different reading ability. Language and Speech, 44, 351–376.
Taylor, J. S. H., Plunkett, K., & Nation, K. (2011). The influence of consistency, frequency, and semantics on learning to read: An artificial orthography paradigm. Journal
of Experimental Psychology: Learning, Memory, and
Cognition, 37, 60–76.
Torgesen, J. K. (2000). Individual differences in response to early
interventions in reading: The lingering problem of treatment
resisters. Learning Disabilities Research and Practice, 15,
55–64.
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C.
A., Voeller, K. K. S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities:
Immediate and long-term outcomes from two instructional
approaches. Journal of Learning Disabilities, 34(1), 33–58.
Vellutino, F. R. (1979). Dyslexia: Theory and research.
Cambridge, MA: MIT Press.
Vellutino, F. R., Fletcher, J. M., Snowling, M. J., & Scanlon, D. M.
(2004). Specific reading disability (dyslexia): What have we
learned in the past four decades? Journal of Child Psychology
and Psychiatry, 45(1), 2–40.
Venezky, R. L. (1999). The American way of spelling: The structure and origins of American English orthography. New
York, NY: Guilford.
Verhoeven, L., Schreuder, R., & Baayen, H. (2003). Units of
analysis in reading Dutch bisyllabic pseudowords. Scientific
Studies of Reading, 7(3), 255–271.
Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1999).
Comprehensive Test of Phonological Processing. Austin, TX:
PRO-ED.
Wang, H. C., Nickels, L., Nation, K., & Castles, A. (2013).
Predictors of orthographic learning of regular and irregular
words. Scientific Studies of Reading, 17, 369–384.
Wilson, M., & De Boeck, P. (2004). Descriptive and explanatory
item response models. In P. De Boeck & M. Wilson (Eds.),
Explanatory item response models: A generalized linear and
nonlinear approach (pp. 43–74). New York, NY: Springer.
Wilson, M. D. (1988). The MRC psycholinguistic database:
Machine readable dictionary, version 2. Behavior Research Methods, Instruments, & Computers, 20, 6–11.
Windsor, J. (2000). The role of phonological opacity in reading
achievement. Journal of Speech, Language, and Hearing
Research, 43(1), 50–61.
Wolf, M., Bowers, P. G., & Biddle, K. (2000). Naming-speed processes, timing, and reading: A conceptual review. Journal of
Learning Disabilities, 33(4), 387–407.
Woodcock, R. W. (1998). Woodcock Reading Mastery Tests–
Revised/Normative Update: Examiner’s manual. Circle
Pines, MN: American Guidance Service.
Yap, M. J., & Balota, D. A. (2009). Visual word recognition of
multisyllabic words. Journal of Memory and Language, 60,
502–529.
Yarkoni, T., Balota, D., & Yap, M. (2008). Moving beyond
Coltheart’s N: A new measure of orthographic similarity.
Psychonomic Bulletin & Review, 15, 971–979.
Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995).
The educator’s word frequency guide [CD-ROM]. New York,
NY: Touchstone Applied Science Associates.