Journal of Deaf Studies and Deaf Education
Special Section on Literacy
Improving DHH Students’ Grammar Through an
Individualized Software Program
Joanna E. Cannon*,1, Susan R. Easterbrooks2, Phill Gagné2, Jennifer Beal-Alvarez2
1 University of British Columbia
2 Georgia State University
*Correspondence should be sent to Joanna E. Cannon, University of British Columbia, 2125 Main Mall, Vancouver, British Columbia, Canada V6T 1Z4 (e-mail: [email protected]).
Received September 19, 2010; revisions received April 4, 2011; accepted April 13, 2011
© The Author 2011. Published by Oxford University Press. All rights reserved. For Permissions, please email: [email protected]
doi:10.1093/deafed/enr023
Advance Access publication on July 6, 2011
The purpose of this study was to determine if the frequent
use of a targeted, computer software grammar instruction
program, used as an individualized classroom activity, would
influence the comprehension of morphosyntax structures
(determiners, tense, and complementizers) in deaf/hard-of-hearing (DHH) participants who use American Sign Language (ASL). Twenty-six students from an urban day school
for the deaf participated in this study. Two hierarchical linear modeling growth curve analyses showed that the influence of LanguageLinks: Syntax Assessment and Intervention
(LL) resulted in statistically significant gains in participants’
comprehension of morphosyntax structures. Two dependent
t tests revealed statistically significant results between the
pre- and postintervention assessments on the Diagnostic
Evaluation of Language Variation—Norm Referenced. The
daily use of LL increased the morphosyntax comprehension
of the participants in this study and may be a promising
practice for DHH students who use ASL.
Deaf/hard-of-hearing (DHH) students often experience delays in the acquisition of morphosyntax or the
functional rules that govern linguistic units in English
(Ewoldt, 1990; Quigley & Power, 1972; Quigley, Wilbur,
& Montanelli, 1976). Morphosyntax is an essential
building block in the language and reading
development for this low-incidence population. Poor
models of standard English language affect a child’s
acquisition of morphosyntax; these deficits are a problem for DHH students because limited syntactic ability
influences reading comprehension (Kelly, 1996; Power & Quigley, 1973; Quigley & King, 1980; Quigley,
Wilbur, Montanelli, Power, & Steinkamp, 1976).
Reading skills are crucial for DHH students to become
productive citizens; yet, the median reading level of an
18-year-old adult who is DHH has remained around
a 3.9-grade equivalent for the past 30 years (Babbidge,
1965; Commission on Education of the Deaf,
1988; Holt, Traxler, & Allen, 1997; National Agenda,
2005). Therefore, additional intervention research is
needed to address deficits in morphosyntax and
enhance language development for DHH students.
This population may experience many challenges
while learning to read. Their acquisition of a basic foundation of English is fundamental to the task of successfully reading and comprehending text in the classroom.
The components of English (e.g., syntax, semantics,
and pragmatics) are the building blocks of this foundation. When compared to hearing students, DHH students who use American Sign Language (ASL) may
develop language and reading skills differently due to
the bilingual nature of the process and the differences
between the components of English and ASL (Brown &
Brewer, 1996; McNally, Rose, & Quigley, 2007).
Language delays in DHH students often result in
academic deficits during the school years (Holt et al.,
1997). Specifically, DHH students who utilize ASL
may have problems learning English pronominalization because there are only two variations in ASL
(Peyton, 2009). They may also struggle with determiners because in ASL, definite and indefinite determiners are signed with the same handshape, and the
nonmanual markers of location and proximity characterize this functional category (Neidle, Kegl,
MacLaughlin, Bahan, & Lee, 2000). ASL provides
visual detail to represent the morphosyntax that is
either implied or embedded in the main clauses of
English (i.e., gender-specific pronouns, verb tenses,
affixes, and articles; Aarons, Bahan, Kegl, & Neidle,
1992; Neidle et al.). This may result in restricted
models of English syntax for DHH students, which
may cause language delays, difficulty with comprehension of academic content, and academic deficits.
The Functional Category in Language
Acquisition
Chomsky’s Minimalist Program may provide the field
of deaf education with a theoretical basis for English
grammar instruction (Chomsky, 1995, 2005; Fitch,
Hauser, & Chomsky, 2005). Chomsky proposed that
language can be divided into two categories: the lexical
category and the functional category. The lexical category includes nouns, verbs, adjectives, adverbs, and
prepositions and develops across one’s life span. For
example, one might never learn what ‘‘apogee’’ means
but still be considered well educated because the lexical category is seen as an open category that grows
throughout one’s lifetime.
Figure 1 LanguageLinks Functional categories with examples (Wilson, 2008).
The functional category contains a finite set of
word types (see Figure 1). This is considered to be
a closed set because one can never add to it. For example, it is highly unlikely that the English language
will ever acquire a new verb tense. Chomsky (2005)
proposed that if language learners master the components of the functional category, then they can master
the syntax of language.
DHH students who use ASL are at risk for expressing and comprehending nonstandard mainstream
American English (MAE) because the grammatical
structures of ASL differ from English word order
(Lane, Hoffmeister, & Bahan, 1996). Although ASL
is similar to English in how it represents the lexical
category, it organizes the functional category very differently. For example, ASL functional categories, such
as determiners and wh-questions, are represented
through nonmanual markers, such as proximity and
location (Neidle et al., 2000). The variations between
MAE and ASL may cause confusion between the functional categories of English and ASL because English is
not the students’ native language (Bishop, 1983).
Three examples of the functional category include
determiners, tense, and complementizers. Determiners are the head of the sentence and dictate the grammar that follows in the sentence. Determiners modify
nouns and cannot exist outside of a noun phrase
(Moats, 2000; Wilson, Fox, & Pascoe, 2003, 2008).
For example, when starting a sentence with ‘‘the,’’
the next word can be ‘‘boy’’ but it can never be
‘‘he.’’ Tense is associated with verbs and also dictates
the grammar of the entire sentence (Moats; Wilson,
2008). For example, when using the phrase ‘‘jumped,’’
the next word can be ‘‘high’’ or ‘‘over the fence’’ but it
can never be ‘‘will.’’ Complementizers introduce and
characterize complement clauses, which tell the audience more information about the proposition
expressed in a clause (Adger, 2003; Wilson). For example, the complementizers ‘‘if ’’ and ‘‘whether’’ can
be used in ‘‘We asked whether the dinner was ready’’
and ‘‘We wondered if the dinner was ready’’ to let the
audience know that both clauses are interrogative. The
comprehension of these three components of morphosyntax is necessary to increase DHH students' comprehension of English; yet, direct instruction of these
components is absent from most standard school curriculums. Therefore, instructional strategies that focus
on the development of the functional category in
DHH students may be a promising practice for the
field of deaf education.
Instructional Strategies in Deaf Education
Teachers of the deaf have struggled for generations
with determining effective strategies for teaching English language skills to all DHH students and
recognize their relationship to reading success. Research regarding instructional strategies to increase
language acquisition in the field of deaf education is
scarce (Easterbrooks, 1999; Schimmel & Edwards,
2003). However, there are a few best practices supported by research that could be used to address the
problem. One widespread best practice is teaching
language structures through visual representation
(Berent et al., 2006; Conlin & Paivio, 1975;
Easterbrooks, 2010b; Easterbrooks & Baker, 2002;
Moeller, Osberger, & Eccarius, 1986), a sequential approach based on Chomsky’s theory of transformational
generative grammar (Chomsky, 1957; Fitch et al.,
2005), which evolved into the Minimalist Program
(Chomsky, 1995, 2005; Fitch et al.). Visual representation can include scaffolding instruction to teach
grammatical structures in a developmental order using
visual cues (e.g., color coding, key symbols, and signs;
Easterbrooks & Baker). Scaffolding instruction refers
to the practice of combining new knowledge with prior
knowledge to create a higher level understanding of
complex sentence structures (Blackwell, Engen,
Fischgrund, & Zarcadoolas, 1978).
Another research-based instructional strategy that
could be used in classroom activities with DHH students is the use of multimedia tools, which may increase
motivation and attention when completing academic
tasks (Cannon, Fredrick, & Easterbrooks, 2010;
Dangsaart, Naruedomkul, Cercone, & Sirinaovakul,
2008; Massaro & Light, 2004). This increased attention
may be due to the use of imagery, which is a correlate of
good reading instruction for this population and may
support retention and memory ability during academic
tasks (Easterbrooks, 2010b). Motivation and time-on-task are important components of any language intervention program that aims to increase student
achievement through socially valid interventions in
the classroom (National Reading Panel, 2000).
LanguageLinks Intervention Software
These two strategies, scaffolding language instruction and the use of multimedia tools, are combined
in a software program developed for students with
special needs, entitled LanguageLinks: Syntax Assessment and Intervention (LL) by Laureate Learning
Systems, Inc. This software program is based on
Chomsky’s Minimalist Program (Wilson, 2008;
Wilson et al., 2003, 2008). To increase English language acquisition, LL is designed to teach students
grammatical forms such as determiners, tense, and
complementizers from Chomsky’s functional category (see Figure 1; Wilson; Wilson et al.). LL is
a self-paced classroom activity that scaffolds instruction in developmental order by advancing students
through the program depending on their mastery of
each of the six modules within six different levels (Finn,
Futernick, & MacEachern, 2005; Wilson; Wilson et al.).
Students cannot advance to the next module until they
have met the 80% mastery criteria; if they do not meet
the criteria, they are given additional practice items
until the grammatical form is mastered.
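The mastery-based advancement just described can be illustrated with a brief sketch. The 80% criterion and the six-modules-by-six-levels structure are taken from the program description above; the data structures and function are hypothetical and are not Laureate Learning Systems' implementation.

```python
# Hedged sketch of the mastery-based advancement described above.
# The 80% criterion and the 6-module x 6-level structure come from the text;
# the data structures and function are hypothetical, not the vendor's code.

MASTERY_CRITERION = 0.80
MODULES_PER_LEVEL = 6
LEVELS = 6

def advance(state, accuracy):
    """Return the student's next position given accuracy on the current module.

    state: dict with 1-indexed 'level' and 'module'.
    Students who fall below the mastery criterion repeat the module
    with additional practice items.
    """
    if accuracy < MASTERY_CRITERION:
        return state  # more practice on the same module
    if state["module"] < MODULES_PER_LEVEL:
        return {"level": state["level"], "module": state["module"] + 1}
    if state["level"] < LEVELS:
        return {"level": state["level"] + 1, "module": 1}
    return state  # all 36 modules completed

# Example: 85% accuracy at level 1, module 3 advances the student to module 4.
print(advance({"level": 1, "module": 3}, 0.85))
```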
The LL software displays various components of
morphosyntax embedded within printed sentences on
the computer screen. The software then allows the
user to choose one of the two objects viewed on the
computer screen, which are alternately outlined in red
for a visual cue. The correct answer matches the
printed sentence on the bottom of the screen. Each
of the six modules in each of the six levels of the
program reviews different functional categories, allowing the participant to practice the rules of morphosyntax and increase his or her skills through
a scaffolded lesson (Finn et al., 2005; Wilson, 2008;
Wilson et al., 2003, 2008).
Review of Literature
The purpose behind LL is to make language learning
more motivating for struggling learners. By incorporating scaffolded instruction of the functional categories of English in a multimedia format, LL attempts to
gain students’ interest and maintain it throughout the
modules (Wilson, 2008; Wilson et al., 2003, 2008).
Finn et al. (2005) used the LL software program with
22 preschool students with language impairments between the ages of 3 and 4 years for 12 weeks as a supplemental classroom activity. All participants were
assessed prior to the intervention using the Comprehensive Assessment of Spoken Language (CASL;
Carrow-Woolfolk, 1999). Participants were matched
by age and by their Core Composite (CC) on the
CASL and then randomly assigned to the experimental or control group. Three teachers supervised use of
the LL software program for 10- to 15-min sessions,
up to 3 times per week, for the experimental group.
The control group used a software program from the
same publisher that focused on vocabulary and categorization concepts for the same amount of time as the
experimental group. Although both groups made
progress, gains on the CC scores of the CASL, in
terms of test-age equivalents, were greater for the experimental group (8.7 months) than the control group
(5.3 months). Finn et al. noted that implementation of
the intervention by the teacher in a typical classroom
environment enhanced social validity of the study.
Merchant, deVilliers, and Smith (2008) replicated the use of the LL software program and the
vocabulary and categorization software as a control
with five kindergarteners and five first graders who
functioned as hard of hearing through the use of cochlear implants and utilized spoken English as their
primary language and mode of communication. The
researchers recorded the pretest scores (Time 1) of
the participants on vocabulary using the Expressive
One-Word Picture Vocabulary Test (EOWPVT;
Gardner, 1979) and on morphosyntax using the
Diagnostic Evaluation of Language Variation—Norm
Referenced (DELV-NR; Seymour, Roeper, &
deVilliers, 2005) assessments. The experimental and
control groups were matched according to pretest
scores and randomly assigned to either group. Both
groups used the software programs for 20 min 3
times per week for 10 weeks. At the midpoint of
the study, the participants were evaluated using the
EOWPVT and the DELV-NR assessments (Time 2)
and then the groups switched programs so that each
group received the LL software program. At the
conclusion of the study, the participants were evaluated using the EOWPVT and the DELV-NR assessments (Time 3) as posttest data. The results of the
paired t test on Time 3 scores (end of the training)
versus Time 1 scores (beginning of the training)
revealed significant improvement of the participants’
comprehension of morphosyntax. Merchant et al.
reported that the participants’ expressive language
scores increased even though LL provided instruction in receptive language. The researchers also
noted that the software is self-paced, which allowed
teachers to address underlying skills while continuing
to teach required standards.
Whereas previous studies examined the use of the
LL software with hearing participants and DHH children who use the oral/aural method of communication,
the current study extended the research to participants
who utilize ASL as their primary mode of communication. The age range for participants was also increased beyond previous samples of preschool,
kindergarten, and first graders. The research question
was to what extent does the LL software program, as an
individualized classroom activity, affect the comprehension of morphosyntax in DHH students?
Methods
Setting
The setting for this study was an urban day school
for DHH students located in a major metropolitan
area in the United States. The school enrolled students from preschool through 12th grade from 28
counties in and around the area. Each classroom consisted of small group instruction with class sizes
ranging from four to eight students and was equipped with up to five computers. This setting provided
a homogeneous sample of DHH students in that all
the students in this setting used some form of sign
language, as opposed to spoken English, as their
mode of communication and that all school personnel
were reported to be proficient in ASL.
Participants
The inclusion criteria for participation in this study
were as follows: (a) students between 5 and 12 years of
age who are DHH and attend an urban day school for
the deaf; (b) degree of hearing loss of at least 40 dB
aided pure-tone average of the better ear; and (c)
scores at or above the instructional reading rating of
preprimer level based on teacher report of the child’s
most recent Basic Reading Inventory (BRI; Johns,
2008). The BRI scores were obtained from the participants’ records via the classroom teacher because it is
a required assessment at the beginning of every year at
the participating school. Parent permission forms were
sent home to all kindergarten through fifth grade students (n = 45). Thirty-eight forms were returned, and
of those, 27 students met the inclusion criteria. One
participant dropped out of the study due to school
transfer resulting in a total of 26 participants.
A total of eight teachers participated in the study,
two from kindergarten classes, one from a combined
first/second grade class, one from third grade, two
from fourth grade classes, and two from fifth grade
classes. These eight teachers were recruited as
facilitators of the intervention for this study. Inclusion criteria included the following: (a) current
certification in deaf education, (b) classroom teachers
of the current grade level of the participants of the
study, and (c) proficiency in ASL as determined by
the school’s evaluation of their skills prior to
employment.
Materials and Assessments
Materials necessary for the implementation of the intervention included the following: (a) a site license for
22 computers to use the LL software program simultaneously, which was obtained gratis from the publisher,
Laureate Learning Systems, Inc.; (b) 22 computers
equipped with keyboard and mouse controls; (c) participant records of degree of hearing loss, age at the time
of this research study, and his or her most recent BRI
scores; (d) the examiner manual, stimulus manual, and
record forms for each participant from the DELV-NR;
(e) stimulus manual for the subset of the Comprehension
of Written Grammar (CWG; Easterbrooks, 2010a) test
and participant score sheets; and (f) teacher clipboards
that included teacher scripts, weekly checklists, and
digital timers.
Diagnostic Evaluation of Language Variation—Norm
Referenced. The first author assessed language levels
using the DELV-NR, a diagnostic test that can be
used to evaluate students who have linguistic backgrounds that are not MAE (Seymour et al., 2005).
The DELV-NR is based on the theories of Brown
(1973), Bruner (1986), Chomsky (1973, 1977, 1986),
and Labov (1969, 1972), thus making it ideally suited
to assess the functional categories that are the focus
of LL. For a review of theoretical foundations of the
DELV-NR, consult the technical manual (Seymour
et al.). The DELV-NR was administered as a pre- and
posttest to measure comprehension of morphosyntax.
Only the semantic and syntactic subtests of the
DELV-NR were administered for this study.
The syntax subtest consists of three subdomains:
‘‘wh-questions,’’ ‘‘passives,’’ and ‘‘articles’’ to measure
the participants’ comprehension of wh-movement
within sentences, comprehension of passive sentences,
and use of articles (Seymour et al., 2005). Articles are
a component of the demonstrative domain of grammar
and are therefore a measure of the corresponding concept of determiners in the LL program. Passive sentences are a measure of the corresponding concept of
complementizers in the LL program. The semantic
subtest has four subdomains: ‘‘verb contrast,’’ ‘‘preposition contrast,’’ ‘‘quantifiers,’’ and ‘‘fast mapping.’’
These subdomains measure the participants’ ability
to organize vocabulary; contrast verbs and prepositions; comprehend quantifiers; and retain new meanings from the context of a sentence (Seymour et al.).
Verb contrasts are a measure of the corresponding
concept of tense in the LL program. We chose the
syntax and semantic subtests because they most
closely represented the concepts in the LL software
program used as the intervention in this study. The
DELV-NR was chosen as an independent measure of
the skills addressed in the LL program and was previously used as an assessment of morphosyntax by
Merchant et al. (2008). The DELV-NR examiners’
manual reports’ test–retest reliability for the syntax
and semantics domains were .78 and .82, respectively,
across all ages. Internal consistency as measured by
split-half reliability coefficients were .83 for the syntax
domain and .78 for the semantics domain. Overlapping functional categories between the intervention
and assessment (see Figures 1 and 4; Wilson et al.,
2003) in the current study included determiners
(i.e., articles and quantifiers), tense (i.e., negation,
aspect, and agreement), and complementizers (i.e.,
wh-questions). Trained graduate research assistants
conducted interobserver reliability (IOR) for 20% of
the pre- and postassessments of the syntax and semantic subtests of the DELV-NR. IOR was calculated using point-by-point agreement on both the DELV-NR
pretests and posttests.
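Point-by-point agreement of this kind is simply the percentage of items on which the two observers' protocols match. A minimal sketch follows; the function name and the example score lists are illustrative assumptions, not study materials.

```python
# Minimal sketch of point-by-point interobserver agreement (IOR),
# assuming each protocol is a list of item-level scores in the same order.
def point_by_point_agreement(rater_a, rater_b):
    """Percentage of items on which the two raters recorded the same score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("protocols must score the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical item scores from two observers (4 of 5 items match -> 80%).
print(point_by_point_agreement([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))
```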
Optimized Intervention. LL assessment raw scores
were collected using the Optimized Intervention (OI;
Wilson et al., 2003) system during the study. The OI
system is a component of the LL software that tracks
individual participant progress. Weekly student
reports were downloaded and printed for data analysis
and were maintained using the LL software program.
Up to 45 data points were collected for each participant in the study using the OI system because the
intervention took place 5 times per week for 9 weeks.
Each of the raw assessment scores collected was coded
to represent the module and level the participant
completed each day of the intervention.
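One plausible way to collapse a module-and-level position into a single cumulative progress score (the form of dependent variable used in the first growth model below) is sketched here; this particular formula is an assumption for illustration and may not match the coding actually applied to the OI reports.

```python
# Illustrative (assumed) coding of a daily OI record into one progress score:
# count six modules per level, so level 2, module 3 = 9 modules completed.
MODULES_PER_LEVEL = 6

def progress_score(level, module):
    """Cumulative module count across levels (both arguments 1-indexed)."""
    return (level - 1) * MODULES_PER_LEVEL + module

print(progress_score(2, 3))  # -> 9
```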
Comprehension of Written Grammar. A subset of 15
items from an emerging assessment, the CWG
(Easterbrooks, 2010a) test, was also used as an informal
measure of written grammar comprehension. The
subset of CWG was administered by having the participant read an English sentence below three pictures
from which to choose. The participant was instructed
to pick the picture that matched what the sentence
stated. The pictures contained characteristics that required comprehension of the morphosyntax in the
sentence. An example of a sentence assessing tense
was ‘‘The rabbit jumped,’’ where the participants
had to pick between a picture of a rabbit landing on
a table, a rabbit about to jump on a table, and a rabbit
sleeping beside the table.
The 15 grammatical structures were selected
from the entire battery of 30 structures in this emerging assessment because they had overlapping morphosyntax between the LL software program and
the CWG. The first author and three graduate students in deaf education, all proficient in ASL, administered the subset of the CWG. Four probes of the
15 sentences were administered in randomized
order over the course of 2 weeks to establish the
participants’ level of English grammar prior to the
intervention and exposure to their classroom curriculum. Weekly probes of the same 15 test items were
administered in a randomized order during the
9-week intervention to serve as an ongoing assessment of fidelity of the LL software program’s OI
system raw scores. The participants did not receive
any feedback regarding whether their answers were
correct or incorrect, and the test items were randomized to minimize any effect of replication. Coefficient
α for each of the 13 measurement occasions ranged
from .672 (third measurement prior to the LL intervention) to .797 (end of Week 7 of the LL intervention). Although these are not indicative of a high
degree of reliability, they were considered high
enough, given the relatively low sample size and
given the fact that only a subset of the CWG was
utilized in this article.
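Coefficient alpha for a single measurement occasion can be estimated from the participants-by-items matrix of item scores (26 × 15 for each CWG probe here). The sketch below is illustrative only and was not part of the study procedures.

```python
# Hedged sketch of coefficient (Cronbach's) alpha for one CWG probe,
# assuming `scores` is a 2-D array with rows = participants, columns = items.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                       # number of items (15 here)
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Usage (with the real 26 x 15 matrix of 0/1 item scores):
# alpha = cronbach_alpha(item_matrix)
```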
Research Design
Variables. The independent variables in this study
served as predictors of where the participants started
on two hierarchical linear modeling (HLM) growth
curve analyses. These included (a) Degree of Hearing
Loss and (b) Age at the time of this research study.
A second set of independent variables served as predictors of change over time on two HLM growth
curve analyses and included (a) DELV-NR pretest
scores on the syntax and semantic subtests and (b)
the BRI scores from the participants’ records. The
dependent variable for the first model was the LL
software program’s OI report data and for the second
model the data from a subset of the CWG.
Time was the only level-1 predictor in the multilevel growth models. In the first model, Time represented days of LL intervention (one session per
day), and it was coded with 0 as the first day, yielding
a range of 0–44. In the second model, Time represented weeks (one assessment per week), starting at
0 with the first administration of the CWG, yielding
a range of 0–10.5 (0, .5, 1, and 1.5 being the four
measurements in the 2 weeks prior to the LL intervention and 2.5–10.5 running concurrently with the
LL). The other independent variables in this study
served as predictors of the level-1 intercept (i.e., the
participant’s initial score on the dependent variable)
or of the level-1 slope (i.e., the growth rate). The
intercept predictors were Degree of Hearing Loss
and Age at the beginning of the research study.
The independent variables (Degree of Hearing Loss
and Age) were used as predictors of the initial LL
software modules completed and the score of the
CWG initial subset.
Analyses. Two HLM growth curve analyses were
used to examine the influence of the intervention
(LL software program) on the comprehension of morphosyntax by the participants. Both HLM growth
curve analyses were used to measure the same outcome variable at multiple times in order to investigate
a pattern of change over time. Measuring each participant multiple times created nested data structures;
the measurement occasions were nested within the
participants.
Letting Y stand for either of the dependent variables, the level-1 equation was
$$Y_{ti} = \pi_{0i} + \pi_{1i}\,\mathrm{Time}_{t} + e_{ti};$$
the level-2 equations were
$$\pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{Age}_{i} + \beta_{02}\,\mathrm{HearingLoss}_{i} + r_{0i}$$
and
$$\pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{BRI}_{i} + \beta_{12}\,\mathrm{Syntax}_{i} + \beta_{13}\,\mathrm{Semantic}_{i} + r_{1i};$$
and the combined equation was
$$Y_{ti} = \beta_{00} + \beta_{01}\,\mathrm{Age}_{i} + \beta_{02}\,\mathrm{HearingLoss}_{i} + \beta_{10}\,\mathrm{Time}_{t} + \beta_{11}\,\mathrm{BRI}_{i}\,\mathrm{Time}_{t} + \beta_{12}\,\mathrm{Syntax}_{i}\,\mathrm{Time}_{t} + \beta_{13}\,\mathrm{Semantic}_{i}\,\mathrm{Time}_{t} + r_{0i} + r_{1i}\,\mathrm{Time}_{t} + e_{ti}.$$
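For readers who want a concrete specification, the combined equation corresponds to a linear mixed-effects model with a random intercept and a random Time slope per participant. The sketch below uses Python's statsmodels rather than the HLM software used for the analysis; the column names and the synthetic data are assumptions for illustration only.

```python
# Hedged sketch of the combined growth model using statsmodels MixedLM.
# Column names and the synthetic data are assumptions; the study's analysis
# was run with HLM growth-curve software, not this code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, n_sessions = 26, 45
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_students), n_sessions),
    "time": np.tile(np.arange(n_sessions), n_students),  # 0-44 days, as coded above
})
for col, low, high in [("age", 5, 13), ("hearing_loss", 40, 110),
                       ("bri", 0, 6), ("syntax", 5, 25), ("semantic", 10, 30)]:
    df[col] = np.repeat(rng.integers(low, high, n_students), n_sessions)
df["score"] = 1 + 0.4 * df["time"] + rng.normal(0, 1.0, len(df))  # fake outcome

# Age and hearing loss predict the intercept; BRI, syntax, and semantic
# pretest scores predict the slope (hence the interactions with time).
model = smf.mixedlm(
    "score ~ time + age + hearing_loss + time:bri + time:syntax + time:semantic",
    data=df, groups=df["id"], re_formula="~time",
)
print(model.fit().summary())
```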
In addition to the multilevel growth models, two
dependent t tests were conducted on the DELV-NR
pre- and posttest scores on the syntax and semantic
subtests to examine more broadly the influence of the
intervention on the participants’ comprehension of
morphosyntax.
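Each dependent t test compares the same participants' pre- and posttest scores; a minimal sketch with placeholder values follows (the vectors shown are not the study's data).

```python
# Hedged sketch of a dependent (paired) t test on pre/post subtest scores.
# The values below are placeholders; the study used the 26 participants' scores.
from scipy import stats

pre_syntax = [10, 12, 8, 15, 9, 11]
post_syntax = [12, 13, 9, 16, 11, 12]

t_stat, p_value = stats.ttest_rel(pre_syntax, post_syntax)
print(f"t({len(pre_syntax) - 1}) = {t_stat:.3f}, p = {p_value:.3f}")
```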
Procedures
Preintervention. Prior to the intervention, the lead
author conducted three workshop sessions to demonstrate the procedures of the study and how to administer the intervention software program for the eight
teacher participants. In addition, after receiving informed consent from all the teachers, the first author
conducted two individual 30-min training sessions in
the teachers’ classrooms on their computers. The
teachers received clipboards with teacher scripts,
weekly checklists that stated each student participant’s name, and a section for the lead researcher
to fill in the module and level each participant should
be working on each day of the week. Teachers also
received manual timers as a backup for the automatic
timer in the software program in the event that the
software timer malfunctioned.
Intervention. This intervention occurred across
9 weeks, every school day for 10 min per session, with
a total of up to 45 data points recorded for each participant. The sound was muted on the computers during the intervention, and the participants read the
printed sentence on the computer screen and selected
the picture that matched the morphosyntax of the
sentence. The software was self-paced so that each
participant worked individually through each module
(1–6) of each level (1–6) that matched his or her current ability level. The LL software did not require
a set amount of time for each module. For example,
some participants spent 20 min (two sessions) on one
module, whereas others could take up to an hour (10
sessions) on the same module. All participants began
with module 1, level 1, and the OI system individually
advanced the participants to the next module after
mastery of each skill. When a participant was ready
to move to the next level of the program, the teacher
used the weekly checklist to confirm that the participant had completed the six modules and was permitted to move to the next level. The checklists were
updated daily by the first author after inspection of
the OI student reports. The software timers were individually set by the software program for each
10-min session, but the software would stop prior to
10 min if the participant advanced to the next level.
Therefore, the teachers set the manual timers during
each session to ensure fidelity of implementation.
Teacher scripts were used for procedural fidelity.
The first author printed the raw scores for each
participant from the OI student reports weekly and
coded them to represent the module and level the participant completed each day of the intervention. To assess treatment fidelity during the intervention, the first
author compared teachers’ weekly checklists to the student reports from the OI system to monitor how many
times participants logged onto the system per week and
the duration of their sessions. The first author conducted procedural fidelity checks of the classroom intervention procedures during 20% of the intervention
period (approximately 9 times during the 9-week period). Procedural fidelity was assessed using a checklist
(see Figure 2) that followed the teacher script, which the
first author completed during spontaneous visits to the
teachers’ classrooms during the intervention.
Fidelity of intervention and social validity. Three specific settings in the LL software program were
important for fidelity of intervention with the participants (see Figure 3). First, the ‘‘interface’’ option in the
program settings was set to ‘‘scanning.’’ The sentences
voiced on the software program were printed on the
screen so that no captioning of the software was necessary. Directional words were voiced but not printed on
the computer screen. Therefore, the scanning accommodation gave visual cues in place of the directional
words by flashing a red circle alternately around each
answer choice. Second, the ‘‘duration of session’’ option
was set to 10 min. Finally, the ‘‘scans per item’’ setting
option was set at ‘‘infinite’’ so that the participants had
extended time to read the sentence and match it to the
correct object on the screen. This was consistent with
the program’s intent to provide self-paced instruction.
Figure 2 Procedural fidelity checklist.
Additional modifications were made to the computers in the classrooms to give participants visual aids
while completing the daily intervention. Recall that participants selected one of two visuals displayed on the
computer screen. When participants attempted to use
the mouse for selection, the red circle frequently surrounded the alternate visual before participants could
complete their selection, resulting in an unintended answer. To alleviate this dilemma, red circle stickers were
added to the space bars of all 22 computers as a visual
cue to press the space bar for selection of the picture on
the computer screen that was surrounded by a red circle.
Social validity was assessed using a short answer
teacher questionnaire that was distributed a week after
the intervention ended. The social validity of the intervention was enhanced because the intervention was
administered by a teacher in a typical classroom environment and not by the researcher. In addition, a 5-point Likert scale questionnaire was distributed to the
students who served as participants in the study 1
week after the intervention ended. The first author
read the items on the questionnaire to the participants
in ASL. These questionnaires served as an informal
and subjective evaluation (Wolf, 1978) to examine
the social validity of using the intervention in the
classroom setting.
Results
LanguageLinks HLM Growth Curve
With progress through the LL software as the
dependent variable, Time represented days in the
intervention. With Time as the only predictor in the
model, there was statistically significant growth, t(25) = 4.510, p < .001, with participants averaging an increase of 0.4 modules per day. The variance across people in their initial scores (i.e., the intercept variance) was statistically significant, τ00 = 4.116, χ²(25) = 591.933, p < .001, and the variance in the growth coefficients across people (i.e., the slope variance) was statistically significant, τ11 = 0.212, χ²(25) = 6626.581, p < .001.
In the fully conditional model (i.e., the model with
the level-2 predictors), Age, t(23) = 1.867, p > .05,
and Degree of Hearing Loss, t(23) = −0.102, p > .05,
were not statistically significant predictors of where
participants began on the outcome variable. The
DELV-NR presyntax, t(22) = −0.713, p > .05, and
presemantic, t(22) = 1.722, p > .05, subtest scores
were not statistically significant predictors of change
over time for the participants in this study. The results
indicated that the participants' BRI scores, t(22) = 3.522, p < .01, were statistically significant predictors of rate of progress on the LL software program
modules. The variance in the initial scores and in
the growth coefficients remained statistically significant, p < .001, in the conditional model.
Figure 3 Program settings screen shot from LanguageLinks software program.
CWG HLM Growth Curve
With CWG as the outcome variable, Time was measured in weeks, and it was a statistically significant
predictor, t(25) = 4.273, p < .001, with scores
increasing by an average of 0.153 sentence structures
per week. The variance among the initial scores
was statistically significant, τ00 = 8.635, χ²(25) = 505.981, p < .001, as was the variance in the growth
rates, τ11 = 0.0176, χ²(25) = 55.318, p < .01. Degree
of Hearing Loss was not a statistically significant predictor of the initial score on the CWG, t(23) = −0.953, p > .05. Age was a statistically significant
predictor of the initial score, t(23) = 2.183, p < .05,
with an average increase in initial score of 0.577 sentence structures per year of age, holding the other
predictors constant. Scores on the DELV-NR pretest
syntax subtest, t(22) = −0.118, p > .05, and pretest
semantic subtest, t(22) = 1.727, p > .05, were not
statistically significant predictors of change over time
on the CWG scores. The BRI scores were also not
statistically significant predictors of change over time
on the CWG scores, t(22) = 0.029, p > .05. Both the
variance in initial scores and the variance in the
growth rates remained statistically significant, p < .001, after the predictors were entered into the
model.
To isolate the effect of the OI report data on the
CWG scores, a third multilevel growth model was
run, with Time still measured in weeks but beginning at 0 with the last measurement before the OI
report data yielding a range of 0–9. In this model,
Time was not a statistically significant predictor of
CWG scores, t(25) = 0.796, p > .05, with an average
increase in CWG per week of only 0.041 sentences.
The variance in the initial scores, τ00 = 9.848, χ²(23) = 441.635, p < .001, and the variance in the growth
rates, τ11 = 0.0436, χ²(25) = 75.105, p < .001, were
statistically significant. As was the case with the
model that included all 13 measurements of the
CWG, Age significantly predicted initial score,
b = 0.562, t(23) = 2.195, p < .05, but none of the
other predictors of the initial score and none of the predictors of the growth rate were statistically significant
at the .05 level, and the variances remained statistically significant, p < .001.
DELV Results
Pre- and postsyntax subtests. A dependent t test was
conducted to compare the pretest scores of the syntax
subtest of the DELV-NR assessment and the posttest
scores of the syntax subtest of the DELV-NR assessment with alpha set at .05. There was a statistically
significant difference between the scores on the pretest
syntax subtest (M = 10.884, SD = 5.778) and the
posttest syntax subtest (M = 12.769, SD = 4.966),
t(25) = −2.394, p < .05.
Table 1 Social validity participant questionnaire results

Questionnaire item                                 Strongly agree   Agree   Unsure   Disagree   Strongly disagree
1. I enjoyed using LanguageLinks                         22           2       2         0              0
2. I learned a lot using LanguageLinks                   18           1       6         0              1
3. LanguageLinks was fun                                 18           0       8         0              0
4. I make fewer mistakes writing now                     11           2       5         1              7
5. I would use LanguageLinks at home                     12           1       2         1             10
6. I would recommend LanguageLinks to a friend           14           1       5         0              6
Pre- and postsemantic subtests. A dependent t test was
conducted to compare the pre- and posttest scores of
the semantics subtest of the DELV-NR assessment
with alpha set at .05. There was not a statistically
significant difference between the scores on the pretest
semantic subtest (M = 20.384, SD = 7.228) and the
posttest semantic subtest (M = 22.153, SD = 5.661),
t(25) = −2.004, p > .05.
IOR and Procedural Fidelity
IOR was calculated using point-by-point agreement.
The data on both protocols were compared for agreements and disagreements for 20% of the pretest and
posttest sessions (five participants for the pretest and
five participants for the posttest). The DELV-NR pretest point-by-point agreement was 98%, and the
DELV-NR posttest point-by-point agreement was
94%. Weekly procedural fidelity checklists were completed for 20% of the 45 intervention sessions for 9
weeks. The results of the procedural fidelity checklists
indicated that the teachers complied with the scripts
during intervention for 100% of the observed sessions.
Social Validity
Five out of eight teachers who participated in the
study completed the teacher social validity questionnaire, which consisted of five open-ended questions.
The responses varied among the teachers, and results
were mixed regarding the social validity of the multimedia software format. Participant social validity
questionnaires were completed during posttesting,
and all questions were read to the participants in
ASL. There were six questions in the 5-point Likert
scale survey. Results are presented in Table 1.
Discussion
This article investigated whether the frequent use of the
LL software program as an individualized classroom
activity increased the morphosyntax of the DHH participants who use ASL. The results indicated that LL
did increase the morphosyntax of the participants, as
indicated by the significant effect of Time in the HLM
growth curve analysis on the participants’ progression
through the LL modules and levels. These results were
supported by the statistically significant effect of Time
in the HLM growth curve analysis of the subset of the
CWG scores. The subset of the CWG was administered as an additional measure of whether the results
of the progression through the modules were due to the
LL software program or language instruction in the
classroom setting during the course of this research
study. The corroboration of these statistically significant
results indicated that the software program was at least
partially responsible for the increase in morphosyntax
of the participants in this study. These results were also
supported by the statistically significant results of the
dependent t test on the DELV-NR pre- and postsyntax
subtest scores of the participants. Examination of all
three statistically significant results supports the finding
that participants in the study increased their comprehension of morphosyntax due to the duration of time
they spent on the LL software program, which included up to 45 sessions.
HLM Analysis
HLM growth curve analysis is a group design
methodology, which allows the sample size to be small
(n = 20–30 participants) in comparison to other group
design analyses, but the number of measurement occasions for these individuals is large (n = 20–50) and
makes up for the small sample size in the statistical
analysis of the data. HLM growth curve is especially
useful with the DHH population because it allows for
individual differences among the participants and
examines these differences using statistical analysis.
For example, in some single-subject research, participants must have matched characteristics, such as age
range, teacher/classroom placement, IQ, etc. But as
is evident in the current study, HLM growth curve
participants can range in age (5–12 years old), disability
characteristics (degree of hearing loss), classroom placement (kindergarten through fifth grade, with eight different
teachers), pretest scores (DELV-NR), and inclusionary
criteria scores. The reason these areas can vary is because HLM growth curve uses these independent variables to explain which ones are significant predictors of
where the participants begin on the initial intercept of
the growth curve (Age and Degree of Hearing Loss)
and which ones are significant predictors of where participants will fall on the slope of the growth curve
(DELV-NR pretest scores and BRI scores). Predictors
of success in language development for DHH students
include age of onset of hearing loss, degree of hearing
loss, age of amplification, age of intervention, parental
involvement, parental hearing status, and reading level
(Stinson & Kluwin, 2003). For the current study, however, only degree of hearing loss and age at the time of the intervention were chosen as predictors of success for the participants.
This research design allowed analysis of whether
or not the dependent variables (number of LL modules mastered in the first model and CWG scores in
the second model) influenced morphosyntax comprehension, as determined by whether or not the outcome
variables showed significant growth over time. The
overall concept of HLM growth curve analysis is similar to running multiple regression analyses but has the
added benefit of incorporating individualized variables
to explain significant predictors of individual performance and therefore influence future research regarding significant variables (Raudenbush & Chan, 1992).
HLM growth curve analysis allowed for the
inclusion of a larger sample size when compared to
single-subject research while utilizing a similar design
concept. This is important because DHH is a low-incidence disability area with scarce research on effective, evidence-based classroom strategies that increase
language acquisition. The need for additional research
to increase the below-average language scores is vital for
this diverse population (Easterbrooks, 1999; Schimmel
& Edwards, 2003). Yet, funding for research from a federal level is difficult to acquire without group design
research studies (Cawthon, 2007). HLM growth curve
analysis has rarely been used in research with DHH
participants, and this new avenue to incorporate group
design analysis may be beneficial not only for funding of
research but also for a larger sample size of DHH
students to determine evidence-based strategies.
LanguageLinks
The results of the HLM growth curve analysis of the
LL software program showed that participants in the
study progressed through the program’s six modules
within the six levels at an average increase of 0.4 modules per time period. Each module allowed for practice
and assessment of one of three concepts of morphosyntax (determiners, tense, and/or complementizers).
Therefore, participants either reviewed or initially
practiced four tenths of a lesson on one of these concepts of morphosyntax within a 10-min session on
a daily basis, which means they could cover two full
lessons in a 5-day period. The speed of acquisition
and/or reinforcement of these morphosyntax concepts
suggests that this program may be a promising evidence-based, individualized classroom strategy for DHH participants who use ASL.
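As a simple check of the figures reported here:
$$0.4\ \tfrac{\text{modules}}{\text{session}} \times 5\ \tfrac{\text{sessions}}{\text{week}} = 2\ \tfrac{\text{modules}}{\text{week}},$$
that is, roughly two full lessons across a 5-day school week.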
Prior research on learning outcomes in general education has identified that the amount of time a student
is actively engaged in learning (time-on-task) is an important factor for teachers to consider when planning
instruction (Anderson, 1984). The LL software program encourages students to spend time on the task
of learning language. The multimedia component of
the program also allows students an alternate language-learning format to classroom instruction. The
LL software did provide them with a multimedia approach to learning language and may have accounted
for the pace at which they progressed through the program. Given the fact that some DHH students plateau
in language at a very early age (Holt et al., 1997), the
result that they were continuing to advance through the
program at a statistically significant rate indicates that
the LL software may be an available option in supporting time-on-task. Davey, LaSasso, and MacReady
(1983) found that teachers of the deaf spent very little
time-on-task in reading instruction, and there is minimal information available regarding the amount of time
on task that teachers spend on language instruction. The
LL software program might increase the amount of
time students could spend on direct instruction of morphosyntax if they were given a choice between the software program and the typical paper and pencil tasks.
The participants’ scores on the BRI were statistically
significant predictors of their ability to progress through
the modules and levels of the LL software program. In
previous studies of this software, the participants were
able to listen to the prompt and could have ignored the
printed sentence (Finn et al., 2005). In this study, the
participants did not have access to the voiced prompts so
they tended to pay more attention to the printed sentences, which may account for why the BRI scores were
predictive of progress. Therefore, the participants used
their reading ability to comprehend the printed sentences and the directions in the software program on the
screen. Prior researchers (Allen, 1986; Ewoldt, 1990;
Kelly, 1996; Schirmer & McGough, 2005) support the
finding that participants with more complex vocabulary
knowledge were more successful at applying the rules of
morphosyntax when reading sentence prompts.
Comprehension of Written Grammar
The results of the subset of the CWG showed that
participants were able to add an average of .153 sentence
structures per week to their knowledge pool of morphosyntax from the beginning of baseline through the end
of the intervention. These findings indicate that the
participants were able to transfer their prior knowledge
and knowledge of morphosyntax from the intervention
to this assessment. Although the increase in comprehension of structures appeared small (i.e., 1.989 sentence
structures from the beginning of baseline until the end
of the study), this increase was statistically significant. This is important because there are no other
research studies available that document rates of morphosyntax comprehension in DHH children. However,
Kyle and Harris (2010) examined the developmental
trajectory of reading ability in DHH children over
a 3-year period and found that early vocabulary levels
were predictive of later vocabulary development. Additionally, the children in their sample averaged a 0.3-grade-level improvement per
school year in reading levels. Although 1.989 sentence
structures in the current study may appear on the surface to be minimal progress, it is consistent with what
we know about the rate of progress in the literacy of
DHH students (Marschark & Wauters, 2008).
Age was the only statistically significant level-2
predictor variable for this HLM growth curve analysis
and therefore appears to have influenced the ability of
the participants to transfer their knowledge from the
intervention to the assessment. The selection of the subset of sentences used in the CWG may have influenced
the results of this HLM analysis as they were chosen to
match the structures from the LL software program.
The high level of vocabulary in the CWG sentences
may have influenced the results because they may have
affected the participant’s reading comprehension of the
sentences. This is consistent with research showing that
grammar ability correlates highly with reading ability
(Macaluso, 2007; Verhoeven & Van Leeuwe, 2008).
Diagnostic Evaluation of Language Variation
The statistically significant gains between the DELV-NR pre- and postsyntax subtest scores indicated that
the LL software program may be a useful individualized
classroom activity for DHH students in the age ranges
of 5–12 years old. Additionally, the DELV-NR syntax
subtest may be a useful tool in assessing knowledge of
morphosyntax for this population. Presently, no published research exists on using the DELV-NR with
DHH students who use ASL as their primary mode
of communication. This assessment may be useful with
these students because the morphosyntax of their first
language, ASL, does not follow MAE (Lane et al.,
1996). The assessment of MAE may be a beneficial tool
for both hearing and DHH students.
The difference between DELV-NR pre- and
postsemantic subtest scores was not statistically significant. This may indicate that the four subdomains
of the assessment did not overlap to the extent originally estimated when constructing the research
design of this study. The verb contrasts present in
the DELV-NR assessment were a measure of the
corresponding concept of tense in the LL software
program. The subdomains of preposition contrast,
quantifiers, and fast mapping were not overtly
taught in the LL software program (see Figure 4).
These subdomains measure the participants’ ability
to organize vocabulary, contrast prepositions, comprehend quantifiers, and retain new meanings from
the context of a sentence (Seymour et al., 2005).
Although some of these concepts are necessary for
comprehension of the sentence structures throughout the modules and levels of the LL program, the
lack of direct instruction in particular modules may
account for the lack of gain between pre- and posttest scores. Including all four subdomains of the
semantics subtest in this study may have washed
out the effects of the one subdomain that overlapped
with the LL software. Another implication of this
finding is that in this context, morphosyntax learning did not transfer to semantic learning. Further
study is needed to address the complex relationship
between semantics and morphosyntax in DHH
readers.
Figure 4 LanguageLinks modules and levels (Wilson, 2008).
Social Validity
Upon completion of the intervention, teachers filled
out a questionnaire to assess the social validity of
using this intervention in a classroom setting as an
individualized activity. The teachers’ responses were
mixed and appeared to be dependent upon the grade
level of the students participating in the study. Although not all grade level teachers completed a questionnaire, which restricts the interpretation of the
results, it appeared that the fifth grade classroom
teachers felt the program was not as socially valid
for their students. This may be due to their students’
age and may be an indication that some of the program's activities, pacing, and feedback are better
designed for younger students.
The two kindergarten teachers reported that the
program was too difficult for their students without
direct teacher instruction of the concepts and suggested
that the intervention might work better as a supplemental activity after direct instruction. Additionally, both
kindergarten teachers felt that their students’ limited
skills in reading and vocabulary knowledge may have
impeded their ability to comprehend all the sentences
presented in print on the computer screen. They also
commented that if they used the software in the future,
they would prefer to read the sentences to the students
using ASL. Had this been done, it would have added
a confounding factor to the study as the grammars of
English and ASL differ greatly.
The second, third, and fourth grade teachers who
responded to the questionnaires and gave verbal feedback to the first author expressed positive comments
regarding motivation and ease of implementation of
the intervention. They also commented that using this
software program in tandem with classroom instruction of the morphosyntax concepts might have been
more beneficial to the students. All teachers expressed
frustration that they were not able to read the sentences to the students using ASL or explain the morphosyntax concepts to the students while they were
completing the research study. However, the point of
the study was to determine whether the LL software
product in and of itself was of benefit to the participants in the study.
Participant Questionnaires
The student questionnaires were designed to assess
the social validity of the intervention because previous
researchers found that multimedia tools could be a best
practice for students (Cannon et al., 2010; Dangsaart
et al., 2008; Massaro & Light, 2004). Socially valid
interventions may be key factors to academic success
for direct instruction of morphosyntax (Pipp-Siegel,
Sedey, VanLeeuwen, & Yoshinaga-Itano, 2003), and
investigating this aspect of the intervention was an
additional, informal component of the current study.
Based on positive responses, it appeared that up to half
of the participants enjoyed using the LL software
program and viewed it as a fun activity.
Conclusions
The results of this article add to the minimal literature
base on the effectiveness of using a software program to
increase the comprehension of morphosyntax of DHH
students who use ASL. The LL software program
appears to be an effective individualized classroom activity to increase morphosyntax for this population.
The optimal age range for DHH students to use this
software appears to be first through fourth grade. The
intervention seems socially valid to certain participants,
but further study of the link between student participation and the multimedia format of this software is
needed before this conclusion can be asserted. However, certain limitations of this study are evident.
First, the timing component of the software was
a problem because it constricted the ease of implementation of the intervention. Individual timers had
to be used to monitor the length of the sessions because the LL software would exit the participants
and the timer would turn off if they completed the
sixth module of a level within the 10-min session.
Future research may use the new web-based version
of the LL program that will automatically move participants between modules and levels. This new component would make the limitation of the software’s
timing problems obsolete and would increase ease of
implementation for classroom teachers. Next, teachers expressed concerns about the lack of direct instruction. Additional research is needed to determine
whether progress beyond the 0.4 modules per session
could be increased in the presence of a teacher to
mediate instruction. Because time-on-task influences
progress, another area of research might be to
investigate whether students would choose the
computer-based task over a different format of language instruction.
Another limitation of the current study was the
use of the full semantic subtest of the DELV-NR
because not all the concepts overlapped with the
intervention and may have influenced the results
of the dependent t test. Additionally, we have no
prior research on the use of the DELV-NR with this
population. Even though the DELV-NR was
previously used with DHH participants who use
the oral/aural method, the lack of assessment with
the population used in this study was a limitation
when interpreting the current results. Future
research should include using the DELV-NR syntax
and semantic subtests as assessments on a larger
pool of DHH participants within the same age range
(5–12 years old) who utilize ASL as their primary
mode of communication.
In summary, further research on evidence-based
practices for DHH students is vital for the field of
deaf education and should consider the above suggestions. Innovative research designs to eliminate some
of the challenges of conducting research with low-incidence disability groups should be investigated in
future studies. The daily use of LL increased the
comprehension of morphosyntax of the participants
in this study and may be a promising practice for
DHH students who use ASL.
Conflicts of Interest
No conflicts of interest were reported.
References
Aarons, D., Bahan, B., Kegl, J., & Neidle, C. (1992). Clausal
structure and a tier for grammatical marking in American
Sign Language. Nordic Journal of Linguistics, 15, 103–142.
doi:10.1017/S0332586500002560.
Adger, D. (2003). Core syntax: A minimalist approach. New York,
NY: Oxford University Press.
Allen, T. (1986). Patterns of academic achievement among hearing-impaired students: 1974 and 1983. In A. Schildroth &
M. Karchmer (Eds.), Deaf children in America (pp. 161–206).
San Diego, CA: College-Hill.
Anderson, L. W. (1984). Time and school learning: Theory,
research, and practice. London, UK: Croom Helm.
Babbidge, H. D. (1965). Education of the Deaf. A Report to the
Secretary of Health, Education, and Welfare by his Advisory
Committee on the Education of the Deaf. Washington, DC:
Government Printing Office.
Berent, G., Kelly, R., Aldersley, S., Schmitz, K., Khalsa, B.,
Panara, J., & Keenan, S. (2006). Focus-on-form instructional methods promote deaf college students’ improvement
in English grammar. Journal of Deaf Studies and Deaf
Education, 12, 9–24. doi:10.1093/deafed/enl009.
Bishop, D. (1983). Comprehension of English syntax by profoundly deaf children. Journal of Child Psychology &
Psychiatry & Allied Disciplines, 24, 415–434. doi:10.1111/
j.1469-7610.1983.tb00118.x.
Blackwell, P. M., Engen, E., Fischgrund, J., & Zarcadoolas, C.
(1978). Sentences and other systems: A language and learning
curriculum for hearing-impaired children. Washington, DC:
Alexander Graham Bell Association for the Deaf.
Brown, P. M., & Brewer, L. C. (1996). Cognitive processes of deaf
and hearing skilled and less skilled readers. Journal of Deaf
Studies and Deaf Education, 1, 263–270. Retrieved from http://jdsde.oxfordjournals.org/content/1/4/263.full.pdf+html.
Brown, R. (1973). A first language: The early stages. Cambridge,
MA: Harvard University Press.
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA:
Harvard University Press.
Cannon, J. E., Fredrick, L. D., & Easterbrooks, S. R. (2010).
Vocabulary instruction through books read in ASL in English
language learners with hearing loss. Communication Disorders
Quarterly, 31, 98–112. doi:10.1177/1525740109332832.
Carrow-Woolfolk, E. (1999). CASL: Comprehensive Assessment of
Spoken Language. Circle Pines, MN: American Guidance
Service.
Cawthon, S. (2007). Hidden benefits and unintended consequences of No Child Left Behind policies for students who are
Deaf or hard of hearing. American Educational Research
Journal, 44, 460–492. doi:10.3102/0002831207306760.
Chomsky, N. (1957). Syntactic structures. The Hague, The
Netherlands: Mouton & Co.
Chomsky, N. (1973). Conditions on transformation. In
S. R. Anderson & P. Kiparsky (Eds.), A Festschrift for Morris
Halle (pp. 232–286). New York, NY: Holt, Rinehart, and
Winston.
Chomsky, N. (1977). On wh-movement. In P. W. Culicover,
T. Wasow, & A. Akmajian (Eds.), Formal syntax (pp. 71–132).
New York, NY: Academic Press.
Chomsky, N. (1986). Barriers. Cambridge, MA: MIT Press.
Chomsky, N. (1995). The Minimalist Program. Cambridge, MA:
MIT Press.
Chomsky, N. (2005). Three factors in language design. Linguistic
Inquiry, 36, 1–22. doi:10.1162/0024389052993655.
Commission on the Education for the Deaf. (1988). Toward
equality. Washington, DC: Government Printing Office.
Conlin, D., & Paivio, A. (1975). The associative learning of the
deaf: The effects of word imagery and signability. Memory
and Cognition, 3, 333–340. Retrieved from http://
www.springerlink.com/content/40662816851423g6/fulltext.
pdf.
Dangsaart, S., Naruedomkul, K., Cercone, N., & Sirinaovakul, B.
(2008). Intelligent Thai text—Thai sign translation for language learning. Computers & Education, 51, 1125–1141.
doi:10.1016/j.compedu.2007.11.008.
Davey, B., LaSasso, C., & MacReady, G. (1983). Comparison of
reading comprehension task performance for Deaf and
hearing readers. Journal of Speech and Hearing Research,
26, 622–628. Retrieved from http://jslhr.highwire.org/
cgi/content/abstract/26/4/622.
Easterbrooks, S. R. (1999). Improving practices for students
who are deaf and hard of hearing. Exceptional Children,
65, 537–554. Retrieved from http://www.freepatentsonline.
com/article/Exceptional-Children/55124408.html.
Easterbrooks, S. R. (2010a). Comprehension of written grammar.
Atlanta, GA: Department of Educational Psychology and Special Education, Georgia State University. (Unpublished assessment).
Easterbrooks, S. R. (2010b). Evidence-based curricula and
practices that support development of reading skills. In
M. Marschark & P. E. Spencer (Eds.), The Oxford handbook
of deaf studies, language, and education (Vol. 2, pp. 111–126).
New York, NY: The Oxford University Press.
Easterbrooks, S. R., & Baker, S. (2002). Language learning in
children who are deaf and hard of hearing: Multiple pathways.
Boston, MA: Allyn & Bacon.
Ewoldt, C. (1990). The early literacy development of deaf children. In D. Moores & K. Meadow-Orlans (Eds.), Educational and developmental aspects of deafness (pp. 85–114).
Washington, DC: Gallaudet University Press.
Finn, D., Futernick, A., & MacEachern, S. (2005). Efficacy of
language intervention software in preschool classrooms. Paper
presented at the annual meeting of the American Speech-Language-Hearing Association, San Diego, November 2005.
Fitch, W., Hauser, M., & Chomsky, N. (2005). The evolution of the language faculty: Clarifications and implications. Cognition, 97, 179–210. Retrieved from http://
bendyglu.domainepublic.net/archives/chomsky%20reader/
FitchHauserChomksyLangFacCog.pdf.
Gardner, M. F. (1979). Expressive One-Word Picture Vocabulary
Test. Novato, CA: Academic Therapy Publications.
Holt, J. A., Traxler, C. B., & Allen, T. E. (1997). Interpreting the
scores: A user’s guide to the 9th Edition Stanford Achievement
Test for educators of deaf and hard-of-hearing students. Gallaudet
Research Institute Technical Report 97-1. Washington, DC:
Gallaudet University.
Johns, J. L. (2008). Basic Reading Inventory: Pre-primer through
grade twelve and early literacy assessments. Dubuque, IA:
Kendall Hunt.
Kelly, L. (1996). The interaction of syntactic competence and
vocabulary during reading by deaf students. Journal of Deaf
Studies and Deaf Education, 1, 75–90. Retrieved from http://
jdsde.oxfordjournals.org/content/1/1/75.full.pdf+html.
Kyle, F., & Harris, M. (2010). Predictors of reading in deaf
children: A three-year longitudinal study. Journal of Experimental Child Psychology, 107, 229–243. doi:10.1016/
j.jecp.2010.04.011.
Labov, W. (1969). Contraction, deletion, and inherent variability
of the English copula. Language, 45, 715–762. Retrieved
from http://www.jstor.org/stable/412333.
Labov, W. (1972). Language in the inner city: Studies in the Black
English vernacular. Philadelphia, PA: University of Pennsylvania Press.
Lane, H., Hoffmeister, R., & Bahan, B. (1996). Journey into the
deaf-world. San Diego, CA: Dawn Sign Press.
Macaluso, R. (2007). Through the looking glass: Grammatical skills
as a critical component to successful reading comprehension.
Dissertation Abstracts International Section A: Humanities and
Social Sciences, 68, 468. Retrieved from http://www.proquest.
com/en-US/catalogs/databases/detail/dai.shtml.
Marschark, M., & Wauters, L. (2008). Language comprehension
and learning by deaf students. In M. Marschark, P. Hauser
(Eds.), Deaf cognition: Foundations and outcomes
(pp. 309–350). New York, NY: Oxford University Press.
Massaro, D. W., & Light, J. (2004). Improving the vocabulary
of children with hearing loss. The Volta Review, 104,
141–174. Retrieved from www.cinahl.com/cgi-bin/
refsvc?jid=1206&accno=2005037710.
McNally, P., Rose, S., & Quigley, S. (2007). Reading practices
with deaf children. Austin, TX: PRO-ED.
Merchant, G., deVilliers, J. G., & Smith, S. (2008). Optimized
intervention software benefits grammar skills in young oral deaf
children. Presentation Session at the Annual National Conference of the Council for Exceptional Children, Boston,
MA.
Moats, L. C. (2000). Speech to print: Language essentials for
teachers. Baltimore, MD: Brookes Publishing Company.
Moeller, M., Osberger, M., & Eccarius, M. (1986). Language
and learning skills of hearing-impaired students. In M.
J. Osberger (Ed.), Receptive language skills (pp. 41–53).
ASHA Monographs, 23. Rockville, MD: American Speech-Language-Hearing Association. Retrieved from
http://www.asha.org/uploadedFiles/publications/archive/
Monographs23.pdf.
National Agenda. (2005). National Deaf Education Project Now.
Retrieved from http://www.ndepnow.org/.
National Reading Panel. (2000). Report of the National Reading
Panel. Washington, DC: National Institute of Child Health
and Human Development. Retrieved from http://
www.nichd.nih.gov/publications/nrp/report.cfm.
Neidle, C., Kegl, J., MacLaughlin, D., Bahan, B., & Lee, R. G.
(2000). The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Peyton, T. (2009). Does ASL really just have two grammatical
persons? Sign Language Studies, 9, 166–210. doi:10.1353/
sls.0.0010.
Pipp-Siegel, S., Sedey, A., VanLeeuwen, A., & Yoshinaga-Itano, C.
(2003). Mastery motivation and expressive language in young
children with hearing loss. Journal of Deaf Studies and Deaf
Education, 8, 133–145. doi:10.1093/deafed/eng008.
Power, D. J., & Quigley, S. P. (1973). Deaf children’s acquisition
of the passive voice. Journal of Speech and Hearing Research,
16, 5–11. Retrieved from http://jslhr.highwire.org/cgi/
content/abstract/16/1/5.
Quigley, S. P., & King, C. M. (1980). Syntactic performance of
hearing impaired and normal hearing individuals. Applied Psycholinguistics, 1, 329–356. doi:10.1017/S0142716400000990.
Quigley, S. P., & Power, D. J. (1972). The development of syntactic
structures in the language of deaf children. Urbana, IL: Institute for Research on Exceptional Children.
Quigley, S. P., Wilbur, R. B., & Montanelli, D. S. (1976). Complement structures in the language of deaf students. Journal
of Speech and Hearing Research, 19, 448–457. Retrieved
from http://jslhr.highwire.org/cgi/content/abstract/19/
3/448.
Quigley, S. P., Wilbur, R. B., Montanelli, D. S., Power, D. J., &
Steinkamp, M. (1976). Syntactic structures in the language
of deaf children. Urbana, IL: University of Illinois at
Urbana-Champaign, Institute for Child Behavior and
Development.
Raudenbush, S. W., & Chan, W. (1992). Growth curve analysis
in accelerated longitudinal designs. Journal of Research in
Crime and Delinquency, 29, 387–411. doi:10.1177/
0022427892029004001.
Schimmel, C., & Edwards, S. (2003). Literacy strategies for the
classroom: Putting Bi-Bi theory into practice. Odyssey, 5,
58–63. Retrieved from http://www.gallaudet.edu/Clerc_
Center/Information_and_Resources/Products_and_Publications/Odyssey.html.
Schirmer, B. R., & McGough, S. M. (2005). Teaching reading
to children who are deaf: Do the conclusions of the National Reading Panel apply? Review of Educational Research,
75, 63–117. doi:10.3102/00346543075001083.
Seymour, H. N., Roeper, T. W., & deVilliers, J. (2005). Diagnostic Evaluation of Language Variation—Norm Referenced
(DELV-Norm Referenced). San Antonio, TX: PsychCorp.
Stinson, M. S., & Kluwin, T. N. (2003). Educational consequences of alternative school placements. In M. Marschark
& P. E. Spencer (Eds.), Oxford handbook of deaf studies,
language, and education (pp. 52–64). New York, NY: Oxford
University Press.
Verhoeven, L., & Van Leeuwe, J. (2008). Prediction of the development of reading comprehension: A longitudinal study. Applied Cognitive Psychology, 22, 407–423. doi:10.1002/acp.1414.
Wilson, M. S. (2008). LanguageLinks: Syntax Assessment and
Intervention and Prepositions! Background, Rationale, and
Use In Instructional Program. Retrieved from http://
www.laureatelearning.com/pdfs/langlinks_prep_mono.pdf.
Wilson, M. S., Fox, B. J., & Pascoe, J. P. (2003). The theory and
research bases of Sterling Editions language intervention programs. Retrieved from Laureate Learning Systems, Inc.
http://www.laureatelearning.com/products/monopro/
llsmonographs.html.
Wilson, M. S., Fox, B. J., & Pascoe, J. P. (2008). Laureate’s language development programs: Theory and research. Retrieved
from Laureate Learning Systems, Inc. http://www.laureate
learning.com/products/monopro/llsmonographs.html.
Wolf, M. M. (1978). Social validity: The case for subjective
measurement or how applied behavior analysis is finding
its heart. Journal of Applied Behavior Analysis, 11,
203–214. doi:10.1901/jaba.1978.11-203.