
How Do Children Who Can’t Hear Learn to Read an Alphabetic Script? A Review of the Literature on Reading and Deafness

Carol Musselman
The Ontario Institute for Studies in Education
University of Toronto

Journal of Deaf Studies and Deaf Education 5:1 Winter 2000
I review the literature on reading and deafness, focusing on
the role of three broad factors in acquisition and skilled reading: the method of encoding print; language-specific knowledge (i.e., English); and general language knowledge. I explore the contribution of three communication systems to
reading: spoken language, English-based sign, and American
Sign Language. Their potential contribution to literacy is
mediated by four parameters on which they differ: codability,
structural isomorphism, accessibility, and processibility. Finally, I discuss the implications for additional research as well
as for education.
Learning to read is a critical developmental task that
has profound implications for educational, vocational,
and social development. This task is even more difficult
for children who are deaf. Language delay—which is
the hallmark of deafness—increases the challenge of
acquiring this skill. If one lacks normal hearing, spoken
language develops slowly and may never progress beyond a minimal level. Deaf children, therefore, have
only limited knowledge of the spoken language that
print represents. Even though most deaf persons eventually develop at least functional communication skills
in sign language, participation in society is rendered
more difficult by their lack of well-developed speech.
Reading (and writing), therefore, becomes even more
essential for accessing education and the workplace.
Researchers in the field of deafness have long been involved in a debate on communication methods. Until recently, this debate centered on the relative value of instruction in spoken language vs. sign language. More recently, it has shifted to a concern with the most appropriate form of sign language to use in educational settings. All schools of thought, however, agree on the importance of reading, and arguments favoring one communication mode over another frequently hinge on its purported ability to facilitate literacy. Notions of reading, therefore, are central to current conceptualizations of deafness and deaf education.

Correspondence should be sent to Carol Musselman, HDAP, OISE/UT, 252 Bloor Street West, Toronto, Ontario M5S 1V6 (e-mail: cmusselman@oise.utoronto.ca).

© 2000 Oxford University Press
There are two broad views of reading and deafness.
The dominant view is that deaf persons learn to read
and engage text using essentially the same processes as
do hearing persons. An opposing view is that deaf individuals read using qualitatively different processes. In
examining this controversy, I will first review key concepts about deafness and consider the impact of deafness on the development of interpersonal communication skills and literacy.
From this background, I will examine three factors
implicated in reading acquisition. The first is the
method of encoding print. It has been hypothesized
that the relatively poor reading skills of deaf individuals result from deficiencies in phonological processing.
Some researchers, however, argue that deaf persons
utilize visual representations of print, or codes based
on orthography, articulation, fingerspelling, or sign
language. I will consider evidence concerning the role
of these encoding strategies in reading. The second
factor is language-specific knowledge. Evidence will be
considered for the hypothesis that difficulties in reading derive from deaf persons’ limited knowledge of the
semantics and syntax of English. Finally, I will consider the role of general language knowledge and evidence for the hypothesis that general language skills
acquired in sign language can compensate for deficiencies in specific English-language skills.
Language and Deafness: Skill Levels,
Communication Modes, and Current Issues
What does it mean to be deaf? The most common
definition is an audiological one, originally developed in
the context of treating hearing problems due to aging.
The central concept in this definition is hearing loss
(measured in decibels), which represents the increase
in intensity a person requires to detect the presence of
sound. For most individuals who identify themselves as
deaf, however, elevated sound detection thresholds
were present at birth or soon thereafter, and hearing
was never lost. Educators typically define deafness in
functional terms, identifying as hard-of-hearing those who function auditorially given appropriate amplification and acoustic conditions, and as deaf those who do not.
But how do deaf people define themselves? A deaf
man with hearing parents says this about his educational experience: “Something echoed through my eyes
told me I was indeed different, but in what way!” Arguing that these differences give rise to a unique culture, he continues: “Deaf culture is reared upon deaf
nature, through the enhancement of visuality that it
emerged from” (Goulet, 1990). In a creative writing
task, a 16-year-old deaf boy with deaf parents imagines
the future development of four new cities on the moon.
He names one of the cities Eyeth:
Eyeth is a special city, that city is on this picture,
Eyeth have all deaf people not even hearing people
. . . Earth = alot of people are hearing in world.
People depend on their ear to listen. Eyeth = alot
of people are deaf in City. People depend on their
eye to listen.
What does it mean to naturally depend on the eye?
The most direct implication is that spoken language
will be acquired, if at all, only with difficulty. And, indeed, research shows that most individuals with severe
or profound hearing losses (i.e., greater than 70 dB) do
not acquire functional speech (Jensema, Karchmer, &
Trybus, 1978; Musselman, 1990). A minority, roughly
one quarter, do rely primarily on spoken language
(Musselman & Akamatsu, in press), but only as the result of intensive, long-term specialized instruction
(Musselman & Kircaali-Iftar, 1996). This is not to say
that deaf individuals do not use speech at all; in fact,
most make some use of spoken language, primarily in
situations in which the message is simple and there are
abundant contextual cues (Binet & Simon, 1909; Musselman [Reich] & Reich, 1976).
In contrast, deaf children have unimpaired abilities
to acquire sign language, which is language realized in
the visual modality. The only deaf children, however,
who actually acquire language at a normal rate are
those born to deaf families who use a natural sign language, such as American Sign Language (ASL) (Caselli & Volterra, 1990; Newport & Meier, 1985). Because 90% of deaf children are born to hearing parents,
they are unable to exploit their available language-learning capabilities due to this sensory mismatch. By
adulthood, most deaf persons communicate primarily
in ASL, with varying degrees of skill. A distinction has
been made between the Deaf community, which uses
ASL, and the larger community of deaf persons who
communicate primarily through speech (Padden &
Humphries, 1988; Wilcox & Corwin, 1990).
In addition to limited spoken language, deafness
usually results in poor knowledge of the semantics and
syntax of the spoken language. Studies of deaf individuals throughout the life span show limited vocabulary
acquisition, coupled with limited knowledge of the
multiple meanings of words (Paul, 1996a; Schirmer,
1985). Knowledge of grammatical rules is delayed,
with particular problems evident in verb tenses and the
rules for producing coordinate and compound sentences (Quigley, Power, & Steinkamp, 1977). In terms
of reading, most deaf teenagers and adults are severely
delayed, with reading comprehension skills usually
reaching a plateau at a grade 4 or 5 level (Holt, 1994).
Throughout most of this century, instruction for
deaf students was primarily auditory-oral (A/O). A/O
instruction promotes the development of spoken language through training the use of residual hearing and
speechreading. Failure to achieve widespread success
with this method and increasing understanding of the
linguistics of ASL resulted in the introduction of Total
Communication (TC) methods. As a philosophy, TC
involves the addition of visual methods of communication to the use of residual hearing and speechreading:
gestures, fingerspelling, and sign language. In practice,
TC generally refers to the simultaneous use of spoken
language and English-based sign. By the end of the
1960s, the majority of programs were of this type
(Stewart, 1993).
It is important to distinguish English-based sign
from ASL. The latter is a natural language with its own
vocabulary and syntax. The most obvious aspect of
ASL is the production of signs on the hands, which
correspond roughly to words in spoken language. The
order in which signs are produced conveys syntactic information, just as in spoken languages. Thus, ASL involves the production of signs in temporal sequence.
Unlike in spoken languages, however, facial expressions and body movements also convey syntactic information. Although gestures and affective displays play an important
role in spoken communication, they are optional rather
than obligatory, analogical rather than heuristic, and
provide an unbounded set of opportunities for message
modulation. In ASL, however, many facial and postural gestures are obligatory and rule-governed and
constitute a finite set of discrete meaning units. ASL
also has features that derive from the unique possibilities inherent in visual communication. One of these is
the use of space and the movement of signs through
space to convey semantic and syntactic information.
Space plays an important role in the formation of individual manual signs, as well as in discourse structure.
For example, after one introduces the major actors in
a story, these can be assigned to spatial locations and
subsequently indexed with a point gesture. Another
important feature of ASL is its use of simultaneous
(rather than purely sequential) elements. For example,
basic signs can be inflected by accompanying facial expressions and body movements, as well as modified by
adjectives and adverbs that are incorporated into the
sign itself, rather than affixed (Meier, 1991; Wilbur,
1987).
English-based sign, on the other hand, is best understood as a code for English. Signs, about 70% of
which have been derived from ASL, are arranged in
English word order, together with artificial signs that
have been created to represent the function words and
inflectional morphemes of English (Stewart, 1993;
Stokoe, 1975).
The 1990s have seen yet another dramatic shift in
deaf education: the systematic use of ASL. It has been
proposed that ASL is the native language of Deaf
people and should replace English-based sign as the
primary vehicle of communication (Sacks, 1989). In
Unlocking the Curriculum, Johnson, Liddell, and Erting
(1989) presented one of the first systematic arguments
for bilingual/bicultural education for deaf children, a
call that has been echoed by others (e.g., Israelite,
Ewoldt, & Hoffmeister, 1992; Mason & Ewoldt, 1996)
and that has led in recent years to the implementation of such programs.
Bi-Bi programs involve the use of two languages—ASL
and English, the latter primarily in print—and socialization into two cultures—hearing and Deaf.
The debate over communication methods is still a
lively one. Few continue to advocate an exclusively A/O
approach, and it is widely acknowledged that selection
of a spoken language vs. a sign language approach requires an assessment of individual capabilities. There is
much less agreement on the respective roles of English-based sign and ASL and how to assess students’ needs.
These controversies have implications for reading instruction and will be revisited in subsequent sections.
Phonological Processing by Deaf Readers
Stanovich (1991) writes that the “specification of the
role of phonological processing in the earliest stages of
reading acquisition . . . [is] one of the more notable scientific success stories of the last decade.” Research
shows that phonological processing is involved at all
levels of skill and plays a causal role in reading skill acquisition. Furthermore, studies show that phonologically based remedial programs are effective with disabled readers (e.g., Lovett et al., 1994). The role of
phonological processing in hearing readers will not be
elaborated here, as there is extensive literature on this
subject.
But is this true of deaf readers, whose orientation
to language is typically visual and whose spoken language is usually severely delayed? Conrad (1979) represents one of the earliest attempts to address this question. Using a sample that included most of the deaf adolescents in England and Wales, Conrad compared
their short-term memory for printed words in two experimental contexts: one in which the sets of words
rhymed (e.g., do, few, who, zoo, blue, true, screw, through),
and one in which the words in a set were visually similar (e.g., bare, bean, door, furs, have, home, farm, lane).
Examining error patterns on such a task provides evidence of how the items have been represented internally. If they are coded phonologically, readers should
have more difficulty remembering the rhyming words,
since they are more easily confused. Similarly, visual
encoding should result in more errors in remembering
the visually similar words. Conrad found that most of
the deaf teens made more errors when the words were
phonologically confusing than when they were visually
confusing. He did find a subgroup of teens who had
more visual confusions. Those using a phonological
code had more residual hearing and more intelligible speech.
Most important, however, use of a phonological code
was strongly related to reading comprehension. Thus,
although some deaf teens used a visual code, it did not
support skilled reading.
Conrad’s results might reflect the fact that all of his
participants were educated in oral programs. Subsequent research, however, has extended his finding to
students in other programs. In a review of the literature, Jacqueline Leybaert (1993) concluded that deaf
students from both A/O and TC programs use speech-based codes in short-term memory tasks for written
words, and that the extent to which they use phonological codes is positively related to their reading levels.
This relationship has been obtained using a number of
experimental paradigms, including the phonological
similarity task described above, plus letter cancellation,
Stroop, lexical decision, and judgments of rhyming and
semantic acceptability tasks.
One of the more telling studies demonstrated that
even deaf students whose first language was ASL and
whose speech was poor used phonological encoding
(Hanson, Goodell, & Perfetti, 1991). These authors
compared the ability of deaf and hearing college students to judge the semantic acceptability of printed
sentences, half of which were tongue twisters and half
of which were not. Both groups made more errors on
the tongue twister than the control sentences.
The sole exception to this general finding comes
from a study by Chincotta and Chincotta (1996), which
used an articulatory suppression task with orally educated Chinese deaf students. Articulatory suppression
is a phenomenon commonly observed in hearing
people, in which the repetition of an irrelevant word or
phrase is found to reduce verbal memory. Using a digit
span task, these authors found that a competing articulatory task did not suppress short-term memory in
their participants, suggesting that these deaf children
were not using a phonological code.
Stanovich (1980, 1994) made an important distinction between experimental tasks that assess problem-solving skills related to phonology and those that tap the
actual use of phonological knowledge in word identification. Only the latter, he argues, provide ecologically
valid evidence concerning the processes involved in
reading. Problem-solving tasks may merely reflect general intelligence, linguistic knowledge, or even constitute a by-product of reading itself. Unfortunately, most
studies of deaf students have used just such tasks.
Like others before him, Kelly (1993) tested for the
presence of phonological encoding by using a lexical
decision task, which requires participants to judge
whether or not strings of letters constitute words. Kelly
found that deaf adolescent readers from a TC program
did access phonological information, as evidenced by
faster reaction times for word pairs that were phonologically and orthographically similar compared to
pairs that were only orthographically similar. Concerned about the validity problem, however, Kelly also
included a reading recall task. He found that use of
phonological information failed to correlate highly
with either reading speed or accuracy. Thus, while subjects used phonological information to aid memory, it
did not play an important role in actual reading. Kelly
points out that this finding is tentative due to methodological problems in the study, including small sample
size (n = 17) and ceiling effects on the measure.
Support for Kelly’s finding comes from a study by
Waters and Doehring (1990). They again found that
deaf children and adolescents used phonological coding in short-term memory, but their ability to do so was
not related to reading achievement, as measured by the
reading vocabulary and reading comprehension subtests of the Stanford Achievement Test. Rather, reading scores were predicted by the accuracy and speed
with which they could identify words, not the encoding
processes used to do so. Furthermore, reaction times
and error rates on a lexical decision task were the same
for phonologically regular and irregular words, again
suggesting that the students were not encoding words
phonologically. These findings are especially telling because the subjects were all orally educated, meaning
that their training emphasized spoken language.
Leybaert and Alegria (see Leybaert, 1993) provided the first evidence for phonological encoding by
deaf participants in an actual reading task. In a series
of read-aloud studies, they found that deaf participants, like hearing participants, were able to pronounce
pseudo-words, i.e., novel, word-like strings without
meaning. Furthermore, evidence from both accuracy
rates and speed measures showed that pseudo-words
with a complex phonology were more difficult to read
than those with a simple phonology. Deaf subjects were
also more accurate in reading regular than irregular
words. Thus, deaf readers appear to use phonological
information during oral reading.
In addition to problems of ecological validity, another limitation of existing research is that most studies have been conducted with adolescents and
college students. Thus, it is possible that phonological
encoding is an outcome of learning to read, rather than
a prerequisite. The studies by Waters and Doehring
(1990) and by Chincotta and Chincotta (1996), both of
which included some preadolescents, found no evidence of phonological encoding.
A study by Hanson, Liberman, and Shankweiler
(1984) is one of the few providing evidence of phonological encoding by beginning deaf readers. Using a
sample of deaf children from a total communication
program, they compared short-term memory for sets
of letters that were phonetically (B C P V), dactylically
(M N S T) and visually (K W X Z) similar to a control
set. Since, in this task, the same stimulus sets are presented repeatedly, improved performance in one condition over another is evidence of encoding. It was found
that both phonetic and dactylic similarity improved
performance for good readers, whereas visual similarity did not. The accuracy of poor readers did not vary
by stimulus set. An analysis of errors provided additional evidence of phonological and dactylic encoding:
Among good (but not poor) readers, most errors
rhymed with the target. Good (but not poor) readers
also appeared to encode letters by the shape of the fingerspelled form. In a study using the Stroop paradigm,
Leybaert and Alegria (1993) also found that young deaf
readers accessed phonological information. There was,
however, no relationship between their access to phonological information and reading ability.
Although one would not deny the importance of
these sampling and methodological issues, the body of
evidence currently available supports the hypothesis
that skilled reading by deaf students (like that of hearing students) involves phonological encoding. Before
this conclusion can be accepted without qualification,
however, it requires systematic replication with both
beginning and mature readers on reading tasks having
high ecological validity.
In summarizing the literature to date, Hanson
(1991) concludes: “It is clear that alternatives such as
visual and sign coding can be used by deaf readers. . . .
The evidence suggests, however, that neither of these
alternatives is an effective substitute for a phonological
code in verbal short-term memory” (p. 157). If true,
we are left with a “Catch-22”: Despite having severely
deficient speech, deaf children must develop phonological processing capabilities in order to become skilled
readers. Leybaert (1993) concludes that this is precisely what underlies the reading problems of deaf
individuals. In addition to the body of research on encoding, this conclusion is consistent with program
evaluation studies showing that orally educated deaf
children have better reading skills than those educated
using sign language (e.g., Geers & Moog, 1989; Rogers,
Leslie, Clarke, Booth, & Horvath, 1978).
A further conceptual and methodological caution
about this conclusion must be raised. Waters and
Doehring (1990) argue that evidence of phonological
information being available during reading is not proof
that it is required. Rather than assembling phonological
representations from spelling-sound correspondences,
deaf individuals might use some other encoding system
to hold words in short-term memory, and then retrieve
the phonology as a whole from long-term memory
along with meaning.
Support for this position comes from a series of
studies using the Stroop paradigm conducted by Leybaert and Alegria (1993). These authors found that,
whereas deaf children demonstrated the familiar interference effect from color words on the standard task
that requires a vocal response, this effect was considerably reduced on a manual (i.e., button pressing) task.
In fact, profoundly deaf participants with unintelligible
speech had faster reaction times on the manual task
than either profoundly deaf participants with intelligible speech or participants with severe deafness. It is
also interesting that deaf readers did not show impaired
performance when reading pseudo-words that were
homophonic to the color names, whereas hearing readers did. Thus, deaf participants did not access phonological information unless required to produce it.
Taken together, these findings suggest that deaf
readers’ access to phonological representations may
follow, rather than precede, initial word identification.
If this is so, deaf readers must use another method to
represent print in short-term memory. Stanovich
(1994) has suggested that phonological encoding may
function, not directly in lexical access, but as an efficient way of holding strings of words in short-term
memory while higher-level processors operate upon
them. Is there evidence that deaf readers do use other
encoding strategies to access text, and that these strategies can support skilled reading? Various proposals
have been advanced, all of them looking to deaf readers’
unimpaired visual sensorium.
Are There Alternatives to Phonological Encoding?
Orthography. Conrad’s (1979) early study showed that
some deaf readers used an orthographic strategy, although it appeared to be less effective than phonological encoding. Evidence from a study of hearing children shows that an orthographic strategy can be
effective in learning to read, albeit with a nonalphabetic
language. Huang and Hanley (1994) studied reading
acquisition in English- and Chinese-speaking children
from both Taiwan and Hong Kong. Written Chinese
is primarily logographic; although almost 80% of the
characters do include a phonetic marker, it is not always a reliable guide to pronunciation.
The children in their sample were assessed on a
battery of tests, including measures of phonological
awareness. Using multiple regression analysis, Huang
and Hanley found that, when nonverbal IQ and vocabulary knowledge were taken into account, the relationship between phonological awareness and reading disappeared in the two Chinese groups. A test of visual
skill (paired associate learning of unfamiliar figures)
was the most powerful predictor of reading for these
children (with simple correlations at .70 and above).
For the English-speaking children, however, measures
of phonological awareness continued to contribute significantly to the equation.
Furthermore, the level of phonological awareness
in the two Chinese groups was found to reflect method
of instruction. In Taiwan, students first learn to read a
phonetic script for Chinese (Zhu-Yin-Fu-Hao), while
Hong Kong children are introduced directly to characters. It is thus not surprising that the Taiwanese
students outperformed those from Hong Kong on
measures of phonological awareness in Chinese. The
difference, however, was not related to reading performance.
If visual processes play an enhanced role in hearing
children’s learning to read a logographic language,
might deaf children effectively use this strategy when
learning an alphabetic language? This possibility is enhanced by evidence that deaf persons have better visual
processing skills than hearing persons. For example,
Parasnis and Samar (1982) found that deaf students
were faster at noticing visual stimuli. In her review of
the literature, Parasnis (1983) concluded that this visual advantage obtains only for certain types of tasks,
namely those that require attention to the whole pattern; on tasks requiring analysis of the pattern, deaf individuals perform more poorly. This conclusion is supported by the results of a more recent study by Craig
and Gordon (1988), in which deaf adolescents were
found to perform better than a hearing group on visual
localization and closure tasks, but less well on a task
requiring them to determine how many blocks in a
three-dimensional array were touching a designated
block. They further investigated the relationship between performance on these tasks and reading and
found a significant relationship only for the touching
blocks task. These studies suggest that, although deaf
students have some enhanced visual processing skills,
these are not involved in reading.
Padden (1993) found evidence for orthographic encoding by deaf students, albeit in spelling. Investigating
the spelling errors of young deaf children whose primary language was ASL, Padden concluded that these
children attempted to reproduce the overall shape of a
word, tending to confuse letters of the same height
(e.g., t, d, b) and those with descenders (e.g., p, q, g), as
well as being sensitive to the feature of doubling (e.g.,
spelling green as ganne). These young deaf spellers only
produced letter sequences that are possible in English,
suggesting sensitivity to orthographic information. Although suggestive, these findings are limited because
they derive from an analysis of a relatively small number
of spelling errors; for correctly spelled words (which
were considerably more frequent in her corpus), it was
impossible to separate the effects of orthography from
phonology. The study also did not investigate the relationship between use of an orthographic code and
spelling performance.
Beyond Conrad’s (1979) work, only a few studies
have investigated the role of orthography in reading itself. One of the difficulties with this research is disentangling the effects of phonological and orthographic
similarity. Two studies, designed to test the hypothesis
that phonological encoding is used in reading, attempted to eliminate orthographic similarity as a possible confound. Quinn (1981) tested deaf adolescents
from both A/O and TC programs on a letter detection
task, varying whether the target letter (in this case the
letter g) had a regular or irregular pronunciation in the
word. He found that both groups of deaf subjects made
more errors on the irregular than the regular words,
indicating that phonological encoding was involved in
this simple task in which orthographic features were
held constant. In a post-hoc analysis, however, Quinn
determined that the irregular words, although as frequent in English as the regular words, included less
common letter sequences. Thus, the results could have
reflected the deaf subjects’ sensitivity to orthographic
regularities.
The previously cited study by Hanson, Goodell,
and Perfetti (1991) found that it was more difficult to
judge the semantic acceptability of tongue-twister than
control sentences, a finding that provides evidence for
phonological encoding. Two additional conditions were
included in this study in order to eliminate orthographic similarity as a possible confound. In these conditions, a concurrent memory task was imposed on the
judgments of semantic acceptability. The stimuli were
either phonologically similar or dissimilar to the target
sentences, but numbers were used rather than letters
or words. Phonological similarity of the concurrent
memory stimuli was found to further increase the
difficulty of judging semantic acceptability, despite the
absence of any possibility of orthographic similarity.
Two other studies have explicitly tested for the
presence of orthographic encoding. Hanson (1982) investigated short-term memory for word sets that were
phonologically or orthographically similar. She found
no evidence of orthographic encoding for either signed
or printed presentations. Parasnis and Whitaker (1992)
also compared the effects of phonological and orthographic similarity on verbal processing. Arguing that
the visual-processing superiority of deaf individuals
derives from exposure to sign, they selected only deaf
students who were fluent signers. Subjects were asked
to judge whether pairs of words rhymed. Word pairs
were constructed to be either phonologically similar
(e.g., bowl-toll) or orthographically similar (e.g., bowl-howl). Overall, the deaf signers scored barely above
chance in judging whether or not the phonologically
similar pairs matched but considerably below chance
on the orthographically similar pairs, suggesting that
they were primarily using an orthographic strategy.
There was, however, considerable intersubject variability. Relating performance to reading, Parasnis and
Whitaker found a significant relationship between use
of a phonological code and reading, but no such relationship for orthographic encoding.
Using orthographic features to encode print would
seem to be a natural compensatory strategy for readers
with limited hearing. This hypothesis is consistent
with the empirical finding that deaf individuals have
superior skills on some visual processing tasks. Although there is evidence that some deaf individuals do
use an orthographic code, it appears to be less effective
than a code based on phonology.
Articulation. A number of investigators have argued that
the “phonological” code used by deaf readers may, in
fact, be based on speech movements. Leybaert (1993)
suggests that a phonological code need not be sound-based, so long as it provides a complete and unambiguous representation of the text. Chalifoux (1991) proposes that deaf readers assemble a visual representation
of the text by converting graphemes into articulatory
movements, retaining them in a visual-spatial store (in
contrast to hearing readers who convert them to phonemes and retain them in an acoustic store). Chincotta
and Chincotta’s (1996) observation that deaf children
mouthed stimuli in their short-term memory task provides anecdotal evidence. Leybaert and Alegria (1993)
propose that deaf children may draw on their knowledge of both speechreading and speech production to
represent print, thus combining visual and kinesthetic
cues. Parasnis and Whitaker (1992) found that speechreading ability was related to participants’ use of a phonological code, whereas speech intelligibility was not, supporting Chalifoux’s notion that articulatory movements
are represented visually.
One of the problems with a visual articulatory code
is that a number of phonemes look alike on the lips,
resulting in a code that is ambiguous. Cued Speech
(Cornett, 1967) is one of several systems that have been developed for distinguishing phonemes that are visually confusable. Cued Speech consists of a system of
manual signals that a speaker coordinates with speech
and that, together with the lip movements, provide an
unambiguous representation. On their own, cues do
not provide an interpretable signal, and the deaf listener must attend to the oral as well as the manual
movements. Summarizing several studies on the relation between Cued Speech and phonological encoding,
Leybaert and Charlier (1996) concluded that deaf children with greater exposure to Cued Speech (both at
home and at school) relied on phonological coding to a
greater extent than deaf children with less exposure
(only at school).
It has not yet been possible to disentangle articulatory from phonological encoding empirically, so there is no direct demonstration that deaf readers use articulatory encoding in reading-related tasks. The only supportive evidence to date
is indirect and extremely sparse, although it remains a
worthwhile hypothesis to investigate.
Fingerspelling. Another possible visual substitute for
phonological encoding is fingerspelling. Interestingly,
Stanovich (1991) proposes that phonological encoding
may be important in reading because it focuses the
child’s attention on letters, thus facilitating the induction of orthographic structure. Fingerspelling provides
a comprehensive and unambiguous means for representing the phonetic structure of language in a manner
that is uniquely isomorphic to the printed text.
In the past, fingerspelling in combination with spoken language was used as a primary method of communication in some educational settings. This approach, known as Visible English or the Rochester Method, was intended to maximize the relationship between the interpersonal communication system and English print. Hoemann
(1972) studied the spontaneous use of fingerspelling
and sign in a referential communication task by
elementary-age deaf children from a school that used
this method. He found that even the youngest students
(ages 6–7) used fingerspelling more frequently than
signs (53% vs. 33%). Furthermore, peer receivers usually understood fingerspelled messages, although incorrect spelling did create some difficulties for younger
students. In an experimental study, Quigley (1969)
found that deaf students educated using the Rochester
Method had better language and academic skills than
those educated orally. This method has now been
largely abandoned as a primary communication tool
because it is cumbersome and difficult to implement
consistently (see Reich & Bick, 1977).
There are a few reported cases of fingerspelling
serving as the primary language for children with deaf
parents. In natural settings, however, fingerspelling
primarily functions to supplement signs. Proper names
are commonly fingerspelled, as are loan words from
English for which there is no ASL equivalent. Signs
for which the English equivalent is ambiguous may be
fingerspelled, a practice shown to contribute to the
clarity of simultaneous communication (Mallery-Ruganis & Fischer, 1991). Signs that are ambiguous in
English may also be initialized; that is, the natural handshape is replaced by the letter that begins the English
word, while the remaining features of the sign are retained. Over time, frequently fingerspelled words may
become regularized and incorporated into the sign lexicon (Battison, 1978).
Studies of naturalistic communication within deaf
families reveal that parents even use fingerspelling with
infants. Deaf mothers have been observed to fingerspell
words for emphasis, to represent a concept that has no
natural sign and, with older children, specifically to develop literacy (Erting, Thumann-Prezioso, & Benedict, 1997).
Padden and Ramsey (1998) studied the acquisition
of reading and writing skills by a large sample of deaf
and hard-of-hearing children. They found a moderate
correlation between a measure of fingerspelling comprehension and reading comprehension. Moreover, in
systematic observations of a number of classrooms,
they documented quite widespread use of fingerspelling during reading instruction. Although deaf teachers
fingerspelled more than twice as often as hearing teachers, the use of fingerspelling was quite extensive in both
groups: an average of approximately twice per minute
for deaf teachers and once per minute for hearing
teachers. Padden and Ramsey observed fingerspelling
frequently being used in what they called chaining sequences, a sequence of interactions in which teachers
formed an explicit relationship between a sign or a
printed word and a fingerspelled word.
Only two studies have directly investigated the possible use of fingerspelling to encode print. The previously cited study by Hanson, Liberman, and Shankweiler (1984) found evidence for dactylic (as well as
phonological) encoding of letters from an analysis of
accuracy rates and error patterns. Mayberry and Waters (1987) investigated the use of fingerspelling for encoding print. Using both word recognition and lexical
decision tasks, they compared the ability of deaf children and adolescents to recognize words presented in
print and in fingerspelling. The finding that words
were recognized faster and with greater accuracy when
presented in print suggests that fingerspelling was not
the method of encoding.
Although fingerspelling is a component of ASL
and may be used by teachers to mediate between sign
language and printed English, there is no direct evidence that deaf readers use it to encode print. The
strongest support comes from Padden and Ramsey’s
(1998) finding that fingerspelling and reading comprehension were related; the direction of causation in
these data, however, remains unclear.
Sign language. Since most deaf individuals develop at
least functional interpersonal communication in a natural sign language, it has been called the “native” language of deaf people. As such, signs would seem an obvious candidate for encoding print.
Four parameters or cheremes characterize signs:
hand shape, location on the body, movement, and hand
orientation (Wilbur, 1987, p. 21). Studies have shown
that working memory for signs has important parallels
with that for speech. Lists of signs that share cheremes
are more difficult to remember than control lists because they share linguistically significant distinctive
features and are thus more confusable. Furthermore,
this interference effect is suppressed by a competing
manual task (which prevents thorough processing of
the linguistic stimuli). Thus, the surface features of
sign seem to be encoded in a memory loop and refreshed by rehearsal (in this case manual rehearsal),
just as in speech (Wilson & Emmorey, 1997).
There are, however, important differences between
visual and auditory memory. Most particularly, auditory memory is superior at preserving sequential information, whereas visual memory is better for simultaneous information. There are documented differences
between the cognitive functioning of deaf and hearing
individuals that appear to correspond to their respective reliance on visual and auditory language. For example, studies of serial recall show that hearing individuals have better forward recall than deaf individuals,
which is consistent with their greater facility with temporal verbal processing. Deaf individuals, on the other
hand, perform equally well in both directions (somewhat more poorly than hearing persons on forward recall and somewhat better on backward recall), which
suggests that they may be using simultaneous processing. Deaf individuals also show better memory than
hearing persons for spatial location (for reviews, see
Craig & Gordon, 1988, and Wilson, Bettger, Niculae, &
Klima, 1997).
It has been found that deaf individuals use sign to
encode nonprint information in short-term memory.
Using suppression tasks, MacSweeney, Campbell, and
Donlan (1996) tested the short-term memory of hearing and deaf children for pictures of common objects.
Children performed either an articulatory suppression task (repeating because, because . . . aloud), a sign suppression task (signing because, because . . .), a manual tapping task, or a foot-tapping task. The results indicated that articulatory suppression decreased the performance of both
groups, although the hearing children were more
affected than the deaf children. They also found that
sign and manual suppression tasks decreased the performance of the deaf group, but not the hearing group.
It seems, therefore, that deaf children used both
speech-based and sign-based encoding on this task.
Hamilton and Holtzman (1989) found that the
method of encoding varied with task demands as well
as participant characteristics. Their study included
hearing and deaf participants with differing levels of
spoken language and sign language experience and fluency. The stimuli consisted of word lists that were either
phonologically similar, cheremically similar, or dissimilar. Lists were presented either orally, in sign, or simultaneously in speech and sign. Overall, performance was
found to vary with stimulus type and presentation
mode: In the oral-presentation condition, phonological
lists were recalled less well than cheremically-similar or
control lists, and participants made more phonological
intrusion errors. The reverse was true under manual presentation, with poorer recall of cheremically-similar
lists and more cheremic intrusions.
Encoding strategy also varied with the language experience and fluency of the participants. Deaf participants who first learned to speak and then to sign
showed a phonological similarity effect for oral stimuli,
but no cheremic similarity effect for manual stimuli.
Conversely, early deaf signers with poor speech showed
a cheremic similarity effect. Early deaf signers with
good speech (i.e., bilinguals) did not show either similarity effect, nor did bilingual hearing subjects. This
suggests that bilingual subjects were able to maximize
performance by choosing different encoding strategies
depending on the task demands.
These studies of sign encoding have used only nonprint stimuli. Is there evidence that signs can provide
an efficient memory trace for printed materials? Conlin
and Paivio (1975) found that deaf signers had better
recall for paired associates presented in print that had
a commonly used sign equivalent than for those that
did not. Hanson’s 1982 study also included a cheremically similar word set in addition to phonologically and
orthographically similar sets. This study is important
because it included only native deaf signers, thus maximizing the likelihood of finding cheremic encoding.
Hanson also varied mode of presentation, including
both sign and print conditions. When words were
signed, both phonologic and cheremic similarity produced performance decrements in ordered recall, suggesting the presence of dual encoding. When words
were printed, however, evidence was found only for
phonologic encoding. (As mentioned above, orthographic similarity did not affect performance in either
signed or printed presentations.)
This study also investigated the effect of similarity
on unordered recall of printed words. Neither phonologic nor cheremic similarity effects were found, except
when participants were directly instructed to use signs
as a mnemonic device. Thus, as demonstrated in other
studies (e.g., Hamilton & Holtzman, 1989), encoding
may vary with task requirements. Hanson’s study suggests that phonological encoding is used for printed materials, especially when information about order needs
to be preserved (see also Krakow & Hanson, 1985).
Preserving order in a problem-solving task, however, may not be the same as using order in a linguistic
task. Treiman and Hirsh-Pasek (1983) studied the encoding strategies of native signers by comparing their
ability to judge the semantic acceptability of various
types of sentences. They found that participants did
as well judging tongue twisters as control sentences,
suggesting that they were not using phonological encoding. Similarly, neither “articulatory-twisters” nor
“finger-twisters” were more difficult to judge, although
some participants reported that they visualized words
as fingerspelled when reading. Participants, however,
performed more poorly on “hand-twisters” and reported that they used a sign-based code. Although
these investigators did not directly relate encoding
strategy to reading ability, all of the participants in this
study were relatively good readers, having attained an
average reading level of 7.0 grade equivalence, compared to national norms for deaf students of about 4.5
(Holt, 1994). These findings stand in direct contrast
to those of Hanson, Goodell, and Perfetti (1991), cited
previously, who did find evidence of phonological encoding using the same task. The samples in both studies were similar, comprising native adult signers. Those
in the Hanson study, however, were university students
whose reading levels were almost two grade levels
higher than those in Treiman’s sample.
Additional evidence for sign encoding comes from
the study by Mayberry and Waters (1987), previously
cited as failing to provide evidence for fingerspelling-based encoding. This study, which used a word recognition task as well as a lexical decision task, assessed the
speed and accuracy with which signing deaf children
and adolescents could read printed and fingerspelled
words. The investigators also compared performance
for words that had a commonly used sign equivalent
with those that did not, finding higher performance on
the former. The study, however, did not control for
word familiarity, leaving open the possibility that the
more signable words were also more familiar.
Phonological encoding revisited. The evidence concerning
encoding methods appears contradictory and inconclusive. Overall, evidence supports the use of flexible
encoding strategies that vary with the stimulus mode
(sign vs. print) and, perhaps, with whether ordered or
unordered recall is required (cf. Chalifoux, 1991). Nevertheless, the weight of the evidence supports the use
of phonological encoding by better deaf readers when
processing printed materials.
The range of possible variables impinging on the
processing of text is considerable, making this a complex area to investigate. A series of studies by Lichtenstein (1998) is important because of its comprehensiveness. Lichtenstein considered multiple encoding
strategies, including phonology, orthography, and sign
(which included fingerspelling). Using a working
memory paradigm similar to Conrad’s, he found evidence that deaf university students used both phonological and orthographic encoding. Asked to introspect, participants reported using multiple strategies,
including sign, in a variety of memory and reading and
writing tasks. Only the use of a phonological code,
however, was significantly associated with working
memory capacity. Lichtenstein also included standardized tests of reading and writing. He found that both
phonological and orthographic encoding were positively associated with performance, although the correlations for phonological encoding were stronger. Reported use of sign in both memory and reading tasks
was negatively related to reading and writing performance.
Thus, Lichtenstein’s study reinforces the importance of phonological encoding. It does provide the
first evidence, however, that orthographic encoding
may play a role in skilled reading. This study did not
obtain the expected relationships between phonological encoding and either educational or communication
history. Although this may reflect the select nature of
the sample, it nevertheless raises questions about how
phonological coding develops and whether it is a prerequisite or an outcome of learning to read.
The Role of English Language Skills (Language-Specific Knowledge)
Early models of reading tended to adopt a “bottom-up” view, in which processing was assumed to occur in
relatively discrete stages, beginning with input of the
surface text and proceeding sequentially through increasingly higher levels of linguistic and metalinguistic
analysis. An alternative class of “top-down” models
views reading as being driven by the purposes and active cognizing of the reader, who samples the text in
order to confirm emerging hypotheses. Most current
models view reading as an interactive process utilizing
both bottom-up and top-down processes. The following sections will consider two of the latter: knowledge
of the specific language being read and general language skills.
As noted previously, deaf students are generally delayed in their acquisition of English semantics and syntax (see Paul, 1998, for a review). It seems intuitively
obvious that deficiencies in lexical and syntactic knowledge would be implicated in their poor reading skills.
Most researchers have assumed this to be the case, inferring the existence of a direct relationship from the
fact that deaf individuals typically score low in tests of
both reading and language (Berent, 1993; Paul, 1998,
p. 73). There is, however, sufficient direct evidence to
corroborate the existence of a relationship. LaSasso
and Davey (1987), for example, found strong associations between vocabulary knowledge and various
measures of reading comprehension in a sample of
profoundly deaf adolescents. A study by Paul and Gustafson (1991) obtained similar findings. One of the
largest and most comprehensive studies in this area is
by Kelly (1996), who investigated the relationships
among vocabulary, syntax, and reading comprehension
in a large sample of secondary and postsecondary deaf
students from both A/O and TC programs. The findings revealed that both vocabulary and syntax had
strong relationships to reading comprehension. Waters
and Doehring’s (1990) study of oral students provides
further support since it included both speech- and
print-based measures of language, thus avoiding the
circularity present when only print-based measures of
language are used.
Two companion studies are of particular interest
because they had relatively large samples and assessed
students’ language skills on a comprehensive battery of
measures. The two studies had identical designs, and
both sampled profoundly deaf adolescents (ages 16–
17). Geers and Moog’s (1989) sample included 100 students who were enrolled in A/O programs. Moores
and Sweet (1990) studied two groups of students from
TC programs: a sample of 65 deaf students with deaf
parents, and a second sample of 65 students with hearing parents. Although these studies did not include
measures of phonological processing per se, they did
include measures of hearing, speech production (including articulation), and oral communication, all of
which would be expected to underlie phonological processing. They also measured cognitive skills (verbal and
performance IQ scores), English vocabulary and syntax, and measures of English-based signing and ASL.
The large number of measures was subjected to multivariate analysis, culminating in stepwise regressions of
the predictor variables on a composite measure of
reading.
The specific findings differed somewhat among the
three samples, due in part to differences in the analysis
strategy. In all three groups, however, measures of English vocabulary and syntax contributed to the regression equations on reading comprehension. In the oral
sample, these measures were speech-based whereas, in
the two TC samples, they were either print-based or
reflected English-based sign skills. Measures of residual hearing were predictive of reading skill for oral children and for TC children with deaf parents. Although
hearing measures did not predict the reading ability of
TC children with hearing parents, a measure of lipreading ability did. These findings provide some support for the notion that phonological processing is involved in deaf children’s reading. The main import of
the findings, however, concerns the importance of
knowledge of English vocabulary and grammar,
whether represented in speech, print, or sign. I will return to these studies again when considering the role
of sign language skills in reading.
The evidence is thus compelling that knowledge of
English semantics and syntax is related to reading
comprehension. It is not clear, however, what role these
skills play in reading or their relative importance. Paul
(1996b) summarizes three hypotheses about the role of
vocabulary knowledge in reading. The instrumental hypothesis proposes that vocabulary represents languagespecific lexical knowledge that is required in order to
derive meaning from text. The other two hypotheses
view vocabulary knowledge as reflecting more general
cognitive skills. The aptitude hypothesis proposes that it
represents basic verbal processing skills. The knowledge
hypothesis views vocabulary as incorporating conceptual knowledge required to comprehend new information and integrate it with existing schema. Vocabulary
development involves expanding this organized store of
information about the world as well as learning more
word meanings per se.
Paul (1996b, 1997) argues explicitly for the knowledge hypothesis. He distinguishes between teaching
isolated word meanings and developing text vocabulary,
by which is meant the ability to comprehend words in
context. Knowledge of text vocabulary represents a
broader and deeper appreciation of word meaning than
is represented by the traditional pedagogical strategies
of asking students to define words or use them in isolated sentences. Knowledge of text vocabulary allows
students to select among multiple possible word meanings and to develop a nuanced appreciation of the
meaning of words as instantiated in particular texts.
Thus, the knowledge hypothesis views vocabulary
knowledge as integrating both language-specific and
more general language-processing and cognitive skills.
Whereas some earlier studies (e.g., Robbins and
Hatcher, 1981) found that vocabulary instruction failed
to increase reading comprehension, Paul reviews evidence showing that it does do so when instruction addresses vocabulary knowledge in this more comprehensive manner.
The precise role of syntactic knowledge in reading
is also a matter of controversy. As is the case for vocabulary, the most intuitively obvious position is that
knowledge of specific syntactic structures is required
in order to derive meaning from text. While the lexical
items in text specify the basic concepts being discussed, syntax modulates root word meanings so as to
indicate their interrelationships (temporal, psychological, conceptual, etc.). One might term this an instrumentalist view of syntax, which parallels the instrumentalist
view of vocabulary.
Several studies by Kelly provide compelling evidence concerning the role of syntax in reading comprehension. In a 1993 study, Kelly found that skilled deaf
readers more accurately and quickly recognized the
function words and inflections appearing in text than
did poor readers, while there was little difference in
their ability to recognize content words. In a later
study, Kelly (1996) examined the relationships among
vocabulary, syntax, and reading comprehension, using
data from the adolescent samples studied by Geers and
Moog (1989) and by Moores and Sweet (1990), in addition to a third sample of postsecondary deaf students.
Using multiple regression analysis, he found that most
of the variance in reading comprehension was explained by an interaction between vocabulary and syntax, with vocabulary knowledge (but not syntax) also
entering into the equation. Thus, vocabulary and syntactic knowledge did not function independently. A detailed investigation of the relationships suggested that
a certain level of syntactic knowledge was required in
order for vocabulary knowledge to be accessible.
These findings suggest that syntax, in addition to
functioning instrumentally to denote specific aspects of
meaning, is essential to the processing of text. Syntax
may facilitate processing in several ways. The syntactic
structure of a sentence specifies a word’s form class,
thus providing clues for disambiguating words with
multiple meanings. Apprehending syntactic structure
also assists in holding a sentence in working memory
while the meanings of individual words are being retrieved and integrated. Finally, automaticity in processing syntax frees up working memory, allowing
more capacity to be devoted to word retrieval and the
construction of meaning (see Kelly, 1996, and Paul,
1998, for a discussion).
In a subsequent study, Kelly (1998) found that the
reading comprehension of deaf adults improved following instruction in two specific syntactic structures
with which deaf readers typically have difficulty (i.e.,
relative clause and passive voice). Although the results
for individuals were variable, some participants showed
dramatic improvement in the ability to comprehend
sentences using the target structures.
The evidence that knowledge of English vocabulary and syntax plays an important role in reading comprehension is thus varied and compelling. It has been
suggested that limited knowledge of English results in
deaf readers adopting qualitatively different processing
strategies from hearing readers. For example, LaSasso
(1985) suggests that deaf children answer reading comprehension questions by visually matching words in
the text with words in the response alternatives, rather
than reading for meaning. Gormley and Franzen
(1978) argue that deaf readers assume an S-V-O sentence structure and derive meaning from context. Similarly, DeVilliers and Pomerantz (1992), after failing to
find a significant relationship between a measure of
syntactic knowledge and reading comprehension, suggested that deaf readers tend to ignore unfamiliar syntactic structures and rely on vocabulary.
Most theorists believe that atypical language processing by deaf persons represents an attempt to compensate for language delay, rather than a qualitatively
different orientation to language itself (Paul, 1998).
Claims that deaf persons process language in a qualitatively different manner typically derive from studies of
their English language abilities. For example, the productive language of deaf students has been described as
stilted and inflexible (e.g., Paul, 1998, p. 71). A study of
both written and signed productions by Everhart and
Marschark (1988), however, showed that the signed
language of deaf children was as flexible and inventive
as the written language of hearing children. Thus, as
their skills continue to develop—which will likely occur over a longer time frame (Berent, 1993)—deaf students should demonstrate increasingly sophisticated
use of both the semantics and syntax of English.
A final issue concerning the relationship of
language-specific knowledge to reading comprehension is the direction of causation. Although it is generally assumed that knowledge of English is a prerequisite to reading, it is also possible for it to be an outcome.
This was the view of deaf adults surveyed by Dalby and
Letourneau (1991), who reported that they developed
English language skills through experience with print,
rather than acquiring them before learning to read.
The Role of General Language Knowledge:
The Possibility of Language Transfer
Dalby and Letourneau’s (1991) finding suggests that
deaf readers may engage print directly, just as spoken
language and sign language are engaged as primary linguistic systems, with the individual deriving meaning
through application of world knowledge, general language skills, and the social mediation of more skilled
users. Semantic and syntactic structure would then be
directly induced from the meaning representations
that have been constructed.
Stanovich (1993) articulates a somewhat similar
view. Stanovich argues that, once sufficient vocabulary
and word identification skills have been acquired to engage text at a basic level, reading itself becomes an important avenue for future linguistic and cognitive
growth. Stanovich (1994) further suggests that readers
may draw on higher level processes to compensate for
poorly developed lower level skills. More specifically,
he presents evidence showing that poorer hearing readers rely on context, not only to learn new words, but to
compensate for their limited automatic word recognition skills. Although only partially successful, greater
reliance on context does enhance performance. Stanovich terms this an interactive-compensatory hypothesis,
which he suggests applies to processing at the sentence
level. Might, however, even higher level skills play a
compensatory role in the reading of deaf individuals?
Several researchers have considered the possibility
that deaf readers make greater use of context and background knowledge in deriving meaning from text than
hearing readers. Gormley and Franzen (1978) developed this proposal and suggested that teachers should
use familiar reading materials in order to capitalize on
deaf students’ spontaneous use of this strategy.
The evidence for this proposal is scant. In fact,
some studies have found that deaf students make relatively little use of context. DeVilliers and Pomerantz
(1992) found that deaf students did derive the meaning
of new words from context. Andrews and Mason
(1991), however, found that deaf students made less use
of context than did hearing readers, although they re-
lied on background knowledge. Jackson, Paul, and
Smith (1997) found that deaf students did not show the
usual difficulty hierarchy when responding to questions of different types, namely, text-explicit, text-implicit, and script-implicit. They interpreted this result as indicating that the students did not use either
context or prior knowledge in responding to questions.
Brown and Brewer (1996), however, found that this was
true only for poor readers, and that skilled deaf readers
performed like hearing readers in drawing inferences
from text. In a similar vein, studies have found that
deaf students make relatively little use of metacognitive
strategies in reading (Strassman, 1997). The authors
all attribute some of these deficits to the failure of
teachers to encourage their use, suggesting that too
much attention is devoted to decoding the English text.
Part of the rationale for using sign language in the
education of deaf students is that, because it is more
accessible, students can quickly develop a functional
language base with which to acquire knowledge about
the world and develop higher level processing skills.
Thus, advocates of sign language would seem to have
implicitly adopted a fully interactive, compensatory
model. There is, however, a vigorous debate over the
relative educational merits of English-based sign vs.
ASL. The controversy over which form of sign language to use might hinge on which higher level processes are more important.
Some educators argue that there is an essential role
for English-based sign because it is isomorphic with
the structure of printed English (Mayer & Wells, 1996;
Paul, 1996a). This isomorphism is seen primarily at the
morphological and syntactic levels. At the semantic
level, English-based signing is more similar to ASL
than to English. However, the use of English word order and artificial inflectional morphemes does allow the
representation of English morpho-syntactic structure.
This position echoes that of some of the early deaf educators, such as the Abbé de l’Épée in France (1784) and
Thomas Gallaudet in the United States (1848). John
Carlin (1859), a deaf man from the same time as Gallaudet, writes that he instantly recognized printed
words as units, but had to retain sentences in memory
until they could be understood. He proposes that, for
young deaf pupils, “easy and familiar words should be
taught by appropriate signs . . . and that the simple
rules of grammar should be explained in the signs in
the order of the words” (p. 19). His position, therefore,
is that word recognition occurs via sign encoding, but
that comprehension of text requires the teaching of
English grammar through English-based sign.
Proponents of ASL, on the other hand, offer a
number of theoretical and empirical arguments for its
superiority in promoting literacy. One is that the use of English-based signing has had a disappointing effect on deaf children’s literacy levels. Not only have
reading levels remained low (Schildroth & Karchmer,
1985), but studies show that deaf children in total communication programs still develop only rudimentary
knowledge of English syntax (Geers & Schick, 1988;
Moores & Sweet, 1990).
One reason for its failure may be that English-based signing does not, in fact, faithfully represent English (Stokoe, 1975), and users do not completely encode the spoken message into sign (Marmor & Petitto,
1979; Maxwell & Bernstein, 1985). English-based signing is best described as a code for English, rather than a
complete language in its own right. As an artificial system that derives from English, English-based signing
may lack important features present in natural languages that have evolved through use over time.
Evidence, in fact, shows that ASL is more easily
acquired than English-based sign. Studies show that
deaf children of deaf parents have ASL skills that are
comparable to the spoken language skills of their hearing peers (Caselli & Volterra, 1990; Newport & Meier,
1985). Deaf students exposed only to English-based
signing spontaneously innovate ASL-like structures
(Luetke-Stahlman, 1988; Mounty, 1989; Nelson & Camarata, 1993). Even in the face of limited exposure and
lack of formal instruction, the ASL skills of deaf adolescents equal or surpass their skills in English-based
signing (Moores & Sweet, 1990; Musselman & Akamatsu, in press). Studies also show that ASL is the predominant language used by most deaf adults (Musselman [Reich] & Reich, 1976), even those educated
primarily in auditory-oral programs.
Gee and Goodhart (1988) and Mounty (1989) argue that English-based signing is not fully processible
because it is poorly adapted to the constraints of the
visual modality, which does not allow as rapid processing of individual elements as does audition. The
structure of ASL—with its complex morphology and
simultaneous (rather than sequential) presentation of
meaning units—allows the same information to be encoded in a smaller number of discrete signs. ASL, says
Mounty (1989), “is reflective of the human capacity for
language in a visual modality” (p. 57).
Advocates of bilingual/bicultural education draw
upon Cummins’s (1989, 1991) linguistic interdependence hypothesis to argue that general linguistic skills
developed in ASL will transfer to English print. In a
thoughtful review, Mayer and Wells (1996) argue that
there is empirical evidence for the notion that literacy
skills in a first language transfer to a second, but no
evidence that interpersonal skills in a first language
transfer to literacy skills in a second. Because ASL
lacks a printed form, according to their argument, it
does not satisfy the conditions for linguistic interdependence. Further developing this argument, Mayer
and Akamatsu (1999) conclude that English-based sign
is essential for bridging the gap between interpersonal
communication and literacy, although they view the
use of ASL as critical for promoting cognitive and social development.
This polarized view of English-based sign and
ASL, however, is somewhat artificial. Users of English-based sign do not always restrict themselves to the rulebook. Researchers investigating the features of effective
sign communication have found that experienced users
spontaneously incorporate elements of ASL. Identified
as especially important are the suprasegmental cues
such as facial expression and posture that signal communicative intent (e.g., command, question, declaration, etc.), use of spatial location, and sign directionality (Mallery-Ruganis & Fischer, 1991; Maxwell &
Bernstein, 1985). Some attempts have been made to
systematically incorporate these features into sign
communication and promote their use by hearing
teachers (Akamatsu & Stewart, 1998).
Such enhanced English-based sign systems may be
similar to what Fischer (1998) calls “natural sign systems” or what Lucas and Valli (1990) refer to as “contact signing.” Both these terms describe the signing
that has evolved naturally in the communicative exchanges between deaf and hearing people and that similarly incorporates elements of both English-based
signing and ASL.
Journal of Deaf Studies and Deaf Education 5:1 Winter 2000
The relationship of sign language skills to reading. Even
though these theoretical discussions raise important
issues, the debate can be properly addressed only
through empirical studies. The research showing that
printed words can be encoded in sign supports the use
of sign language as a mode of instruction. It does not,
however, assist in selecting one form of sign language
over another, or allow an investigation of possible compensatory relationships among word encoding, linguistic knowledge, and other higher level skills.
The previously mentioned study by Moores and
Sweet (1990) included measures of both English-based
signing and ASL. Among both deaf students with deaf parents and those with hearing parents, moderate relationships
were found between the measure of English-based signing and reading. No significant relationships were found
between skill in ASL and reading in either group.
The English-based signing and ASL measures
used by Moores and Sweet assessed language skills holistically, using rather general measures. Other studies
have used measures requiring more specific linguistic
knowledge. In a study of 7- to 15-year-old TC students,
Mayberry and Chamberlain (1994) investigated the relationship between sign language skills and reading, using an experimental and a standardized measure of
reading comprehension. The results showed that reading skills were associated with measures of both
English-based signing and ASL, with correlations in
the moderate to strong range. Reading, however, was
not significantly related to spoken language, a finding
that has added significance because the spoken language measure emphasized phonological-level skills. In
the study of a large sample of deaf students (n = 78),
Hoffmeister, DeVilliers, Engen, and Topol (1997) again
found that both English-based signing and ASL skills
were significantly related to reading comprehension.
Because the measures of the two varieties of sign language emphasized different linguistic skills, it is not
possible to compare their relative contributions.
In Padden and Ramsey’s (1998) study of fingerspelling (previously cited), knowledge of ASL was
also found to be significantly related to reading comprehension, although the correlations were somewhat
smaller (about .55) than those observed between fingerspelling and reading (above .70). Strong and Prinz
(1997) tested 155 deaf students and found moderate to
strong correlations between ASL skill and English literacy. Thus, knowledge of both English-based sign and
ASL appears to be related to literacy.
Comparative studies of educational programming. Correlational studies of the relationships among language
skills suffer from an inherent limitation, namely, the inability to determine the direction of influence. This is
especially true in the case of relationships between
English-based signing and reading, where causation
can reasonably be argued to proceed in either (or both)
directions. This might account for the greater consistency in finding significant relationships between
English-based signing and reading. The direction of
causation would seem to be clearer for the relationship
between ASL and reading, as it is difficult to envision
how reading English could facilitate competence in
ASL.
Outcome studies of educational programming are
less equivocal on this point, because they take educational experience into account. One of the most robust
findings in the literature is that deaf children with deaf
parents outperform those with hearing parents on a variety of measures, including reading achievement (see
Kampfe & Turecheck, 1987, for a review). Such findings have long been used to argue for the superiority of
sign language (e.g., Vernon & Koh, 1970). Deaf children with deaf parents, however, differ from those with
hearing parents in other ways that might predispose
them to more favorable outcomes, most notably a hereditary rather than an adventitious etiology, and
greater acceptance of the deaf child by the parent.
Only two studies have compared the educational outcomes associated with the use
of different forms of sign language. In a study of deaf
children with deaf parents, Brasel and Quigley (1977)
found that those whose parents used English-based
signing at home were better readers than those whose
families used only ASL. Luetke-Stahlman (1988) compared the reading achievement of deaf students with
hearing parents across programs using different communication systems. In addition to oral- and ASL-educated groups, she included deaf children using sign
language that varied in the completeness with which
English was encoded. Luetke-Stahlman found that
there were no differences among groups from programs using oral English, ASL, or completely encoded
English sign, whereas those using less complete sign
representations of English did less well. She concluded
that completeness of linguistic representation is more
important than the particular language system used.
Although dealing with writing rather than reading,
a study by Singleton, Supalla, Litchfield, and Schley
(1998) adds to the discussion. They compared the written language of deaf elementary age children from
three different programs: a program using ASL, a “traditional” residential school program, and a TC program. In the upper elementary years (ages 9–12), students from the ASL program had better written
language overall than those from the other two programs. They did not find, however, a direct correlation
between ASL skill and written language. It is possible
that their ASL measure did not adequately capture the
skills of the students. It is also possible that other aspects of the ASL-based program were responsible for
the students’ enhanced skills.
These findings show that a relationship exists between sign language skills and reading, suggesting that
sign language skills can compensate for deaf students’
deficiencies in spoken English. The evidence, however,
provides no basis for choosing between English-based
sign and ASL and fails to clarify exactly how sign language skills promote literacy. Because it is more easily
processed, ASL would seem to have an advantage in
promoting general language skills, enhancing world
knowledge, developing metalinguistic and metacognitive skills, as well as providing a comprehensive and
efficient communication system for explaining the
meaning of text. English-based sign, on the other hand,
provides a more direct route for teaching English vocabulary and grammar. Both English-based sign and
ASL would appear equally well suited for developing
sign-based word recognition skills as a substitute for
phonological encoding.
Conclusions
To answer the question posed by the title of this article:
no one knows yet how deaf children learn to read. And
the jury is still out on whether they use processes that
are qualitatively similar or dissimilar to those used by
hearing children, for whom printed language is primarily an alternative representation of spoken language. This is essentially the crux of the matter: Since
few deaf children succeed in acquiring functional levels
of spoken language, it is perhaps surprising that they
learn to read at all.
Auditory/oral approaches to deaf education utilize
a remedial approach, attempting through intense and
carefully structured teaching to raise spoken language
competencies. Proponents of sign language adopt a
compensatory approach, attempting to build an alternative language substratum to support interpersonal
communication, cognitive development, and literacy.
Those arguing that sign language will facilitate the acquisition of literacy either implicitly or explicitly appeal to some notion of linguistic interdependence,
which is the hypothesis that skills obtained in one
method of communication (i.e., English-based sign,
ASL) will transfer to a second (i.e., printed English).
Arguments pro and con have typically considered this
issue globally, without addressing what component
skills are required for reading, what skills might transfer from one communication system to another, and
whether different strategies are required to enhance the
development of different component skills.
This review of the literature has identified four parameters of communication systems that are central to
the discussion, and the current controversy can be conceptualized as a disagreement over their relative importance. These parameters describe characteristics of
communication systems that are relevant to the acquisition of literacy by deaf persons. Two of these concern
the relationship between interpersonal communication
systems and print and are referred to here as codability
and structural isomorphism. The other two—accessibility
and processibility—concern the extent to which communication systems are adapted to the biological capabilities of deaf learners.
Although current theories of reading recognize the
importance of top-down processes (e.g., semantics,
syntax, general knowledge), much of the current research focuses on the role of phonological decoding
(i.e., the conversion of print to a phonological representation in short-term memory). A considerable body
of research indicates that skilled deaf readers also rely
on phonological encoding. From Conrad’s (1979) early
study to more recent investigations (e.g., Hanson et al.,
1991), studies of short-term memory have shown that
deaf students, like their hearing peers, tend to confuse
printed words that sound alike. Furthermore, the degree to which deaf students use a phonological code
predicts their level of reading comprehension. However, it is possible that the phonological code used by
these skilled deaf readers is an outcome of learning to
read, rather than a prerequisite. Most of the extant
studies have used older children or college students,
and there are few studies of beginning deaf readers.
Clarification requires more research with young readers as well as longitudinal studies that can disentangle
the prerequisites of literacy from its outcomes.
Needless to say, the development of phonological
skills by individuals with limited hearing poses enormous challenges. Lichtenstein (1998) has proposed
that the phonological representations used by deaf
readers may differ from those used by hearing readers.
Along with others, he also proposes that deaf readers
might utilize representations based on articulation or
fingerspelling. There is, however, no direct evidence
that this occurs.
Since Conrad’s (1979) early work, it has been
known that deaf students may develop representations
of print based on orthography, although evidence consistently indicates that this strategy is less effective
than phonological encoding. Studies have addressed
orthographic encoding fairly globally, investigating the
possible use of relatively macroscopic representations
of print (e.g., word length, predominant letter shape,
and occurrence of double letters). We need investigations of encoding at the level of the letter or syllable
that can clearly distinguish among phonological, articulatory, and visual strategies. It is possible that print is
encoded letter by letter (or syllable by syllable), and
that phonological encoding (or orthographic, articulatory, fingerspelling, or sign encoding) is merely a means
of rehearsal (cf. Stanovich, 1991).
There is growing evidence that deaf children may
encode print via sign-based representations, and that
these can mediate skilled reading. Recent research has
utilized more carefully selected samples of students
with sufficient sign language experience to provide an
adequate visual language substratum for reading. Additional replications are needed of these recent findings, as well as more direct evidence that signs are in-
deed used to encode during reading. Spoken language,
however, retains the unique advantage of being directly
encodable into print.
Deaf readers nonetheless use multiple encoding strategies, as is shown most clearly by Lichtenstein’s (1998)
work. He sees phonological encoding as providing the
main representation of print but offers evidence that
deaf readers selectively supplement their limited abilities with both orthographic and sign codes. This is a
complex achievement, as it requires the integration of
information from auditory and visual memory.
Encoding of the surface features of print (whether
using phonological, sign, or visual codes) may be important because it provides an efficient way of holding
text in short-term memory while it is operated upon by
higher level processors. Stanovich’s (1991) compensation hypothesis suggests that strengthening these other
areas of functioning may enhance literacy. This notion
provides an additional rationale for using sign language
as part of a literacy program. Due to its enhanced accessibility, sign language provides a means for bypassing
deaf children’s poor auditory skills and providing them
with a functional linguistic system. English-based sign
is the choice of writers such as Paul (1996a) and Mayer
and Wells (1996), at least with regard to the support of
English literacy. In both cases, however, they argue on
the basis of its structural isomorphism to printed
English, an isomorphism that exists primarily at the
morpho-syntactic level.
Other writers advocate the use of ASL because it is
a natural language that is better adapted to the visual
modality. Since it is more processible by deaf individuals, ASL offers several possibilities: One is that deaf
students may be able to develop a stronger semantic
and syntactic base in ASL than they appear to do in
English-based sign. Although the specific linguistic
features of ASL may not transfer to English, ASL-using students may develop more effective semantic
and syntactic strategies that can be applied to comprehending printed English. Because it is more easily
learned and comfortable to use, ASL should also facilitate instruction, resulting in increased knowledge of
the world and better developed metalinguistic skills. It
is important to remember that distinctions among levels of language are artificial. Although conceptually distinct, semantics, syntactics, metalinguistics, and so on, all interact in any communicative act.

Figure 1. Alternative paths to literacy for deaf students.

An important
aspect of linguistic competence is integrating information from these various systems. As a complete language, ASL may also provide enhanced opportunity to
develop such integrative language processing skills.
The results of several studies suggest that skilled deaf
readers are strategic, selectively recoding print into
speech and sign in order to support the derivation of
meaning (Lichtenstein, 1998; Padden & Ramsey, 1998).
Experience with a natural language that is well-suited
to their processing capabilities might facilitate the development of these executive-level skills.
These competing viewpoints need to be subjected
to more careful empirical testing by studying the relationship of various component skills to reading. Currently a few studies show significant relationships between reading and both English-based signing and
ASL. Evaluation studies provide another window on
these issues. Most studies find that orally educated students have reading skills superior to those of students in total communication programs. While it is true that A/O students typically have more hearing and are socioeconomically better off than those in other programs, these findings still demonstrate the advantage that spoken language derives from its direct encodability into print.
Numerous studies have compared groups of deaf
students exposed to sign language. The robust finding
that those with deaf parents have better language and
academic skills than those with hearing parents suggests that the enhanced processibility of ASL conveys
an advantage, a conclusion supported by several other
studies finding an advantage for ASL-educated students. Other studies have found an advantage for students exposed to English-based sign and thus argue for
the importance of its structural isomorphism to
printed English.
Currently there are more questions than answers,
yet the literature suggests several possible paths to literacy. The obvious one is that followed by hearing children, with spoken language learned first, and printed
language decoded to speech from which it is derived.
After skilled comprehension has been achieved,
printed English, in turn, serves to further develop spoken language. This is represented in Figure 1 as a bidirectional arrow with two feathers in the speech-to-print direction, and one in the reverse. A second
possible path is from English-based sign to printed English. As in the former route, there is again feedback
from print to interpersonal communication. A third
possible path proceeds from ASL to print with
English-based sign as an intermediary. A fourth possible path is directly from ASL to print. Padden and
Ramsey (1998) argue that some deaf readers learn to
associate specific elements of ASL with English print.
Their preliminary observations of several deaf children
showed that good readers developed these associations
at the sentence level or even to larger units of text.
Researchers are coming to the view that all of these
paths operate to some extent, depending on children’s
inherent capabilities and language experience. As Paul
(1997) states: “There is no best method for teaching
students who are deaf or hard of hearing to read, and
becoming fixated on one technique is not only unsupported by research, but also it might be detrimental to
students’ progress.” Nelson and Camarata (1996) suggest that, rather than adopting single-strategy solutions, we need to search for tricky mixes of instructional
strategies that address the unique learning needs of
deaf students. A similar call has been made by Stewart
(1997), who argues that research is needed to delineate
the best way in which to combine various forms of sign
language in the classroom, taking into account specific
student needs and teacher capabilities.
Instructional decisions may need to take into account the particular skill being targeted. For example,
for one student, building strong decoding skills may be
best facilitated through a phonics-based strategy, English syntax through English-based sign, and general
knowledge and metacognitive skills through instruction in ASL. For another student, a different profile of
strategies may be required.
Most of the research on reading has proceeded
from an intrapersonal stance, that is, focusing on the
skills required for literacy and their utilization during
reading. It is also important, however, to consider interpersonal factors, that is, teaching. Since the 1970s,
research on the development of interpersonal communication has emphasized the importance of the sociolinguistic context in which it is situated. Language
learning is seen as an interpersonal process, as well as a
cognitive and linguistic one. Similarly, learning to read
needs to be situated in an interactional context. Some
researchers are now attempting to delineate the instructional strategies that may contribute uniquely to
the literacy development of deaf children. The ubiquity of speech-based codes as evidenced in much of the
research may reflect the manner in which children have
been taught, not inherent cognitive requirements of
reading as a task.
Padden and Ramsey (1998) describe a unique type
of classroom discourse that teaches the associations between printed words and signs. These “chaining structures” use fingerspelling as an intermediary form to
bridge and highlight equivalences across languages.
They note that deaf parents at home use similar strategies. Singleton et al. (1998) describe the use of written
glosses for ASL (either designed by teachers or developed spontaneously by children themselves) as providing a bridge from ASL to print. Kuntze (1998) summarizes several studies in which the interactions between
deaf children and hearing adults were seen to combine
English-based sign, ASL, and print.
Nelson (1998) talks about the importance of “supportive social-interactive processes,” and outlines what
he calls a rare event transactional model of learning.
The core of the model is a dialectic engaged in by child
and teacher, in which there is a recursive recasting of
text. This can begin with the presentation of an initial
text by either child or teacher in any language modality
and proceeds through successive elaborations and
translations. He advocates periodic informal assessment of how children respond to various types of
bridging sequences.
Understanding and facilitating the acquisition of
literacy by deaf children clearly require attention to a
multitude of factors. As complex as this developmental
task is in hearing children, it is rendered even more
complex by the biological constraints attendant upon
deafness and the complex sociocultural milieu within
which deaf children live and grow. Even though there
are parallels between deaf children and hearing children from minority language groups, these parallels break down
because most deaf children can never acquire facility
in the majority language. The language that suits their
capabilities—namely, one of the natural sign languages—is not the language of the majority. The task
of bridging these realities is truly challenging. Research
is only beginning to elucidate the manner in which deaf
children access printed language and the instructional
strategies that best facilitate their learning.
These issues need to be tackled on a broad front.
Experimental studies of encoding in working memory
will continue to enhance our understanding of how beginning and skilled deaf readers process print. Naturalistic studies of instructional interaction can identify
spontaneous teaching strategies used by teachers and
parents. Evaluation studies can document the long-term outcomes of carefully implemented and comprehensive educational programs. The knowledge gained
from these investigations will advance our understanding and enhance the ability to provide each deaf child
with the most appropriate programming.
Received April 23, 1999; revised July 20, 1999; accepted July
27, 1999
References
Akamatsu, T., & Stewart, D. (1998). Constructing simultaneous communication: The contributions of natural sign
language. Journal of Deaf Studies and Deaf Education, 3,
302–319.
Andrews, J., & Mason, J. M. (1991). Strategy usage among deaf
and hearing readers. Exceptional Children, 57, 536–545.
Battison, R. (1978). Lexical borrowing in American Sign Language.
Silver Spring, MD: Linstok Press.
Binet, A., & Simon, T. (1909). An investigation concerning the
value of the oral method. Reprinted in American Annals of
the Deaf, 1997, 142, 35–45.
Berent, G. P. (1993). Improvements in the English syntax of deaf college students. American Annals of the Deaf, 138, 55–61.
Brasel, K. E., & Quigley, S. P. (1977). Influence of certain language and communication environments in early childhood on the development of language in deaf individuals. Journal of Speech and Hearing Research, 20, 95–107.
Brown, P., & Brewer, L. C. (1996). Cognitive processes of deaf
and hearing skilled and less skilled readers. Journal of Deaf
Studies and Deaf Education, 1, 263–270.
Carlin, J. (1859). Words recognized as units: Systematic Signs.
Reprinted in D. Moores (Ed.), The American Annals of the
Deaf: 150th Anniversary Issue, 142, 18–20.
Caselli, M. C., & Volterra, V. (1990). From communication to
language in hearing and deaf children. In V. Volterra and
C. J. Erting (Eds.), From gesture to language in hearing and
deaf children. Berlin: Springer-Verlag, pp. 263–277.
Chalifoux, L. M. (1991). The implications of congenital deafness for working memory. American Annals of the Deaf, 136,
292–299.
Chincotta, M., & Chincotta, D. (1996). Digit span, articulatory
suppression, and the deaf: A study of the Hong Kong Chinese. American Annals of the Deaf, 141, 252–257.
Conlin, D., & Paivio, A. (1975). The associative learning of the
deaf: The effects of word imagery and signability. Memory & Cognition, 3, 335–340.
Conrad, R. (1979). The deaf school child: Language and cognitive
function. London: Harper & Row.
Cornett, O. (1967). Cued speech. American Annals of the Deaf,
112, 3–13.
Craig, H. B., & Gordon, H. W. (1988). Specialized cognitive
function and reading achievement in hearing-impaired adolescents. Journal of Speech and Hearing Disorders, 53, 30–41.
Cummins, J. (1989). A theoretical framework of bilingual special
education. Exceptional Children, 56, 111–119.
Cummins, J. (1991). Interdependence of first- and secondlanguage proficiency in bilingual children. In E. Bialystok
(Ed.), Language processing in bilingual children. Cambridge:
Cambridge University Press, pp. 70–88.
Dalby, K., & Letourneau, C. (1991, July). Survey of communication history of deaf adults. Paper presented at the biennial
meeting of the Association of Canadian Educators of the
Hearing Impaired, Calgary, Alberta.
de l’Épée, C. M. (1784/1984). The true method of educating the
deaf, confirmed by much experience. In H. Lane (Ed.), The
deaf experience: Classics in language and education. Cambridge, MA: Harvard University Press, pp. 49–72.
DeVilliers, P., & Pomerantz, S. (1992). Hearing-impaired students learning new words from written context. Applied Psycholinguistics, 13, 409–431.
Erting, C. J., Thumann-Prezioso, C., & Benedict, B. S. (1997,
April). ASL/English bilingualism in the preschool years: The
role of fingerspelling. Paper presented at the Developmental
Perspectives on Deaf Individuals: A Conference in Honor of
Kathryn P. Meadow-Orlans, Gallaudet University, Washington, D.C.
Everhart, V. S., & Marschark, M. (1988). Linguistic flexibility in
signed and written language productions of deaf children.
Journal of Experimental Child Psychology, 46, 174–193.
Fischer, S. D. (1998). Critical periods for language acquisition:
Consequences for deaf education. In A. Weisel (Ed.), Issues
unresolved: New perspectives on language and deaf education
(pp. 9–26). Washington, DC: Gallaudet University Press.
Gallaudet, T. H. (1848). On the natural use of signs and its value
and uses in the instruction of the deaf and dumb. Reprinted
in D. Moores (Ed.), The American Annals of the Deaf: 150th
Anniversary Issue, 142, 1–7.
Gee, J. P., & Goodhart, W. (1988). American Sign Language and
the human biological capacity for language. In M. Strong
(Ed.), Language learning and deafness (pp. 48–74). New York: Cambridge University Press.
Geers, A., & Moog, J. (1989). Factors predictive of the development of literacy in profoundly hearing-impaired adolescents. Volta Review, 91, 69–86.
Geers, A. E., & Schick, B. (1988). Acquisition of spoken and signed English by hearing-impaired children of hearing-impaired or hearing parents. Journal of Speech and Hearing Disorders, 53, 136–143.
Gormley, K. (1982). The importance of familiarity in hearing-impaired readers’ comprehension of text. Volta Review, 84,
71–80.
Gormley, K. A., & Franzen, A. M. (1978). Why can’t the deaf
read? Comments on asking the wrong question. American
Annals of the Deaf, 123, 542–547.
Goulet, W. (1990). My past personal experience. Canadian Journal of the Deaf, 3, 101–104.
Hamilton, H., & Holzman, T. G. (1989). Linguistic encoding in
short-term memory as a function of stimulus type. Memory & Cognition, 17, 541–550.
Hanson, V. L. (1982). Short-term recall by deaf signers of American Sign Language: Implications of encoding strategy for
order recall. Journal of Experimental Psychology, 8, 572–583.
Hanson, V. L. (1991). Phonological processing without sounds.
In S. Brady & D. Shankweiler (Eds.), Phonological processes
in literacy: A tribute to Isabelle Y. Liberman (pp. 153–161).
Hillsdale, NJ: Erlbaum.
Hanson, V. L., Goodell, E. W., & Perfetti, C. A. (1991). Tongue-twister effects in the silent reading of hearing and deaf college students. Journal of Memory and Language, 30, 319–330.
Hanson, V. L., Liberman, I. Y., & Shankweiler, D. (1984). Linguistic coding by deaf children in relation to beginning reading success. Journal of Experimental Child Psychology, 37,
378–393.
Hoemann, H. (1972). Children’s use of fingerspelling versus sign
language to label pictures. Exceptional Children, 39, 161–162.
Hoffmeister, R. J., DeVilliers, P. A., Engen, E., & Topol, D.
(1997). English reading achievement and ASL skills in deaf
students. Proceedings of the Annual Boston University Conference on Language Development, 21, 307–318.
Holt, J. A. (1994). Classroom attributes and achievement test
scores for deaf and hard of hearing students. American Annals of the Deaf, 139, 430–437.
Huang, H. S., & Hanley, J. R. (1994). Phonological awareness
and visual skills in learning to read Chinese and English.
Cognition, 54, 73–98.
Israelite, N., Ewoldt, C., & Hoffmeister, R. (1992). Bilingual/bicultural education for deaf and hard-of-hearing students. Toronto, Ontario: MGS Publications Services.
Jackson, D. W., Paul, P. V., & Smith, J. C. (1997). Prior knowledge and reading comprehension ability of deaf adolescents.
Journal of Deaf Studies and Deaf Education, 2, 172–184.
Jensema, C. J., Karchmer, M. A., & Trybus, R. J. (1978). The rated
speech intelligibility of hearing impaired children: Basic relationships and a detailed analysis. Series R, Number 6. Washington,
DC: Office of Demographic Studies, Gallaudet College.
Johnson, R. E., Liddell, S., & Erting, C. (1989). Unlocking the
curriculum: Principles for achieving access in deaf education.
Gallaudet Research Institute Working Paper 89–3. Washington, DC: Gallaudet University.
Kampfe, C. M., & Turecheck, A. G. (1987). Reading achievement of prelingually deaf students and its relationship to parental method of communication: A review of the literature.
American Annals of the Deaf, 132, 11–15.
Kelly, L. P. (1993). Recall of English function words and inflections by skilled and average deaf readers. American Annals of
the Deaf, 138, 288–296.
Kelly, L. P. (1996). The interaction of syntactic competence and
vocabulary during reading by deaf students. Journal of Deaf
Studies and Deaf Education, 1, 75–90.
Kelly, L. P. (1998). Using silent motion pictures to teach complex syntax to adult deaf readers. Journal of Deaf Studies and
Deaf Education, 3, 217–230.
Krakow, R. A., & Hanson, V. L. (1985). Deaf signers and serial
recall in the visual modality: Memory for signs, fingerspelling, and print. Memory & Cognition, 13, 265–272.
Kuntz, M. (1998). Literacy and deaf children: The language
question. Topics in Language Disorders, 18, 1–17.
LaSasso, C., & Davey, B. (1987). The relationship between lexical knowledge and reading comprehension for prelingually,
profoundly hearing-impaired students. Volta Review, 89,
211–220.
LaSasso, C. (1985). The visual matching test-taking strategies used
by deaf readers. Journal of Speech and Hearing Research, 28,
2–7.
Leybaert, J. (1993). Reading in the deaf: The roles of phonological codes. In M. Marschark & M. D. Clark (Eds.), Psychological perspectives on deafness. Hillsdale, NJ: Erlbaum.
Leybaert, J., & Alegria, J. (1993). Is word processing involuntary
in deaf children? British Journal of Developmental Psychology,
11, 1–29.
Leybaert, J., & Charlier, B. (1996). Visual speech in the head:
The effect of cued-speech on rhyming, remembering and
spelling. Journal of Deaf Studies and Deaf Education, 1,
234–248.
Lichtenstein, E. H. (1998). The relationships between reading
processes and English skills of deaf college students. Journal
of Deaf Studies and Deaf Education, 3, 80–134.
Lovett, M. W., Borden, S. L., DeLuca, T., Lacerenza, L., Benson, N. J., & Brackstone, D. (1994). Treating the core deficits of developmental dyslexia: Evidence of transfer of learning after phonologically- and strategy-based reading training programs. Developmental Psychology, 30, 805–822.
Lucas, C., & Valli, C. (1990). ASL, English, and contact signing.
In C. Lucas (Ed.), Sign language research: Theoretical issues
(pp. 288–307). Washington, DC: Gallaudet University Press.
Luetke-Stahlman, B. (1988). Educational ramifications of various instructional inputs for hearing impaired students.
ACEHI Journal, 14, 105–121.
MacSweeney, M., Campbell, R., & Donlan, C. (1996). Varieties
of short-term memory coding in deaf teenagers. Journal of
Deaf Studies and Deaf Education, 1, 249–259.
Mallery-Ruganis, D., & Fischer, S. (1991). Characteristics that
contribute to effective simultaneous communication. American Annals of the Deaf, 136, 401–408.
Marmor, G., & Petitto, L. (1979). Simultaneous communication
in the classroom: How well is English grammar represented?
Sign Language Studies, 23, 99–136.
Mason, D., & Ewoldt, C. (1996). Whole language and deaf
bilingual-bicultural education naturally. American Annals
of the Deaf, 141, 293–298.
Maxwell, M., & Bernstein, M. (1985). The synergy of sign and
speech in simultaneous communication. Applied Psycholinguistics, 6, 63–82.
Mayberry, R., & Chamberlain, C. (1994, November). How ya
gonna read da language if ya dun speak it? Reading development
in relation to sign language comprehension. Paper presented at
the Boston University Conference on Language Development, Boston, MA.
Mayberry, R. I., & Waters, G. (1987, April). Deaf children’s recognition of written words: Is fingerspelling the basis? Paper presented to the Society for Research in Child Development,
Baltimore, MD.
Mayer, C., & Akamatsu, C. T. (1999). Bilingual-bicultural models of literacy education for deaf students: Considering the
claims. Journal of Deaf Studies and Deaf Education, 4, 1–8.
Mayer, C., & Wells, G. (1996). Can the linguistic interdependence theory support a bilingual-bicultural model of literacy
education for deaf students? Journal of Deaf Studies and Deaf
Education, 1, 93–107.
Meier, R. (1991). Language acquisition by deaf children. American Scientist, 79, 60–79.
Moores, D. F., & Sweet, C. (1990). Factors predictive of school
achievement. In D. F. Moores & K. P. Meadow-Orlans (Eds.), Educational and developmental aspects of deafness (pp. 154–201). Washington, DC: Gallaudet University Press.
Mounty, J. L. (1989). Beyond grammar: Developing stylistic variation when the input is diverse. Sign Language Studies,
Spring, 43–62.
Musselman, C. (1990). The relationship between measures of
hearing loss and speech intelligibility in young deaf children. Journal of Childhood Communication Disorders, 13,
193–205.
Musselman, C., & Akamatsu, C. T. (in press). The interpersonal
communication skills of deaf adolescents and their relationship to communication history. Journal of Deaf Studies and
Deaf Education.
Musselman, C., & Kircaali-Iftar, G. (1996). The development of
spoken language in deaf children: Explaining the unexplained variance. Journal of Deaf Studies and Deaf Education,
1, 108–121.
Musselman [Reich], C., & Reich, P. A. (1976). Communication
patterns in adult deaf. Canadian Journal of Behavioral Science, 8, 56–67.
Nelson, K. E. (1998). Toward a differentiated account of facilitation of literacy development and ASL in deaf children. Topics in Language Disorders, 18, 73–88.
Nelson, K. E., & Camarata, S. M. (1996). Improving English literacy and speech-acquisition learning conditions for children with severe to profound hearing impairments. Volta
Review, 98, 17–42.
Newport, E. L., & Meier, R. P. (1985). The acquisition of American Sign Language. In D. Slobin (Ed.), The crosslinguistic study of language acquisition: Vol. 1. The data (pp. 881–938). Hillsdale, NJ: Erlbaum.
Padden, C. A. (1993). Lessons to be learned from the young deaf
orthographer. Linguistics and Education, 5, 71–86.
Padden, C., & Humphries, T. (1988). Deaf in America: Voices
from a culture. Cambridge, MA: Harvard University Press.
Padden, C., & Ramsey, C. (1998). Reading ability in signing deaf
children. Topics in Language Disorders, 18, 30–46.
Parasnis, I. (1983). Visual perceptual skills and deafness: A research review. Journal of the Academy of Rehabilitative
Audiology, 16, 161–181.
Parasnis, I., & Samar, V. J. (1982). Visual perception of verbal
information by deaf people. In D. Sims, G. Walter & R.
Whitehead (Eds.), Deafness and communication: Assessment
and training (pp. 53–71). Baltimore: Williams & Wilkins.
Parasnis, I., & Whitaker, H. A. (1992, April). Do deaf signers access phonological information in English words? Evidence from
rhyme judgments. Paper presented at the American Educational Research Association Annual Meeting, San Francisco, CA.
Paul, P. V. (1996a). First- and second-language English literacy.
Volta Review, 98, 6–16.
Paul, P. V. (1996b). Reading vocabulary knowledge and deafness.
Journal of Deaf Studies and Deaf Education, 1, 3–15.
Paul, P. V. (1997). Reading for students with hearing impairments: Research review and implications. Volta Review, 99,
73–87.
Paul, P. V. (1998). Literacy and deafness: The development of reading, writing, and literate thought. Toronto: Allyn & Bacon.
Paul, P. V., & Gustafson, G. (1991). Hearing-impaired students’
comprehension of high-frequency multi-meaning words.
Remedial and Special Education, 12, 52–62.
Quigley, S. P. (1969). The influence of fingerspelling on the development of language, communication, and educational achievement
in deaf children. Urbana, IL: Institute for Research on Exceptional Children.
Quigley, S. P., Power, D. J. & Steinkamp, M. W. (1977). The language structure of deaf children. Volta Review, 79, 73–84.
Quinn, L. (1981). Reading skills of hearing and congenitally deaf
children. Journal of Experimental Child Psychology, 32,
139–161.
Reich, P., & Bick, M. (1977). How visible is visible English? Sign
Language Studies, 14, 59–72.
Robbins, N. L., & Hatcher, C. W. (1981). The effects of syntax
on the reading comprehension of hearing-impaired children. Volta Review, 83, 105–115.
Rogers, W. T., Leslie, P. T., Clarke, B. R., Booth, J. A., & Horvath, A. (1978). Academic achievement of hearing impaired
students: Comparison among selected subpopulations. B.C.
Journal of Special Education, 2, 183–213.
Sacks, O. (1989). Seeing voices: A journey into the world of the deaf. Berkeley: University of California Press.
Schildroth, A., & Karchmer, M. (1986). Deaf children in America.
San Diego, CA: College-Hill.
Schirmer, B. R. (1985). An analysis of the language of young
hearing-impaired children in terms of syntax, semantics,
and use. American Annals of the Deaf, 130, 15–19.
Singleton, J. L., Supalla, S. J., Litchfield, S., & Schley, S. (1998).
From sign to word: Considering modality constraints in
ASL/English bilingual education. Topics in Language Disorders, 18, 16–29.
Stanovich, K. E. (1980). Toward an interactive-compensatory
model of individual differences in the development of reading fluency. Reading Research Quarterly, 16, 32–64.
Stanovich, K. E. (1991). Cognitive science meets beginning
reading. Psychological Science, 2, 70–81.
Stanovich, K. E. (1993). Does reading make you smarter? Literacy and the development of verbal intelligence. In H. W.
Reese (Ed.), Advances in child development and behavior (vol.
2). San Diego, CA: Academic.
Stanovich, K. E. (1994). Constructivism in reading education.
Journal of Special Education, 28, 259–274.
Stewart, D. (1993). Bi-bi to MCE? American Annals of the Deaf,
138, 331–337.
Stokoe, W. C. (1975). The use of sign language in teaching English. American Annals of the Deaf, 120, 417–421.
Strassman, B. K. (1997). Metacognition and reading in children
who are deaf: A review of the research. Journal of Deaf Studies and Deaf Education, 2, 140–149.
Strong, M., & Prinz, P. M. (1997). A study of the relationship
between American Sign Language and English literacy.
Journal of Deaf Studies and Deaf Education, 2, 37–46.
Treiman, R., & Hirsh-Pasek, K. (1983). Silent reading: Insights
from second-generation deaf readers. Cognitive Psychology,
15, 39–65.
Vernon, M., & Koh, S. (1970). Effects of manual communication on deaf children’s educational achievement, linguistic competence, oral skills, and psychological development. American
Annals of the Deaf, 115, 527–536.
Waters, G. S., & Doehring, D. B. (1990). Reading acquisition in
congenitally deaf children who communicate orally: Insights from an analysis of component reading, language, and
memory skills. In T. H. Carr & B. A. Levy (Eds.), Reading
and its development: Component skills approaches (pp. 323–
373). San Diego, CA: Academic.
Wilbur, R. B. (1987). American Sign Language: Linguistic and applied dimensions (2nd ed.). Toronto: College-Hill.
Wilcox, S., & Corwin, J. (1990). The enculturation of BoMee:
Looking at the world through deaf eyes. Journal of Childhood
Communication Disorders, 13, 63–72.
Wilson, M., Bettger, J. G., Niculae, I., & Klima, E. S. (1997).
Modality of language shapes working memory: Evidence
from digit span and spatial span in ASL signers. Journal of
Deaf Studies and Deaf Education, 2, 150–160.
Wilson, M., & Emmorey, K. (1997). Working memory for sign
language: A window into the architecture of the working
memory system. Journal of Deaf Studies and Deaf Education,
2, 121–130.