Do Deaf Children Use Phonological Syllables as Reading Units?

Catherine Transler, Université de Bourgogne
Jacqueline Leybaert, Université Libre de Bruxelles
Jean-Emile Gombert, Université de Haute Bretagne

This study aimed at examining whether deaf children process written words on the basis of phonological units. In French, the syllable is a phonologically and orthographically well-defined unit. French deaf children and hearing children matched on word recognition level were asked to copy written words and pseudo-words. The number of glances at the item, copying duration, and the locus of the first segmentation (i.e., after the first glance) within the item were measured. The main question was whether the segments copied by the deaf children corresponded to syllables as defined by phonological and orthographic rules. The results showed that deaf children, like hearing children, used syllables as copying units when the syllable boundaries were marked both by orthographic and phonological criteria. However, in a condition in which orthographic and phonological criteria were differentiated, the deaf children did not perform phonological segmentations while the hearing children did. We discuss two explanatory hypotheses. First, items in this condition were difficult to decode for deaf children; second, orthographic units were probably easier to process for deaf children than phonological units because of a lack of automaticity in their phonological conversion processes for pseudo-words. Finally, incidental observations during the experimental task raised the question of the use of fingerspelled units.

This research was supported by grants from the Centre National de la Recherche Scientifique (C.N.R.S.), France (Programme Europe, Département des Sciences de l'Homme et de la Société); from the Direction Générale de la Recherche Scientifique, Communauté Française de Belgique (A.R.C. 96/01–203); and from the Fonds Houtman (Belgium). We thank Virginie Lasjuilliarias and Delphine Pourquery for their assistance in testing the children as well as the speech therapists Sylvie Chanusseau and Michelle Munsch for their help in establishing the speech intelligibility level of deaf children. Correspondence should be sent to Catherine Transler, L.E.A.D./C.N.R.S., 6 Bd Gabriel, F-21000 Dijon, France (email: [email protected]). © 1999 Oxford University Press.

One of the components of reading activity in normal hearing children consists of decoding words. To decode a word, the child must establish the correspondence between the written string and the oral string, that is, the phonological form of the item. More precisely, this cognitive activity has been described as a grapho-phonemic assembling process: graphemes (e.g., b, d, th, oo) are associated with the corresponding phonemes (phonemes are the smallest units that can differentiate two spoken words; e.g., /b/, /d/, /θ/, /u/). This assembling process is important because it is a necessary step in reading development in language communities with alphabetic writing systems (Frith, 1985; Goswami & Bryant, 1990; Harris & Coltheart, 1986; Marsh, Friedman, Welch, & Desberg, 1981; Morton, 1989; Seymour, 1997). However, the existence of such an assembling process among deaf children has been questioned. Some authors have argued that deaf children cannot "think of the sounds" of language. Consequently, they believe that deaf people cannot associate written forms with their phonological correspondents.
Indeed, some deaf children have failed to exhibit experimental effects that reveal phonological assembling1 during reading whereas hearing children have. This has been observed most frequently in lexical decision tasks (Burden & Campbell, 1994; Harris & Beech, 1994; Waters & Doehring, 1990). Nevertheless, prelingually severe and profoundly deaf children have shown a sensitivity to certain phonological characteristics of words in various other reading, spelling, and memory tasks. Experiments based on short-term memory paradigms have shown effects of the phonological characteristics of the list to be remembered (Conrad, 1979; Hanson, 1982; Hanson, Liberman, & Shankweiler, 1984; Lichtenstein, 1998; Reynolds, 1986). Spelling paradigms have revealed the existence of phonological processes: deaf youngsters are sensitive to sound-to-spelling regularity2 (Burden & Campbell, 1994; Dodd, 1980, with mixed results; Hanson, Shankweiler, & Fisher, 1983; Leybaert & Alegria, 1995); their errors also revealed the presence of phonological processes (Padden, 1993). In reading, different paradigms have also revealed the intervention of phonological processes in deaf children (see Leybaert, 1993, and Marschark, 1993, for reviews): letter cancellation tasks (Chen, 1976; Dodd, 1987; Gibbs, 1989; Leybaert, 1980; Quinn, 1981; but see also Locke, 1978, for negative results); probe letter recognition tasks (Hanson, 1986); reading aloud paradigms (Leybaert, 1993; but also see Beggs, Breslaw, & Wilkinson, 1982, for contradictory results in younger deaf readers). Finally, evidence for phonological coding has also been provided by the Stroop paradigm,3 suggesting that the phonological form of the written words is processed automatically by some deaf people (Leybaert & Alegria, 1993; Leybaert, Alegria, & Fonck, 1983).

As a matter of fact, these results are not contradictory but complementary because several factors influence the emergence of phonological processes in deaf people. The emergence of assembling processes in reading depends on the individual characteristics of the deaf people who are involved in the study. Indeed, the utilization of phonological assembling processes in reading depends on the previous development of children's sensitivity to the phonological structure of environmental spoken language (Gombert, 1992). But this development varies greatly among deaf children. The assembling process also depends on the experimental paradigms used (for a review, see Campbell, 1994) and to some extent on the material (unfamiliar words or pseudo-words mobilized more phonological strategies than familiar words). Our study was designed to take this issue further by investigating the reading unit characteristics of deaf participants. To this end, deaf children were faced with a situation they often meet at school: they simply had to copy written words and pseudo-words (pseudo-words can be pronounced as words but have no meaning).

Reading Units for Hearing and for Deaf People

The assembling process described above does not consist only of establishing the correspondence between one grapheme and one phoneme. A simultaneous correspondence may also be established between multiple graphemes and multiple phonemes. These grapheme groups are not formed randomly. On the contrary, they are organized into reading units.
The existence of intermediate reading units between the whole word and the letter level is established and accepted by many specialists (Prinzmetal, Treiman, & Rho, 1986; Santa, Santa, & Smith, 1977). Most authors agree that reading units correspond to morpho-phonological units (syllables; rhymes, i.e., the end of a syllable composed of the vowel possibly plus the following consonants; morphemes; etc.) that are important in oral language. This phenomenon is particularly obvious for children: for instance, in English children the reading aloud of pseudo-words does not depend only on regular grapho-phonemic assembling; their reading is also significantly influenced by the earlier pronunciation of words sharing the same rime (Goswami, 1988, 1993). Reading in teenagers and adults is a matter of considerable debate: do reading units correspond to phonological units or do they correspond to orthographic units? Indeed, some authors have argued that reading units are defined by the statistical characteristics of the letter strings of written language, that is, orthographic redundancy (Seidenberg, 1987; Seidenberg & McClelland, 1989). Conversely, Rapp (1992) has shown that phonological units (syllables in this case) produced a facilitation effect on word reading when the frequency of the letter strings was systematically controlled. This debate indicates that the question of how reading units are related to phonological units can be studied, provided that bigram frequencies are taken into account.

If we consider that reading units are linked to phonological units in people who can hear, we may wonder what kind of units are processed by deaf people in whom the audio-oral process of language is impaired. To our knowledge, only studies in English have raised the problem of reading units in deaf children. Gibson, Shurcliff, and Yonas (1970) asked American deaf and hearing participants to write down sequences of letters presented for a duration of 100 msec. The deaf participants were 34 youngsters at Gallaudet College with a congenital or very early "maximal hearing loss." Both groups achieved better performances for pronounceable sequences (e.g., glurck) than for nonpronounceable ones (e.g., ckurgl). The authors concluded that orthographic redundancy rules were used by deaf subjects even though they could not use pronounceability. However, in their study, high orthographic redundancy could not be differentiated from the pronounceability of the written material.

This question was addressed again by Hanson (1986). The deaf participants were undergraduates or recent graduates of Gallaudet College; they were all prelingually and profoundly deaf. The deaf students were categorized into two groups on the basis of their speech intelligibility. A control group of hearing students was formed and was matched on the level of accuracy in the experimental task. The subjects first saw a 6-letter sequence for about 100 msec and then had to judge whether a probe letter belonged to the sequence or not. Hanson manipulated two factors: the bigram positional frequencies of the sequences (low versus high), and orthographic regularity (pronounceable versus unpronounceable sequences). She found that the deaf participants displayed the same sensitivity as the hearing subjects to the positional frequency information.
In the deaf group, sensitivity to sequence pronounceability varied with speech intelligibility, good speakers showing a greater pronounceability effect than poor speakers. This study thus indicated that deaf readers may make use of a mapping between written and spoken language. Yet the type of units deaf people process was not studied directly. In the light of this experimental background, our aim was to investigate one phonological reading unit in French deaf people: the syllable. There are strong reasons for considering the syllable to be a reading unit in the French language.

Syllable Is a Reading Unit in French

The syllable is a basic unit in the spoken chain. Linguists define the syllable as a fundamental structure (an elementary scheme, Jakobson, 1963) that brings together phonemes of the spoken chain (Dubois et al., 1991): the syllable is a period centered around a peak of sound energy realized during articulation (Lerot, 1993). There is considerable consensus that in French, as well as in English, one vowel (and only one) determines the peak of the syllable; the other phonemes constitute the margins of the syllable. Syllabication rules in spoken French are generally clearer than in English; this is also the case in written language4 (for French, see Dubois et al., 1991; Ducrot & Todorov, 1972; Jakobson, 1963; Lerot, 1993; and for English see Clements & Keyser, 1983; Fowler, Treiman, & Gross, 1993; Kreidler, 1989; Selkirk, 1984).

We focused our interest on the syllable because it serves as a reading unit for experienced as well as for beginning readers of French. As far as experienced readers are concerned, it has been shown that the first syllable of a word is detected faster by French listeners than a unit corresponding to the syllable plus or minus one letter (Mehler, Dommergues, Frauenfelder, & Segui, 1981; Segui, 1984). This phenomenon of speech perception has also been found for speech production: adults named words and pseudo-words faster when the targets were primed by a syllable than when they were primed by a syllable plus or minus one letter (Ferrand, Segui, & Grainger, 1996). Other authors have found that the pronunciation of the vowel "e" in nonwords is strongly influenced by a prime consisting of the phonological syllable, but less so by nonsyllabic units (Taft & Radeau, 1995). In English language studies, recent results from adults also suggest that the syllable may be a unit in naming, that is, in speech production, but only in the case of words with clear syllabic boundaries (Ferrand, Segui, & Humphreys, 1997).

The high salience of the syllable in spoken French leads one to assume that it should be a preferred reading unit for beginning French readers (Seymour, 1996). This assumption was confirmed in the study conducted by Colé and Magnan (1997). Using Ferrand et al.'s paradigm (1996), these authors observed that children's word naming responses were faster for words primed with a written segment corresponding to the target's first syllable than for words primed by the syllable plus or minus one letter.

Although the syllable is a reading unit in French, this may not be true in all languages. In English in particular, there is more evidence in favor of the importance of the rhyme for learner readers. It is well established that the rhyme is used by English learner readers (Bowey, 1996; Coltheart & Leahy, 1992; Goswami, 1988, 1991, 1993).
French children exhibit a lower sensitivity to rhyme than their English peers when reading pseudo-words aloud (Gombert, Bryant, & Warrick, 1997; Goswami, Gombert, & Fraca de Barrera, 1998). Indeed, the fact that the relevant reading units vary between languages constitutes a strong argument in favor of the interdependence of phonological units and reading units in development.

The number of syllables in a word is an item of phonological information that may be accessible to deaf people. Indeed, the syllable is the basic unit of articulation, and most of the deaf participants in experimental studies have received speech therapy. Thus, the syllable could be a language processing unit for deaf people. Certain experimental arguments lend support to this hypothesis. These experiments were performed with English-speaking deaf people. Campbell and Wright (1990) showed that deaf teenagers with intelligible speech were sensitive to word length within a picture memorization paradigm: recall of lists of trisyllabic words was significantly worse than that of lists of mono- and bisyllabic words. Sterne (1996) also showed that prelingually and profoundly deaf children (six were educated in a strictly oral fashion and eight were in a total communication setting) were able to make word length judgments for word pairs represented by pictures. He presented two pictures, each representing a word of five or six letters in length corresponding to either one or three syllables (e.g., a fridge and a banana; a swing and a piano). The children were asked to choose the picture that corresponded to the "longer word." The deaf children, like the hearing children (matched on real age), were able to identify the longer words on the basis of the pictures. These results showed that deaf children have activated phonological representations of words and that this phonological information consists of the number of syllables from which the word is formed.

These results, together with the status of the syllable in spoken language, and in particular in French, lead us to assume that deaf children could develop a sensitivity to the syllabic nature of spoken French. If this is the case, this sensitivity would have repercussions on the reading units used by deaf beginning readers, as is the case for French hearing children.

The Copy Paradigm

We were interested in a paradigm that was able to provide evidence of the reading units used by children and that would enable us to analyze the syllabic or nonsyllabic nature of those reading units. We chose a paradigm that (1) does not necessitate reading aloud; (2) permits the use of pseudo-words (as we have noted, pseudo-words are particularly powerful activators of assembling in reading); and (3) represents a common activity for deaf pupils. This paradigm is a copy paradigm involving words and pseudo-words. The principle of the copy paradigm is that a child has to copy a word presented in its written form; since the word is too long to be copied at a single glance, the child has to look at the item several times: he looks at the word, then writes down one or more letters, then looks at the word a second time to write down the following letters, and so on. The segmentations made by the children might reveal their representational units. Those representational units are likely to correspond to phonological units. The assumption is that when children look at the word for the first time, they read it and the phonological form of the word is activated.
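To make this principle concrete, the glance-by-glance record of a single copying trial can be coded into the measures analyzed later in this study (number of glances, copying duration, and the successive copied segments). The following is a minimal Python sketch written under our own simplifying assumptions; it is illustrative only and is not the coding scheme or software used by the experimenters:

def code_trial(events, end_time):
    """events: list of (glance_onset_in_seconds, letters_written_after_that_glance)."""
    glances = 0
    segments = []
    for onset, letters in events:
        if letters:                     # consecutive glances with no copying in between
            glances += 1                # are counted as a single glance
            segments.append(letters)
    duration = end_time - events[0][0]  # from the first glance to the end of copying
    return glances, duration, segments

# Hypothetical trial: the child copies "champignon" in two glances.
events = [(0.0, "champi"), (6.2, "gnon")]
print(code_trial(events, end_time=13.4))  # (2, 13.4, ['champi', 'gnon'])

In this hypothetical trial the first segment, "champi," ends on a syllable boundary (cham-pi-gnon), so the first segmentation would count as syllabic.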
The copy paradigm has already revealed the use of syllabic units among hearing French-speaking children. Studies conducted by Lambert and Espéret (1997) have examined the role of the syllable in a delayed copy task. In this task, subjects had to read a word and to write it down on a graphic tablet after a short delay (the graphic tablet made it possible to record the movements of the pen during writing). Their results suggested that the syllable is a processing unit for expert readers and also for third and fifth graders.

Rieben, Meyer, and Perregaux (1991) observed children in their classroom. A text (composed of several sentences) containing new vocabulary chosen by the children was written on a board and studied in the classroom. The children had to produce a new text formed using the same vocabulary as that written on the board. The way the children copied words during their composition was analyzed. The authors observed that after a few months of schooling, the children developed syllabic copying strategies. The use of syllabic strategies preceded the copying of the whole word at once. However, this method did not enable Rieben et al. to control word characteristics from a phonological or orthographic point of view (see also Rieben & Saada-Robert, 1997).

Studies by Humblot, Fayol, and Lonchamp (1994) implemented a more classical experimental situation. Children (grades 1 and 2) had to copy twenty bisyllabic words that varied as a function of their regularity and familiarity. The regular words could be correctly pronounced by applying grapho-phonological rules (the irregular words contained one letter that was not pronounced); familiarity was defined as the fact that the words were known to the children (it was assessed by five teachers of grades 1 and 2). The words were presented in such a way that the children had to turn their head and look sideways. The results showed that familiar and regular words were copied with fewer glances than less familiar words and irregular words. As in Rieben et al.'s study, children at the beginning of the first grade copied words in sequences of one or two letters that did not correspond to syllables. The syllable seemed to become a unit of information processing for children in the middle of first grade. The use of the syllabic copy strategy varied with word familiarity and regularity: the more regular and the more familiar the words, the sooner syllabic copying appeared. Humblot et al. concluded that the syllabic strategy indicated the use of phonological processes in copying.

Our Study

The studies mentioned above have shown that syllabic units are processed during copying performed by French hearing children. Humblot's paradigm could provide a way of assessing whether deaf children process words into phonological units during reading, as hearing children do. In order to test the phonological hypothesis, we controlled for the frequency effects of bigrams.

The participants in the study consisted of severely and profoundly deaf children and hearing children from different schools. The deaf children were affected by a prelingual deafness (detected before they were 2 years old). They wore hearing aids (no cochlear implant) and there was no HF system at school (i.e., amplification of the teacher's speech). Most of them had hearing parents. Their every-day communication at school took the form of sign communication.
At lessons, their linguistic environment varied slightly from one school to the other. What all the schools had in common was the fact that sign communication alternated with the French version of Cued Speech (Cornett, 1967). More precisely, most of the time the adults produced signs and spoke aloud at the same time; however, at other times the adults no longer produced signs, but used Cued Speech instead. The cued versions of words were given to children after they had understood the meaning of the words and sentences. Cued Speech was always used when administering the reading and spelling instruction. The letters were linked with phonemes that result from lip-reading together with the movements and position of the hand around the face (see Leybaert & Charlier, 1996, for a description of Cued Speech).

To control the linguistic characteristics of our participants as closely as possible, we applied pretests. Indeed, several variables affect the development of phonological skills in deaf children. These variables are of three types: auditory, visual, and articulatory. First, the impact of sensory impairment varies greatly, depending on the degree of hearing loss and the degree of improvement with acoustical or electrical prostheses. Phonological abilities are more developed in children with better residual hearing (Conrad, 1979). Second, the ability to perceive oral language visually also varies between deaf children. There are strong interindividual differences in speechreading abilities (Dodd & Murphy, 1992), which predict phonological skills and oral language development (Dodd & Hermelin, 1977). The ability to perceive speech visually also depends on whether children have been educated with phonetically augmented systems like Cued Speech (Alegria, Charlier, & Matthys, in press; Leybaert, Alegria, Hage, & Charlier, 1998; Nicholls, 1979; Nicholls & Ling, 1982; Périer, Charlier, Hage, & Alegria, 1988). Children exposed early and intensively to Cued Speech develop accurate phonological representations, leading to better rhyming skills (Charlier & Leybaert, in press; Leybaert & Charlier, 1996). Third, the intelligibility of speech also explains a part of the individual variations in phonological skills (Conrad, 1979; Leybaert & Alegria, 1995). However, the impact of speech production skills on the phonological processes used in reading should be less important than that of speech perception skills (Leybaert et al., 1998). All these factors are involved to a greater or lesser extent in variations of reading processes in deaf people. That is the reason why children's speech production and speech perception levels were controlled for by means of pretests.

In the present experiment, Humblot et al.'s copying paradigm (1994) was used. The targets varied in lexicality (words and pseudo-words) and length (monosyllabic and trisyllabic items). Video recordings of the children's copying operations enabled us to measure copying duration, number of glances, type of segmentation (i.e., glances), and errors.

Hypotheses

Real words should be copied with fewer glances than pseudo-words because the former are known and familiar to all subjects. Long items should be copied with more glances than shorter items. These length and lexicality effects might reveal the sensitivity of the copying task to lexical processing.

First, deaf children were expected to copy trisyllabic items syllable by syllable, as hearing children do. To check this point, we considered the first segments (parts of items) that were copied in one glance. The children's first segmentations should thus correspond to the spoken syllabic boundary in a certain number of cases (e.g., champignon "mushroom" will be copied as cham/pignon or champi/gnon). But in this case, the syllables were defined by both orthographic and phonological boundaries, because these two boundaries coincided: /ʃã - pi - ɲõ/.

In order to distinguish between the phonological and orthographic criteria, we created a supplementary condition with five pairs of pseudo-words. The pseudo-words had a different syllabic structure while being orthographically very similar to one another (e.g., rentala versus renalat). The first syllabic boundary came after the "n" in the case of rentala (ren/ta/la) and before the "n" in the case of renalat (re/na/lat). In both cases, those segmentations were "syllabic," whereas segmentations like ren/alat and re/ntala would be "nonsyllabic." A variation in the children's first segments on these items as a function of syllabic boundaries would constitute strong evidence for their use of phonological processes: hearing children should produce more syllabic than nonsyllabic segmentations. The important question was whether deaf children (or some of them) would exhibit the same tendency.

Finally, we also analyzed copying errors.
Of particular interest are orthographic errors that respect the phonology of items (e.g., téléfonne instead of téléphone, where both -onne and -one represent /ɔn/). We wanted to observe whether (and which) deaf children would make this kind of mistake.

Method

Participants

Experimental group. Twenty-one deaf children from six special education classes in two different areas (Burgundy and Champagne) took part in the study. Their every-day communication was signed (French Sign Language, FSL). At school, they were educated both in FSL and spoken language. Nineteen children had hearing parents, one had a hearing father and a deaf mother, and one had two deaf parents. The mean age of the group was 10 years 6 months (range 7y, 5m to 12y, 4m). They did not suffer from any psychological, sensory, or motor handicap not explained by their deafness (evaluated by psychologists at their schools). Their intellectual level was normal according to psychologists' evaluations made less than 1 year before our experiment (performance tests from the revised Wechsler intelligence scale, French version). Their vision was normal or corrected by glasses. Their hearing loss was severe or profound: more than 70 dB on the three-frequency (.5, 1, 2 kHz) pure-tone average audiometric threshold for the better ear without hearing aid. Their hearing loss was detected before they were 2 years old. With hearing aids, their hearing loss was 32 to 40 dB for 7 children, 40 to 50 dB for 7 children, and more than 50 dB for 7 children (cf. Table 1 for more details).

Control group matched on lexical level. A control group of hearing children consisted of 21 normal hearing children from grades 2 and 3 (CE1 and CE2 in France).
Table 1 Characteristics of hearing and deaf subjects, ages, and scores on pretests

Participant  Age (months)  Lexical decision: correct recognition (/20)  Lexical decision: correct responses (/40)a  Speech perception level (/38)  Auditive loss with hearing aid (dB)b  Speech intelligibility (/5)

Deaf participants
D1    144   6     20    29    37    4
D2    145   6     20    15    35    1.5
D3    131   6     22    20    68    2.5
D4    113   6     26    38    43    2.5
D5    101   6     30    16    43    1.5
D6    111   7     27    12    48    1.5
D7    116   8     28    6     60    1
D8    139   8     28    15    37    2
D9    122   10    19    28    35    2.5
D10   113   10    20    37    32    5
D11   103   11    20    36    38    4
D12   132   11    24    21    45    1.5
D13   89    11    31    34    42    4
D14   129   12    29    26    53    3
D15   139   13    25    22    88    1
D16   146   14    25    27    60    2.5
D17   121   15    29    22    42    1
D18   135   15    35    18    53    2
D19   147   16    34    32    48    2.5
D20   148   20    36    36    30    2.5
D21   131   20    39    35    52    2.5
Mean  126.4 11.0  27    25    47.2  2.4

Hearing participants
H1    86    5     13
H2    84    6     13
H3    95    7     13
H4    93    8     14
H5    90    8     14
H6    91    6     14
H7    90    7     14
H8    93    8     15
H9    93    8     16
H10   98    9     17
H11   85    10    18
H12   89    9     19
H13   95    11    20
H14   86    10    20
H15   90    11    22
H16   92    11    24
H17   87    13    24
H18   99    12    28
H19   99    19    37
H20   92    19    39
H21   96    20    40
Mean  91.6  10.3  20.7

a Correct recognition of words plus correct rejections of nonwords.
b Three-frequency (.5, 1, 2 kHz) pure-tone average audiometric thresholds for the better ear.

Their mean age was 7 years and 7 months (range 7y to 8y, 3m). They were all native French speakers. None of them had repeated a school year. Both classes came from the same school in Burgundy. The majority of the hearing children came from middle-class backgrounds in a rural area, as did the deaf children. They were matched with the deaf group for lexical accuracy assessed by a lexical decision task: all the children were given a list containing 20 words and 20 pseudo-words randomly mixed up. The words were chosen from the words that were recognized by the majority of children in the very first lexical decision task (see the experimental item selection below). The children had to process as many items as they could in a fixed time of 1 minute. The instructions explained that the list contained real words that they could recognize and pseudo-words that were invented and that they could not recognize. They had to write down a "1" in front of the words they knew and a "2" in front of the words they did not recognize. Four training items enabled us to ensure the children had understood the instructions correctly. The test was administered collectively. The score was the number of words correctly identified (so the correct recognition score was on a 20-point scale). The mean for the hearing group was 10.3 (SD = 4.3) and the mean for the deaf group was 11 (SD = 4.4). The difference between the two groups was not significant (t(40) < 1). Twenty-eight deaf children were originally tested, but nine of them were eliminated because they could not be matched with hearing children: their scores on the lexical decision task were lower than those of the hearing children in grade 2.

Material

To help us select words familiar to deaf participants, we administered a lexical decision task. Three hundred written items, 100 pronounceable pseudo-words and 200 words (monosyllabic and trisyllabic), were presented one by one. The participants were asked to sort the stimuli into two categories of known and unknown items: they had to put the known words in an appropriate envelope and throw away words that they could not recognize. To this end, they were informed that some of the words were real whereas others were invented.
The correct response rate for words was 40%. Only 4% of pseudo-words were not thrown away. This revealed that the subjects were not responding by chance. The experimental items were selected from among the words recognized by more than half of the subjects. The selected words were all regular for reading; that is, they can be read on the basis of regular grapho-phonological rules. They did not contain consonant clusters (strings of consonant letters corresponding to several phonemes, like "vr" or "st") in order to avoid problems in defining syllabic boundaries. However, all the items contained at least one plurigraph (a plurigraph is a string of two or three letters corresponding to only one phoneme; e.g., in French "ph" is always pronounced /f/). According to their teachers, the words were familiar to the hearing subjects. The stimuli consisted of (see Appendix for details):

1. Ten four-letter monosyllabic words (e.g., juin, "June").

2. Ten trisyllabic words. Their length varied between 7 and 10 letters (e.g., champignon, "mushroom").

3. Ten monosyllabic pseudo-words. These had the same phonological structure as the monosyllabic words and were constructed by a process of grapheme permutation (e.g., jain, derived from juin "June" and main "hand").

4. Ten trisyllabic pseudo-words. They were formed using syllables coming from different trisyllabic words. For instance, rapilon is created with the first syllable of the word ramasser "to gather up" and syllables from champignon "mushroom" and pantalon "trousers."

5. Five trisyllabic pseudo-words (named the nV items) were matched with five other items (named nC items). The nV and nC items had a different phonological structure whereas orthographically they were very similar. Their beginnings were identical: phon; ren; con; pan; chan. However, their syllabic boundary varied depending on the letter following "n." In the nC items, "n" was followed by a consonant, so "n" was integrated in the first phonological and orthographic syllable (e.g., phongarcho is pronounced /fõgarʃo/; the first syllable is /fõ/; the syllabic boundary comes after the letter "n"). In the nV items, "n" was followed by a vowel, so it became the first consonant of the second syllable (e.g., phonarcho is pronounced /fonarʃo/; the first syllable is /fo/; the syllabic boundary comes after the first vowel, i.e., after the V position).

The bigram frequencies were controlled on the basis of the French database BRULEX (Content, Radeau, & Mousty, 1988).5 The mean frequency of all the bigrams was 1,478 per million (SD = 995) for nC items and 1,520 (SD = 963) for nV items. The difference was not significant (t(4) = 1.68; p > .05). We also checked the frequency of the bigrams formed with the letter "n" followed by either a consonant in nC items (e.g., ng of phongarcho) or a vowel in nV items (e.g., na of phonarcho). The difference between nC frequency (514; SD = 567) and nV frequency (423; SD = 240) was not significant (t(4) < 1; ns). The frequencies of the bigrams corresponding to the vowel plus the letter "n" were of course strictly equal for nC and nV items (1,696; SD = 251), because they were the same in every pair of items (e.g., on in phongarcho and phonarcho). However, their mean frequency was higher than the mean frequency of the bigrams formed with "n" plus the following letter (t(4) = 6.1; p < .01 for nC; t(4) = 10.3; p < .001 for nV). In the same way, a bigram like on was more frequent than no or ng, but bigrams like no and ng did not have significantly different frequencies. This point will be important for the interpretation of certain results.
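To make the nC/nV manipulation concrete: the first phonological syllable of these pseudo-words can be derived from a single orthographic test, namely whether the letter after "n" is a vowel or a consonant. The following Python sketch is purely illustrative (a hypothetical helper, not part of the study's materials or analyses):

VOWELS = set("aeiouy")

def first_syllable(item):
    """Locate the first phonological syllable of an nC or nV pseudo-word."""
    n_pos = item.index("n")              # the pivotal "n" after the initial phon/ren/con/pan/chan
    if item[n_pos + 1] in VOWELS:        # nV item: "n" opens the second syllable,
        return item[:n_pos], "nV"        # so the boundary falls before "n" (pho/narcho)
    return item[:n_pos + 1], "nC"        # nC item: "n" closes the first syllable (phon/garcho)

for item in ("phonarcho", "phongarcho", "renalat", "rentala"):
    print(item, first_syllable(item))
# phonarcho ('pho', 'nV'); phongarcho ('phon', 'nC'); renalat ('re', 'nV'); rentala ('ren', 'nC')

In the copying data, a first glance ending exactly at this boundary was scored as a syllabic segmentation for that item.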
Procedure

Pretests. In order to judge their speech intelligibility, we asked the deaf children to name nine simple colors and ten digits presented on a page. Their productions were tape-recorded and judged by two independent speech therapists who had regularly worked with deaf people over a period of several years. They had to score the intelligibility of the children's oral productions on a 5-point scale: 1 corresponding to unintelligible speech and 5 corresponding to very intelligible speech. None of their scores differed by more than one point for any of the children. Thanks to this consistency between the scores, we decided that each child's global speech intelligibility score would be the mean of the two speech therapists' scores.

A speech perception test was adapted from Boon's clinical test (1995) for speech therapists. Pseudo-words, words, and sentences were presented in Cued Speech plus oral speech. All the items were repeated three times. The children had to choose the written form of the item from among alternatives that have a similar coding or a similar lip-read image: they had to choose 16 pseudo-words (from among 6 choices in each case), 18 words (4 choices), and 4 sentences (4 choices). The final score was the sum of the correct responses (maximum = 38). Thus, this test is not only a measure of Cued Speech perception but also a more general measure of deaf children's ability to perceive oral language and to establish a correspondence with its written form.

All the instructions were given in FSL, because Sign Language was the form of communication used most frequently by the participants. Only the items used in Boon's speech perception test were given in Cued Speech (but the instructions were also given in Sign Language).

Experimental task. The items were presented behind the children's backs. The children had to turn around to see them before copying them down. When the child was turning around to see the stimulus, the experimenter issued a light signal within camera shot (see the experimental device in Figure 1). This signal made it possible to measure the copying duration. The child's hand was filmed during copying.

Figure 1 Experimental device.

The items were printed in lower case letters (Times New Roman, 1.2 × 1.9 cm) on a white card (21 × 30 cm) presented at a distance of 2 meters. The list of 45 stimuli was divided into two parts (27 and 28 stimuli). Half of the participants were presented with the first part during the first session and the second part during the second session, and vice versa for the other half of the participants. The nC items were included in the first part, and the nV items were included in the second part. The nC and nV items were thus not presented in the same session (e.g., phonarcho and phongarcho were not presented in the same session). In each session, items were presented in the same random order to all participants. The children were told (in FSL for the deaf children): "I will show you one word here, look at it. On this page you have to copy the same word, exactly the same. When you have finished, take a new page here and carry on with the next word that I show you." The experimental list was preceded by two training items in each session that were not taken into account in the results.
Hearing children and deaf children were seen in three and four separate sessions, respectively: once collectively (twice for the deaf children) and twice individually (all). First, the deaf children were seen once, 1 month before the experiment, for the lexical decision task leading to the selection of the word items (30 minutes). One or two weeks before the experimental task, all the participants were given the fixed-time lexical decision task collectively in their classroom (a few minutes). In the case of the deaf children, this task was followed by the speech perception test (45 minutes). The children were then seen in two individual sessions for the experimental task (about 20 minutes each time). The tape recordings for the speech intelligibility evaluation were made at the beginning of the first individual session.

Results

Length and Lexicality Effects: The Sensitivity of the Task

The aim of these first analyses was to determine whether the deaf and hearing subjects made an equal number of segmentations (i.e., glances) back to a stimulus while copying it. A second objective was to ensure that our task would be sensitive to the difficulty of the material. Long items should obviously involve more segmentations than short items, and pseudo-words should involve more segmentations than words. Two measures were taken as dependent variables: number of glances and copy duration. A glance was counted each time the child looked at the item and copied a part of the item. Consecutive glances with no copying in between were counted as only one glance. The copying duration was measured from the beginning of the first glance (when the experimenter gave the light signal) to the point at which copying of the item terminated. The mean numbers of glances and mean copying durations are presented in Table 2.

Table 2 Mean numbers (N) of glances (and standard deviations) and mean copying times in seconds (T) (and standard deviations) as a function of the length and lexicality of items, for deaf and hearing participants

                        Deaf (n = 21)               Hearing (n = 21)            Combined (n = 42)
                        N             T             N             T             N             T
Monosyllabics
  W (10 items)          1.08 (0.13)   5.36 (1.18)   1.10 (0.12)   6.90 (1.98)   1.09 (0.12)   6.13 (1.79)
  PW (10 items)         1.18 (0.21)   5.72 (1.38)   1.15 (0.16)   7.34 (1.96)   1.17 (0.18)   6.53 (1.87)
Trisyllabics
  W (10 items)          2.02 (0.89)   12.86 (3.38)  1.72 (0.55)   14.43 (3.50)  1.87 (0.74)   13.65 (3.49)
  PW (10 items)         2.64 (0.86)   13.87 (2.92)  2.24 (0.68)   15.68 (4.41)  2.44 (0.79)   14.77 (3.80)
Means (/10)             1.73 (0.89)   9.45 (4.60)   1.55 (0.64)   11.08 (5.06)  1.64 (0.78)   10.27 (4.89)

The data were analyzed using an ANOVA with participants as a random factor (F1) and then with stimuli as a random factor (F2). These double analyses were made for all ANOVAs in this study. Indeed, this double approach is very important in studying deaf populations using linguistic stimuli.6 The data were analyzed using a 2 × 2 × 2 design (Hearing status [Hearing vs. Deaf] × Length [monosyllabic vs. trisyllabic] × Lexicality [words vs. pseudo-words]) with repeated measures on the last two factors. Participants made more glances and took more time for trisyllabic than for monosyllabic items (glances: F1(1, 40) = 106.86, p < .001; F2(1, 36) = 111.4, p < .001; duration: F1(1, 40) = 529.68, p < .001; F2(1, 18) = 319.94, p < .001).
They also made more glances and took more time for pseudo-words than for words (glances: F1(1, 40) = 103.33, p < .001; F2(1, 36) = 11.3, p < .01; duration: F1(1, 40) = 21.16, p < .001; F2(1, 18) = 1.97, ns). The effect of lexicality was much more important for trisyllabics than for monosyllabics (interaction measured on glances: F1(1, 40) = 52.86, p < .001; F2(1, 36) = 6.85, p < .05; duration: F1(1, 40) = 6.50, p < .02; F2(1, 18) < 1, ns). Thus, copying seemed to be facilitated when items were known and when they were short. Lexicality did not affect monosyllabics because many of these items were copied in only one glance.

There was a significant effect of Hearing status on the glance measure only when items were taken as the random factor (F1(1, 40) = 1.92, ns; F2(1, 36) = 55.37, p < .001). It is likely that the tendency of hearing subjects to make fewer glances than the experimental group did not reach a significant level because of the heterogeneity of the deaf population (SD = 0.89 for deaf participants and 0.64 for hearing participants). That is why we consider this effect to be only a tendency. This Hearing status effect did not interact significantly with any other factor. Deaf children copied the items faster than their hearing peers (F1(1, 40) = 4.61, p = .05; F2(1, 18) = 10.85, p < .004). This effect of Hearing status did not interact with the effects of the material.

Thus, effects of length and of lexicality were identified both for the number of glances and for the copy duration, showing that the task was sensitive to the difficulty of the material: words were copied faster and with fewer glances than pseudo-words, as were short items compared with long items. We also wanted to check the matching of the experimental and control groups. The effects due to the material were similar in both groups (no interaction) despite faster copying by the deaf participants. This latter phenomenon is easily explained by the fact that the deaf children were older than the hearing children. The similarity of the effects of the material in both groups validates our choice of the control group and permits the following qualitative analysis concerning the nature of the units copied in one glance.

Figure 2 Mean number of syllabic and nonsyllabic segmentations made on the 20 trisyllabic items by deaf and hearing participants (and chance level).

Syllabic Segments

Hearing children were expected to make more syllabic segmentations than nonsyllabic ones. This means that they should stop their copying more often between two syllables (on the syllabic boundary) than within a syllable. The question was whether deaf children would use syllables as units for copying. Only the first segments to be copied of trisyllabic items (words and pseudo-words) were taken into account because of the low numbers of segmentations for monosyllabic items (see Table 2). These first segments were classified into syllabic or nonsyllabic segments. The syllabic segments corresponded to a glance at the end of the first syllable (e.g., cham/pignon) or at the end of the second syllable (e.g., champi/gnon). Nonsyllabic segments corresponded to a glance before the end of the first syllable (e.g., cha/mpignon), between the first and second syllables (e.g., champ/ignon), or before the end of the word (e.g., champigno/n). To simplify presentation, the results obtained for words and pseudo-words have been processed together. The sum of the syllabic and nonsyllabic segments was computed for each participant.
The scores were converted so as to make it possible to compare participants' scores regardless of the number of nonsegmented items: each individual score was changed to a score out of 20. For instance, if a subject had not segmented 8 items (of the 20 trisyllabics, this subject would have copied 8 in only one glance), only 12 items were segmented. On the basis of this example, let us imagine that this subject made four syllabic segmentations. His score of 4/12 became 6.66/20. This conversion was necessary to eliminate the effect of items copied in only one glance, that is, without any segmentation.

The mean numbers of syllabic and nonsyllabic segmentations on trisyllabic items as a function of their type are presented in Figure 2 for deaf and hearing participants. For each type of segmentation, we calculated the mean probability of this segmentation occurring by chance (chance level shown in Figure 2). All segmentations occurring between two adjacent letters were considered to be equally probable whatever the position of the letters in the word. Subjects could always perform syllabic segmentations at two places for each item, and each item could be segmented at seven or more places in total. Thus, the probability of a subject segmenting an item syllabically is exactly 40 chances out of 146 (146 is the total number of possible segmentations in the 20 items). The probability of subjects performing a nonsyllabic segmentation was higher (106 chances out of 146). In Figure 2, the chance level is represented by a score out of 20 points (40/146 becomes 5.48; 106/146 becomes 14.56). A comparison test between the scores of participants and the scores expected by chance (two-tailed Student's t test) was computed.

Hearing participants produced significantly more syllabic segments than chance level (t(19) = 6.73, p < .001) and fewer nonsyllabic segments than chance level (t(19) = −6.6, p < .001). (One hearing subject copied all the items without segmentation; his scores are not taken into account, so n = 20 and df = 19.) Deaf participants also produced significantly more syllabic segments than chance level (t(20) = 6.89, p < .001) and fewer nonsyllabic segments than chance level (t(20) = −6.41, p < .001). This indicates that the first segments copied by hearing and deaf participants are more often syllabic than nonsyllabic when compared to chance level.

So far, however, the syllables in our material have corresponded to both phonological and orthographic units. Indeed, in our material the bigram frequency was higher inside syllables (mean frequency = 2,065) than at syllable boundaries (mean frequency = 488; Student's t test for dependent samples, t(9) = 4.34, p < .01). The purpose of the next analysis was to test more directly the case in which the syllabic boundary is determined by phonology.
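For reference, the conversion to the 20-point scale and the chance-level computation described above amount to the following short Python sketch (illustrative only; the input counts are the ones given in the text, not new data):

def rescale(syllabic_count, items_segmented, scale=20):
    """Express a participant's syllabic-segmentation count on a 20-point scale,
    ignoring items that were copied in a single glance."""
    return syllabic_count / items_segmented * scale

print(round(rescale(4, 12), 2))       # 6.67, the worked example in the text (given there as 6.66)

# Chance level: 2 syllabic break points per trisyllabic item (x 20 items), out of
# 146 possible break points overall, expressed on the same 20-point scale.
syllabic_chance = 40 / 146 * 20       # about 5.5
nonsyllabic_chance = 106 / 146 * 20   # about 14.5
print(round(syllabic_chance, 2), round(nonsyllabic_chance, 2))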
When Syllabic Boundaries Are Defined by Phonology: nC Items and nV Items

This analysis concerns the comparison between the segmentations made on the trisyllabic pseudo-words nC (e.g., rentala) and nV (e.g., renalat). Hearing subjects should make more segmentations that respect syllabic boundaries, that is, between two syllables, than nonsyllabic segmentations. Thus, they are expected to make (1) more segmentations before "n" for nV than for nC items (e.g., a segmentation before "n" of renalat (nV item) is syllabic; a segmentation before "n" of rentala (nC item) is nonsyllabic) and (2) more segmentations after "n" for nC items than for nV items (e.g., a segmentation after "n" of renalat (nV item) is nonsyllabic; a segmentation after "n" of rentala (nC item) is syllabic).

Table 3 presents the mean number of items segmented before the "n," the mean number of segmentations after the letter "n," and the mean number of items not segmented at those positions (items that are not segmented at all and items that are segmented before the letter preceding "n" or after the letter following "n"). When subjects performed segmentations at V and at V + 1 for the same item (e.g., re/n/tala), the two segmentations were taken into account, but such occurrences were rare.

Table 3 Mean number (and standard deviation) of items segmented before and after "n" and not segmented in either condition as a function of the type of item (nC or nV) for hearing and deaf participants

                                          nC items (5)      nV items (5)
Before "n"                                Re/ntala          Re/nalat
  Hearing (n = 21)                        0.09 (0.3)        0.86a (0.79)
  Deaf (n = 21)                           0.28 (0.56)       0.71a (0.72)
After "n"                                 Ren/tala          Ren/alat
  Hearing                                 2.43a (1.57)      1.33 (1.15)
  Deaf                                    2.38a (1.56)      2.24 (1.22)
Not segmented or segmented elsewhere
  Hearing                                 2.48 (1.5)        2.81 (1.4)
  Deaf                                    2.33 (1.56)       2.05 (1.5)

a Syllabic segmentations.

An ANOVA was carried out on the mean number of nonsegmented items and on the mean numbers of segmentations occurring before "n" and after "n." For each measure, data were analyzed with a 2 × 2 design (Hearing status [Hearing vs. Deaf] × Items [nC vs. nV]) with repeated measures on the last factor. The mean number of nonsegmented items did not vary significantly as a function of either Hearing status (F1(1, 40) = 1.14, ns; F2(1, 8) = 3.07, ns) or type of item (F1(1, 40) < 1, ns; F2(1, 8) < 1, ns), and there was no significant interaction (F1(1, 40) = 2.93, ns; F2(1, 8) = 1.44, ns). As there were no quantitative differences between the experimental and control groups, this enabled us to analyze the qualitative differences in the segmentations occurring before and after the letter "n."

Segmentations before "n." As predicted, the mean number of segmentations was higher for nV items than for nC items (F1(1, 40) = 25.93, p < .001; F2(1, 8) = 5.78, p < .05). Hearing status had no significant effect (F1 < 1; F2 < 1). The interaction between Hearing status and Items did not reach a significant level (F1(1, 40) = 2.03, ns; F2(1, 8) = 4.9, ns). This suggests that subjects performed more syllabic segmentations for nV items ("re" is the first syllable of "renalat") than nonsyllabic segmentations for nC items ("re" is not the first syllable of "rentala"). This dependent variable seems to indicate that all the subjects performed segmentations that corresponded to the syllable unit. However, segmentations before "n" were infrequent for nV items and were close to zero for nC items in both groups.
Segmentations after "n." As predicted, the number of segmentations after the "n" was higher for nC items than for nV items (F1(1, 40) = 13.88, p < .001; F2(1, 8) = 6.23, p < .05). There was no significant effect of Hearing status (F1(1, 40) = 1.17, ns; F2(1, 8) = 2.03, ns). The interaction between Hearing status and Items was significant, but only in the F1 analysis (F1(1, 40) = 8.21, p < .01; F2(1, 8) = 2.51, ns). Hearing subjects performed more segmentations for nC than for nV items; that is, they made more syllabic than nonsyllabic segmentations. In contrast, deaf subjects produced nearly the same number of segmentations for nC and nV items (the difference was not significant in a Newman-Keuls a posteriori test).

Analysis of Errors

The video recording of the copy enabled us to count all misspellings as errors, even those immediately corrected by the subject. Several types of error were analyzed. We categorized the errors within a hierarchy. First, "legal errors" were considered: the phonology of the target is not disturbed, for example, "téléphonne" instead of téléphone (/telefɔn/). Next, "lexical confusions" were counted: the subject wrote a word instead of the item presented, for example, "mais" ("but") instead of main ("hand"). Transposition errors were considered next. These consisted of errors in the order of the letters, for example, "ramssare" instead of ramasser ("to gather up"). Omissions and additions of one letter were then counted, for example, "aniaux" instead of "animaux" ("animals"). Finally, we considered confusions of letters, that is, cases in which the subject substituted one letter for another, for example, "renatat" instead of renalat (pseudo-word).

Because the errors were rare, the numbers of errors are presented for deaf and hearing subjects in Table 4, collapsed over words and pseudo-words, monosyllabic and trisyllabic items.

Table 4 Numbers and percentages of errors made by deaf and hearing participants for all items

                   Legal errors   Lexical confusions   Transpositions   Addition and omission   Letter confusion   All errors
Deaf (n = 21)      5 (8.47%)      3 (5.08%)            11 (18.64%)      21 (35.60%)             19 (32.20%)        59 (100%)
Hearing (n = 21)   33 (43.42%)    1 (1.32%)            7 (9.21%)        14 (18.42%)             21 (27.63%)        76 (100%)

t tests for independent samples (Hearing, Deaf) were performed on each type of error (except for the rare lexical errors). The comparison between hearing and deaf children was significant for legal errors (t(40) = 3.44, p < .001) but not for transposition errors, addition and omission errors, or confusion of letters errors (all ps > .10).
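The hierarchy can be made explicit as a small decision procedure. The sketch below is illustrative only (hypothetical Python helpers; in the study the phonological judgment was made by the experimenters, not computed):

def classify_error(produced, target, same_phonology, lexicon):
    """Return the first category of the hierarchy that applies to a misspelling."""
    if same_phonology:                          # "téléphonne" for téléphone: legal error
        return "legal"
    if produced in lexicon:                     # "mais" written for main: lexical confusion
        return "lexical confusion"
    if sorted(produced) == sorted(target):      # same letters, wrong order:
        return "transposition"                  # "ramssare" for ramasser
    if abs(len(produced) - len(target)) == 1:   # one letter added or omitted:
        return "addition/omission"              # "aniaux" for animaux
    return "letter confusion"                   # "renatat" for renalat

lexicon = {"mais", "main", "juin"}              # toy lexicon for the example
print(classify_error("mais", "main", same_phonology=False, lexicon=lexicon))
print(classify_error("renatat", "renalat", same_phonology=False, lexicon=lexicon))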
Interindividual Differences

For deaf children, the mean score in the speech reception test was 25 points on the 38-point scale; on the whole, scores were higher than the mean score (71% of subjects had a score higher than the arithmetic mean of 19). Their scores corresponded to a rather high level of speech reception (lipreading and Cued Speech). However, most of the deaf children achieved only low scores for speech intelligibility: only five subjects achieved a score higher than 3, corresponding to intelligible speech; for the other subjects, many words were unintelligible (see Table 1 for details).

The correlations between lexical level, chronological age, speech reception level, hearing loss, and speech intelligibility for the 21 deaf children were computed. Speech intelligibility tended to be negatively correlated with hearing loss, but this correlation did not reach the conventional statistical level (r = .42, p = .06). Only the correlation between speech reception score and speech intelligibility score reached a significant level: intelligibility increased with the speech reception score (r = .72, p < .001). The correlation between chronological age and lexical level was weak and nonsignificant for deaf participants (r = .33, ns) and did not reach the level of significance for hearing children (r = .43, ns). No correlation was found between speech production and reception scores and the experimental scores. Thus, no more advanced statistical analysis could be performed.

Discussion

This study aimed at investigating whether educated deaf French children use the syllable as a unit during the copying of written material. Severely and profoundly deaf children and younger hearing children matched for lexical level were filmed while copying monosyllabic and trisyllabic words and pseudo-words. The copying duration, the number of glances, the type of the first segment copied, and the type of error were recorded.

The number of glances and the copy duration were expected to vary depending on the nature of the written material. Our results confirmed these hypotheses. The effects of lexicality and of length were highly significant. Copying duration and the number of segmentations were higher for long items than for short items and higher for pseudo-words than for real words. This last point confirms the Humblot et al. (1994) data. In our study, these effects were almost identical for hearing and deaf children. This point enabled us to compare both groups in a more qualitative way.

Hearing children were expected to perform initial segmentations that respect syllabic boundaries. This prediction was confirmed. This finding supports results obtained with French hearing children (Humblot et al., 1994; Rieben et al., 1989). The main question was whether deaf children also processed syllabic segments. Our results show that deaf children do indeed use syllabic units when they copy words and pseudo-words. First segmentations occurring at the syllabic boundary were significantly above chance level. The deaf children's sensitivity to the syllabic nature of the written items led us to assume that they could have converted the written material into a phonological form corresponding to the syllable, as the hearing children did. Indeed, syllables are basic units of articulation. Thus, it is possible that deaf children might be sensitive to the syllabic features of environmental oral language (Campbell & Wright, 1990; Sterne, 1996). This syllabic sensitivity could lead to the processing of corresponding reading units during the task.

However, syllable boundaries are often flanked by letter patterns with relatively low transition frequencies (Seidenberg, 1987). This was the case in our material, where bigram frequencies were higher inside syllables than at syllabic boundaries. Therefore, the syllabic segmentations performed by hearing and deaf children could be explained to some extent by the use of orthographic information.

In one particular case (nC and nV items), orthographic and phonological factors were not confounded. These items were orthographically very similar, for example, rentala (nC item) and renalat (nV item). An effect of the type of item on the first segmentations around the letter "n" was expected if subjects segment on the basis of the spoken syllabic structure. Segmentations before the letter "n" (segmentations occurring after "re" in our example) were more numerous for nV items (i.e., re/nalat, syllabic segmentations) than for nC items (i.e., re/ntala, nonsyllabic). This was observed in deaf as well as in hearing participants. However, this result is not particularly convincing if considered in isolation.
Indeed, segmentations of nV items were not very numerous in absolute terms. Moreover, nV items were more often segmented after "n" (ren/alat, nonsyllabic) than before "n" (re/nalat, syllabic). A posteriori, the low number of segmentations for nC items could be explained by a legality effect: when subjects segment nC items after the vowel, the following segment is illegal (i.e., impossible) both orthographically and phonologically at the beginning of a syllable (e.g., -ntala in the nC item rentala). In other words, the fact that there are more segmentations before "n" for nV items (re/nalat) than for nC items (re/ntala) does not necessarily mean that children use the syllabic unit of spoken language.

If we turn to the segmentations performed after "n" (ren in our example), hearing subjects produced more of these segmentations for nC items (i.e., ren/tala, syllabic segmentations) than for nV items (i.e., ren/alat, nonsyllabic). This suggests that they were able to access the phonologically defined syllabic structure. Indeed, no orthographic effect can explain this result: when subjects performed a segmentation after "n," the following segments (for example, ta for ren/tala in nC items and al for ren/alat in nV items) were always legal at the beginning of a syllable, and their frequency was the same for nC items and for nV items. Considering the results obtained with this measure, the hearing children were able to access the phonology of the items because they segmented the bigrams en, an, and on differently as a function of their phonological correspondences, that is, as a function of the phonological boundaries between syllables. These results are compatible with those of many other studies. Since the syllable is a basic unit of articulation, it is used as a processing unit in several paradigms: short-term memorization (see Baddeley, 1990, for a review of studies linking the number of syllables, articulation characteristics, and short-term memory), reading aloud (Colé & Magnan, 1997), and copying paradigms (see Humblot et al., 1994; Lambert & Espéret, 1997; Rieben et al., 1991). The error analyses for hearing children are compatible with these findings: legal errors constituted the most frequent type of error made by hearing children. This observation provides support for the hypothesis of an assembling strategy in reading and copying for normally hearing children.

In contrast, the deaf subjects' results did not reveal any such phonological strategy. Indeed, the number of segmentations after "n" was nearly the same for nC and nV items. One interpretation is that the deaf children were reluctant to segment letter strings such as "en," "an," or "on." Instead, they acted as if they were trying to "preserve" these units in contexts where such letter strings take the form of digraphs (ren/tala), as well as in contexts in which they do not (ren/alat). In other words, the deaf children might not separate digrams that potentially correspond to one phoneme. This tendency might result from an explicit knowledge of digraphs. Indeed, when learning to read and to spell, the children in our population are taught that "en," "an," and "on" are graphemic units that correspond to only one "sound," each expressed by a precise representation in Cued Speech (/ã/, /ã/, and /õ/).
This tendency not to segment potential digrams might also result from visuo-orthographic phenomena: the sequence V + "n" is a more frequent digram than the sequences "nC" and "nV" (for details, see the description of the material). Since the deaf children in our sample had been exposed to written language for several years, the learning of orthographic regularities might have occurred, at least in an implicit manner. This interpretation is compatible with Gibson et al.'s (1970) and Hanson's (1986) results, which show that deaf readers are sensitive to orthographic redundancy. The analysis of the errors provided no evidence in favor of phonological processes in reading by deaf children. On the contrary, the deaf children's errors violated the phonology of the items almost systematically.

The main question is why the deaf participants provided no evidence of converting the written items into phonological units. In the speech perception test, the deaf participants had to discriminate between items sharing the same labial image and to perform a phonological conversion of speech into spelling. Why did they not do the opposite and convert the written material into its phonological form in the copy task? Two hypotheses, which are not mutually exclusive, can be proposed to explain our results: first, the nC and nV items might have been too difficult to decode; second, the results may be due to a lack of automatization of the deaf children's assembling abilities.

The first hypothesis is that the deaf children were not able to perform the precise phonological processing necessary to be sensitive to the difference between nC and nV items. It is quite possible that deaf children are not sensitive to subtleties of grapho-phonological decoding, such as the fact that a letter pair can correspond to one phoneme in some cases and to two phonemes in others. The deaf participants may have benefited from the Cued Speech technique, which possibly improved their phonological representations, as suggested by the results of the speech reception test. However, their phonological representations did not reach the same level of precision as those of hearing children or of deaf children who have been intensively exposed to Cued Speech from an early age (for a review, see LaSasso & Metzger, 1998; Leybaert et al., 1998).

According to the second hypothesis, deaf children do not perform a grapho-phonological conversion because the cognitive cost of this process is too high. Seen from this viewpoint, the deaf children were able to perform grapho-phonological conversion, but it was not automatic for them and required a cognitive effort. Orthographic units were more easily and more rapidly processed by the deaf children in our sample than the corresponding phonological units. In contrast, the hearing children probably activated the phonological form of the items automatically, or more rapidly, in order to maintain them in memory, and this phonological storage led to their use of phonological copying units. These results can be discussed in the light of Leybaert et al.'s (1983) study, whose results revealed an automatic activation of phonological representations of color nouns by deaf children in a Stroop experiment. Deaf children are probably able to process phonological representations of short and very familiar items automatically; however, they encounter more difficulty when the items are long and unknown.
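The visuo-orthographic account discussed earlier in this section rests on positional bigram frequencies of the kind catalogued by BRULEX (see note 5). The snippet below is a minimal illustration only, not the procedure actually used to build the materials: it counts bigrams occurring inside syllables and bigrams straddling syllable boundaries in a tiny, hand-syllabified word list, where the list, the syllabifications, and hence the counts are invented for demonstration.

```python
# Illustrative sketch only: counting within-syllable vs. boundary bigrams
# in a tiny, hand-syllabified word list (invented for demonstration).
from collections import Counter

syllabified = ["pan.ta.lon", "cho.co.lat", "a.ni.maux", "ra.mas.ser", "con.fi.ture"]

within, across = Counter(), Counter()
for word in syllabified:
    syllables = word.split(".")
    for syl in syllables:
        # bigrams fully contained in one syllable
        within.update(syl[i:i + 2] for i in range(len(syl) - 1))
    for left, right in zip(syllables, syllables[1:]):
        # bigram straddling a syllable boundary
        across[left[-1] + right[0]] += 1

print("within-syllable bigrams:", within.most_common(5))
print("boundary bigrams:", across.most_common(5))
```

With a full lexicon and real syllabifications, counts of this kind would show whether a string such as "an" occurs more often as a within-syllable digraph than split across a syllable boundary.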
In both cases (whether decoding was too difficult or the conversion was not automatic), we conclude that our results fail to indicate that the coding used by the deaf children to read and remember the experimental material was of a phonological nature, at least for the nC and nV items. Nevertheless, the deaf children's results provided evidence in favor of reading units that respect orthographic redundancy.

Incidental observations of the deaf participants' behavior during the task enabled us to explore this possibility more thoroughly. Whereas a great number of the hearing subjects subvocalized during the experimental tasks, many of the deaf participants made fingerspelling movements (some deaf children occasionally produced the sign of the word in FSL, but only very few vocalized, and none of them used Cued Speech). This observation raises certain questions concerning fingerspelling: it is possible that children use fingerspelling when they have to store written material before copying it. Indeed, several authors have found that fingerspelling improves the memorization of word lists (Hirsh-Pasek, 1987) or letter lists (Hanson et al., 1984; Locke & Locke, 1971) in native deaf signers. In this case, there could be a link between the units the child processes while making fingerspelling movements and the units used in copying. Indeed, deaf people fingerspell as if they were using units, with sublexical clusters of letters emerging (Hanson, 1982; Hirsh-Pasek, 1987). At present, we do not know the nature of these fingerspelling units, either in English or in French. However, our observations raise new questions. Do fingerspelling units correspond to orthographic redundancy, to morphemes, or to the phonological units involved in articulation? Our observations would tend to support the hypothesis that fingerspelling units correspond to orthographic redundancy, but further research will be needed to resolve this question.

Appendix
Experimental Items

Monosyllabic items
Words: beau /bo/ (beautiful), juin /ʒyɛ̃/ (June), lion /ljõ/ (lion), main /mɛ̃/ (hand), peau /po/ (skin), ciel /sjɛl/ (sky), cour /kur/ (yard), jour /ʒur/ (day), neuf /nœf/ (new), voir /vwar/ (to see)
Pseudo-words: cuin /kyɛ̃/, jain /ʒɛ̃/, jeau /ʒo/, neau /no/, vion /vjõ/, bour /bur/, coir /kwar/, leuf /lœf/, peuf /pœf/, viel /vjɛl/

Trisyllabic items
Words: animaux /animo/ (animals), éléphant /elefã/ (elephant), champignon /ʃãpiɲõ/ (mushroom), chocolat /ʃɔkɔla/ (chocolate), pantalon /pãtalõ/ (trousers), papillon /papijõ/ (butterfly), ramasser /ramase/ (to gather up), regarder /rəgarde/ (to look at), téléphone /telefɔn/ (telephone), confiture /kõfityr/ (jam)
Pseudo-words: apiphant /apifã/, énillon /enijõ/, pafimaux /pafimo/, rapilon /rapilõ/, técosser /tekose/
nC items: conléder /kõlede/, rentala /rãtala/, phongarcho /fõgarʃo/, panlégnone /pãleɲɔn/, chantature /ʃãtatyr/
nV items: conélder /konelde/, renalat /rənala/, phonarcho /fonarʃo/, panégnone /paneɲɔn/, chanature /ʃanatyr/
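To make the segmentation logic applied to the nC and nV items concrete, here is a small, hypothetical Python sketch; the helper function and the orthographic syllabifications are ours, inferred from the transcriptions above, and this is not the study's scoring procedure. It labels a first copied segment as syllabic or nonsyllabic according to whether the segment ends exactly at a syllable boundary.

```python
# Hypothetical illustration: classify a first copied segment as syllabic or
# nonsyllabic, given an assumed orthographic syllabification of the item.
SYLLABIFICATIONS = {
    "rentala": ["ren", "ta", "la"],   # nC item: "en" spells the nasal vowel
    "renalat": ["re", "na", "lat"],   # nV item: the "n" starts the second syllable
}

def classify_first_segment(item: str, first_segment: str) -> str:
    """Return 'syllabic' if the segment ends at a syllable boundary of the item."""
    boundaries, position = set(), 0
    for syllable in SYLLABIFICATIONS[item]:
        position += len(syllable)
        boundaries.add(position)
    return "syllabic" if len(first_segment) in boundaries else "nonsyllabic"

print(classify_first_segment("rentala", "ren"))  # syllabic    (ren/tala)
print(classify_first_segment("renalat", "ren"))  # nonsyllabic (ren/alat)
print(classify_first_segment("renalat", "re"))   # syllabic    (re/nalat)
```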
Notes

1. Grapho-phonemic assembling occurs when one phoneme is associated with one grapheme (e.g., the letter "b" is associated with the phoneme /b/). Grapho-phonological assembling is a more general term; it refers to the association of a phonological unit (which may be a phoneme but may also be another unit, such as a syllable, a rime, or a consonant cluster) with a letter string.
2. When the reading or the spelling of a word can be derived by grapho-phonological assembling rules, the word is regular; if no grapho-phonological rule can predict its pronunciation or spelling, it is irregular. If subjects read or spell regular words more easily than irregular ones, they are thought to use grapho-phonological assembling in reading or spelling.
3. In the Stroop paradigm, participants have to name the colors in which words are written. When these words are themselves color names, the participants make more errors in naming the color. For instance, if the word "red" is written in yellow ink, the participant's response should be "yellow." This interference phenomenon is explained by an automatic activation of the phonological form of the written word, which interferes with the name of the ink color that has to be pronounced.
4. However, even in French, there is no consensus on syllable boundaries when consonant clusters occur inside words. Consonant clusters were therefore avoided in our material.
5. BRULEX (Content, Mousty, & Radeau, 1990) is a computerized lexical database for the French language, created in 1986 for experimental psycholinguistics. It provides orthographic, phonological, grammatical, and frequency information for approximately 36,000 French words; the information is collected and published by the Centre de Recherche pour un Trésor de la Langue Française. Word frequency is estimated by the number of occurrences of a character string (per million) in a sample of texts from the second half of the twentieth century. Bigram frequency is the number of occurrences of two adjacent letters in French words, computed as a function of their position in words (beginning, middle, end).
6. Whatever precautions we took in the choice of the experimental sample of participants, the deaf population's cognitive processes are necessarily more heterogeneous than those of the hearing population; variability across subjects is therefore important. We had to check whether effects that are significant in the by-items analysis (F2, with subjects treated as a fixed factor) are also significant in the by-subjects analysis (F1, with subjects treated as a random factor): if F2 is significant but F1 is not, we conclude that the effect exists but cannot be generalized to the whole population of deaf children who share the characteristics of our sample. Conversely, because of the difficulty of working with linguistic stimuli, we also need to check whether effects are significant when items are the random factor (F2) and not only when they are treated as fixed (F1). If both F1 and F2 are significant, the result can be generalized to the population and to items that share the characteristics of those in our experiment.

Bibliography

Alegria, J., Charlier, B., & Matthys, S. (in press). The role of lipreading and Cued Speech in the processing of phonological information in French-educated deaf children. The European Journal of Cognitive Psychology.
Baddeley, A. (1990). Human memory: Theory and practice. Hillsdale, NJ: Lawrence Erlbaum.
Beggs, W. D., Breslaw, P. I., & Wilkinson, P. I. (1982). Eye movements and reading achievement in deaf children. In R. Groner & P. Fraisse (Eds.), Cognition and eye movements (pp. 179–193). Amsterdam: North Holland.
Boon, L. (1995). Perception visuelle de la parole par l'enfant sourd: Tentative d'élaboration d'un test de langage parlé complété [Visual perception of speech by the deaf child: An attempt at developing a Cued Speech test]. Unpublished speech therapy thesis (mémoire de logopédie). Louvain: Institut Marie-Haps.
Bowey, J. A. (1996). Phonological recoding of nonword orthographic rime primes. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 117–131.
Burden, V., & Campbell, R. (1994). The development of word-coding skills in the born deaf: An experimental study of deaf school-leavers. British Journal of Developmental Psychology, 12, 331–349.
Campbell, R. (1994). Spelling in prelingual deafness. In G. D. Brown & N. C. Ellis (Eds.), Handbook of spelling (pp. 249–259). New York: Wiley and Sons.
Campbell, R., & Wright, H. (1990). Deafness and immediate memory for pictures: Dissociations between "inner speech" and the "inner ear"? Journal of Experimental Child Psychology, 50, 259–286.
Charlier, B., & Leybaert, J. (in press). The rhyming skills of deaf children educated with phonetically augmented speechreading. Quarterly Journal of Experimental Psychology.
Chen, K. (1976). Acoustic image in visual detection for deaf and hearing college students. Journal of General Psychology, 94, 243–246.
Clements, G. N., & Keyser, S. J. (1983). CV phonology: A generative theory of the syllable. Cambridge: MIT Press.
Colé, P., & Magnan, A. (1997). Phonological recoding in first and second grade readers: Effects of word length and familiarity. Eighth European Conference on Developmental Psychology, September 3–6, Rennes.
Coltheart, V., & Leahy, J. (1992). Children's and adults' reading of nonwords: Effects of regularity and consistency. Journal of Experimental Psychology: Learning, Memory and Cognition, 18, 718–729.
Conrad, R. (1979). The deaf school child: Language and cognitive function. London: Harper & Row.
Content, A., Mousty, P., & Radeau, M. (1990). BRULEX, une base de données lexicales informatisée pour le français écrit et parlé [BRULEX, a computerized lexical database for written and spoken French]. L'Année Psychologique, 90, 551–566.
Content, A., Radeau, M., & Mousty, P. (1988). Données statistiques sur la structure orthographique du français [Statistical data on the orthographic structure of French]. Cahiers de Psychologie Cognitive/European Bulletin of Cognitive Psychology, special issue.
Cornett, O. (1967). Cued Speech. American Annals of the Deaf, 112, 3–13.
Dodd, B. (1980). The spelling abilities of profoundly prelinguistically deaf children. In U. Frith (Ed.), Cognitive processes in spelling (pp. 423–443). New York: Academic Press.
Dodd, B. (1987). Lip-reading, phonological coding and deafness. In B. Dodd & R. Campbell (Eds.), Hearing by eye: The psychology of lip-reading (pp. 177–189). London: Lawrence Erlbaum.
Dodd, B., & Hermelin, B. (1977). Phonological coding by the prelinguistically deaf. Perception and Psychophysics, 21, 413–417.
Dodd, B., & Murphy, J. (1992). Visual thinking. In R. Campbell (Ed.), Mental lives: Case studies in cognition (pp. 47–60). Oxford: Blackwell.
Dubois, J., Giacomo, M., Guespin, L., Marcellesi, C., Marcellesi, J. B., & Mevel, J. P. (1991). Dictionnaire de linguistique [Dictionary of linguistics]. Paris: Larousse.
Ducrot, O., & Todorov, T. (1972). Dictionnaire encyclopédique des sciences du langage [Encyclopedic dictionary of the sciences of language]. Paris: Seuil.
Ferrand, L., Segui, J., & Grainger, J. (1996). Masked priming of word and picture naming: The role of syllabic units. Journal of Memory and Language, 35, 708–723.
Ferrand, L., Segui, J., & Humphreys, G. (1997). The syllable's role in word naming. Memory and Cognition, 25, 458–470.
Fowler, C. A., Treiman, R., & Gross, J. (1993). The structure of English syllables and polysyllables. Journal of Memory and Language, 32, 115–140.
Frith, U. (1985). Beneath the surface of developmental dyslexia. In K. E. Patterson, J. C. Marshall, & M. Coltheart (Eds.), Surface dyslexia: Cognitive and neuropsychological studies of phonological reading (pp. 301–330). Hillsdale, NJ: Lawrence Erlbaum.
Gibbs, K. W. (1989). Individual differences in cognitive skills related to reading ability in the deaf. American Annals of the Deaf, 124, 214–218.
Gibson, E. J., Shurcliff, A., & Yonas, A. (1970). Utilization of spelling patterns by deaf and hearing children. In H. Levin & J. P. Williams (Eds.), Basic studies on reading (pp. 57–73). New York: Basic Books.
Gombert, J. E. (1992). Metalinguistic development. Chicago: University of Chicago Press.
Gombert, J. E., Bryant, P., & Warrick, N. (1997). Children's use of analogy in learning to read and to spell. In C. Perfetti, M. Fayol, & L. Rieben (Eds.), Learning to spell: Research, theory, and practice across languages (pp. 221–245). Hillsdale, NJ: Lawrence Erlbaum.
Goswami, U. (1988). Children's use of analogy in learning to spell. British Journal of Developmental Psychology, 6, 21–33.
Goswami, U. (1991). Learning about spelling sequences: The role of onsets and rimes in analogies in reading. Child Development, 62, 1110–1123.
Goswami, U. (1993). Toward an interactive analogy model of reading development: Decoding vowel graphemes in beginning reading. Journal of Experimental Child Psychology, 56, 443–475.
Goswami, U., & Bryant, P. (1990). Phonological skills and learning to read. Hillsdale, NJ: Lawrence Erlbaum.
Goswami, U., Gombert, J. E., & Fraca De Barrera, L. (1998, in press). Children's orthographic representations and linguistic transparency: Nonsense word reading in English, French and Spanish. Applied Psycholinguistics.
Hanson, V. L. (1982). Use of orthographic structure by deaf adults: Recognition of fingerspelled words. Applied Psycholinguistics, 3, 343–356.
Hanson, V. L. (1986). Access to spoken language and the acquisition of orthographic structure: Evidence from deaf readers. Quarterly Journal of Experimental Psychology, 38A, 193–212.
Hanson, V. L., Liberman, I. Y., & Shankweiler, D. J. (1984). Linguistic coding by deaf children in relation to beginning reading success. Journal of Experimental Child Psychology, 37, 378–393.
Hanson, V. L., Shankweiler, D., & Fisher, F. W. (1983). Determinants of spelling ability in deaf and hearing adults: Access to linguistic structure. Cognition, 14, 323–344.
Harris, M., & Beech, J. (1994). Reading development in prelingually deaf children. In K. Nelson & Z. Reger (Eds.), Children's language (pp. 181–202). Hillsdale, NJ: Lawrence Erlbaum.
Harris, M., & Coltheart, M. (1986). Language processing in children and adults: An introduction. London: Routledge & Kegan Paul.
Hirsh-Pasek, K. (1987). The metalinguistics of fingerspelling: An alternate way to increase reading vocabulary in congenitally deaf readers. Reading Research Quarterly, 22, 455–474.
Humblot, L., Fayol, M., & Longchamp, K. (1994). La copie de mots en CP et CE1 [Word copying in first and second grade]. Repères, 9, 47–59.
Jakobson, R. (1963). Essais de linguistique générale: Les fondations du langage [Essays in general linguistics: The foundations of language]. Paris: Editions de Minuit.
Kreidler, C. W. (1989). The pronunciation of English: A course book in phonology. Oxford: Blackwell.
Lambert, E., & Espéret, E. (1997). Syllables and phonemic complexity in word spelling. Eighth European Conference on Developmental Psychology, September 3–6, Rennes.
LaSasso, C. L., & Metzger, M. A. (1998). An alternate route for preparing deaf children for bi-bi programs: The home language as L1 and Cued Speech for conveying traditionally spoken languages. Journal of Deaf Studies and Deaf Education, 3, 265–289.
Lerot, J. (1993). Précis de linguistique générale [Outline of general linguistics]. Paris: Editions de Minuit.
Leybaert, J. (1980). Contribution à l'étude de la nature des représentations verbales chez les sourds [Contribution to the study of verbal representations in the deaf]. Unpublished master's thesis, Université Libre de Bruxelles, Belgium.
Leybaert, J. (1993). Reading in the deaf: The roles of phonological codes. In M. Marschark & D. Clark (Eds.), Psychological perspectives on deafness (pp. 203–227). London: LEA.
Leybaert, J., & Alegria, J. (1993). Is word processing involuntary in deaf children? British Journal of Developmental Psychology, 11, 1–29.
Leybaert, J., & Alegria, J. (1995). Spelling development in deaf and hearing children: Evidence for use of morphophonological regularities in French. Reading and Writing, 7, 89–109.
Leybaert, J., Alegria, J., & Fonck, E. (1983). Automaticity in word recognition and word naming by the deaf. Cahiers de Psychologie Cognitive, 3, 255–272.
Leybaert, J., Alegria, J., Hage, C., & Charlier, B. (1998). The effect of exposure to phonetically augmented lipspeech in the prelingual deaf. In R. Campbell, B. Dodd, & D. Burnham (Eds.), Hearing by eye II (pp. 283–301). Hove: Psychology Press.
Leybaert, J., & Charlier, B. (1996). Visual speech in the head: The effect of Cued Speech on rhyming, remembering, and spelling. Journal of Deaf Studies and Deaf Education, 1, 234–248.
Lichstenstein, E. (1998). The relationships between reading processes and English skills of deaf students. Journal of Deaf Studies and Deaf Education, 3, 80–134.
Locke, J. L. (1978). Phonemic effects in the silent reading of hearing and deaf children. Cognition, 6, 175–187.
Locke, J. L., & Locke, V. L. (1971). Deaf children's phonetic, visual, and dactylic coding in a grapheme recall task. Journal of Experimental Psychology, 89, 142–146.
Marschark, M. (1993). Psychological development of deaf children. New York: Oxford University Press.
Marsh, G., Friedman, M., Welch, V., & Desberg, P. (1981). A cognitive-developmental theory of reading acquisition. In T. Waller & D. McKinnon (Eds.), Reading research: Advances in theory and practice (Vol. 3, pp. 199–227). New York: Academic Press.
Mehler, J., Dommergues, J., Frauenfelder, U., & Segui, J. (1981). The syllable's role in speech segmentation. Journal of Verbal Learning and Verbal Behavior, 20, 298–305.
Morton, J. (1989). An information processing account of reading acquisition. In A. M. Galaburda (Ed.), From reading to neurons (pp. 43–65). Cambridge: MIT Press.
Nicholls, G. H. (1979). Cued Speech and the reception of spoken language. Unpublished master's thesis, McGill University.
Nicholls, G. H., & Ling, D. (1982). Cued Speech and the reception of spoken language. Journal of Speech and Hearing Research, 25, 262–269.
Padden, C. A. (1993). Lessons to be learned from the young deaf orthographer. Linguistics and Education, 5, 71–86.
Périer, O., Charlier, B., Hage, C., & Alegria, J. (1988). Evaluation of the effects of prolonged Cued Speech practice upon the reception of spoken language. In I. G. Taylor (Ed.), The education of the deaf: Current perspectives (Vol. 1, pp. 616–625). London: Croom Helm.
Prinzmetal, W., Treiman, R., & Rho, S. H. (1986). How to see a reading unit. Journal of Memory and Language, 25, 461–475.
Quinn, L. (1981). Reading skills of hearing and congenitally deaf children. Journal of Experimental Child Psychology, 32, 139–161.
Rapp, B. (1992). The nature of sublexical orthographic organization: The bigram trough hypothesis examined. Journal of Memory and Language, 31, 33–53.
Reynolds, R. N. (1986). Performance of deaf college students on a criterion-referenced modified cloze test of reading comprehension. American Annals of the Deaf, 131, 361–364.
Rieben, L., Meyer, A., & Perregaux, C. (1991). Individual differences and lexical representations: How five 6-year-old children search for and copy words. In L. Rieben & C. Perfetti (Eds.), Learning to read: Basic research and its implications (pp. 85–101). Hillsdale, NJ: Lawrence Erlbaum.
Rieben, L., & Saada-Robert, M. (1997). Relations between word-search strategies and word-copying strategies in children aged 5 to 6 years old. In C. A. Perfetti, L. Rieben, & M. Fayol (Eds.), Learning to spell: Research, theory, and practice across languages (pp. 295–318). Hillsdale, NJ: Lawrence Erlbaum.
Santa, J. L., Santa, C., & Smith, E. E. (1977). Units of word recognition: Evidence for the use of multiple units. Perception and Psychophysics, 22, 585–591.
Segui, J. (1984). The syllable, a basic perceptual unit in speech perception? In H. Bouma & D. G. Bouwhuis (Eds.), Attention and performance X: Control of language processes (pp. 165–181). Hillsdale, NJ: Lawrence Erlbaum.
Seidenberg, M. S. (1987). Sublexical structures in visual word recognition: Access units or orthographic redundancy? In M. Coltheart (Ed.), Attention and performance (pp. 245–263). London: Lawrence Erlbaum.
Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of visual word recognition and naming. Psychological Review, 96, 523–568.
Selkirk, E. (1984). On the major class features and syllable theory. In M. Aronoff & R. T. Oehrle (Eds.), Language sound structure (pp. 107–136). Cambridge: MIT Press.
Seymour, P. H. K. (1996). Framework of cross-linguistic analysis of reading acquisition, varieties of dyslexia, and linguistic awareness. Cost A8 Paris Workshop, November.
Seymour, P. H. K. (1997). Foundations of orthographic development. In C. A. Perfetti, L. Rieben, & M. Fayol (Eds.), Learning to spell: Research, theory, and practice across languages (pp. 319–337). Hillsdale, NJ: Lawrence Erlbaum.
Sterne, A. (1996). Phonological awareness, memory, and reading in deaf children. Unpublished doctoral thesis, University of Cambridge.
Taft, M., & Radeau, M. (1995). The influence of the phonological characteristics of a language on the functional units of reading: A study of French. Canadian Journal of Experimental Psychology, 49, 330–346.
Waters, G. S., & Doehring, D. G. (1990). Reading acquisition in congenitally deaf children who communicate orally: Insights from an analysis of component reading, language, and memory skills. In T. Carr & B. A. Levy (Eds.), Reading and its development: Component skill approaches (pp. 323–373). San Diego: Academic Press.