The Volta Review
Volume 113, Number 1
Spring 2013
ISSN 0042-8639
Alexander Graham Bell Association for the Deaf and Hard of Hearing
The Alexander Graham Bell Association for the Deaf and Hard of Hearing helps families, health care providers and education professionals understand childhood hearing loss and the importance of early diagnosis and
intervention. Through advocacy, education, research and financial aid, AG Bell helps to ensure that every child
and adult with hearing loss has the opportunity to listen, talk and thrive in mainstream society. With chapters
located in the United States and a network of international affiliates, AG Bell supports its mission: Advocating
Independence through Listening and Talking!
03  Editors' Preface
    Joseph Smaldino, Ph.D., and Kathryn L. Schmitz, Ph.D.

Research

05  Narrative Comprehension Abilities of Children with Typical Hearing and Children Using Hearing Aids: A Pilot Study
    Barbra Zupan, Ph.D., and Lynn Dempsey, Ph.D.

29  Differences in Spoken Lexical Skills: Preschool Children with Cochlear Implants and Children with Typical Hearing
    Joan A. Luckhurst, Ph.D., CCC-SLP; Cris W. Lauback, Psy.D.; and Ann P. Unterstein VanSkiver, Psy.D.

43  A Survey of Assessment Tools Used by LSLS Certified Auditory-Verbal Therapists for Children Ages Birth to 3 Years Old
    Deirdre Neuss, Ph.D., LSLS Cert. AVT; Elizabeth Fitzpatrick, Ph.D., LSLS Cert. AVT; Andrée Durieux-Smith, Ph.D.; Janet Olds, Ph.D.; Katherine Moreau, M.A., Ph.D.; Lee-Anne Ufholz, MLIS; JoAnne Whittingham, M.Sc.; and David Schramm, M.D.

57  Interactive Silences: Evidence for Strategies to Facilitate Spoken Language in Children with Hearing Loss
    Ellen A. Rhoades, Ed.S., LSLS Cert. AVT

Book Review

75  Building Comprehension in Adolescents
    Ellen A. Rhoades, Ed.S., LSLS Cert. AVT

Regular Features

77  Information for Contributors to The Volta Review
Permission to Copy: The Alexander Graham Bell Association for the Deaf and Hard of
Hearing, as copyright owner of this journal, allows single copies of an article to be made for
personal use. This consent does not extend to posting on websites or other kinds of copying,
such as copying for general distribution, for advertising or promotional purposes, for creating
new collective works of any type, or for resale without the express written permission of the
publisher. For more information, contact AG Bell at 3417 Volta Place, NW, Washington, DC 20007;
email [email protected]; or call (202) 337-5220.
The Alexander Graham Bell Association
for the Deaf and Hard of Hearing
Officers of the Association: President, Donald M. Goldberg, Ph.D., CCC-SLP/A, FAAA, LSLS Cert.
AVT (OH); President-Elect, Meredith K. Knueve, Esq. (OH); Immediate Past President, Kathleen S.
Treni (NJ); Secretary-Treasurer, Ted Meyer, M.D., Ph.D. (SC); Executive Director/CEO, Alexander T.
Graham (VA)
Board of Directors: Joni Y. Alberg, Ph.D. (NC); Corrine Altman (NV); Rachel Arfa, Esq. (IL); Evan
Brunell (MA); Wendy Deters, M.S., CCC-SLP, LSLS Cert. AVEd (IL); Kevin Franck, Ph.D., MBA,
CCC-A (MA); Catharine McNally (VA); Lyn Robertson, Ph.D. (OH)
The Volta Review
Editor: Joseph Smaldino, Ph.D.
Senior Associate Editor: Kathryn L. Schmitz, Ph.D.
Associate Editors: Tamby Allman, Ed.D.; Jessica Bergeron, M.Ed.; Paula Brown, Ph.D., CCC-SLP;
Jill Duncan, Ph.D., LSLS Cert. AVT; Rebecca M. Fischer, Ph.D., CCC-A, LSLS Cert. AVT; Marianne
Gustafson, M.S., CCC-SLP; Holly B. Keddington, M.S., M.A., Ph.D.; Jane Madell, Ph.D., CCC-A/SLP,
LSLS Cert. AVT; Kevin Miller, Ed.D., CCC-SLP, CED; Jan Allison Moore, Ph.D.; Katherine Pike,
Au.D.; Ellen Rhoades, Ed.S., LSLS Cert. AVT; Jane Seaton, M.S., CCC-A/SLP; Barbra Zupan, Ph.D.
Review Panelists: Anne Beiter, Sandy Bowen, Janet Brown, Teresa Caraway, Diana Christiana,
Terrell Clark, Rick Durity, Jan Gatty, Phoebe Gillespie, Melody Harrison, Susan Lane, Marybeth
Lartz, Susan Lenihan, John Luckner, Janie Luter, Jane Madell, Kevin Miller, Teri Ouellette, Anne
Oyler, Christina Perigoe, Katherine Pike, Ellen Rhoades, Sharon Ringwalt, Barb Robinson, Jack
Roush, Jane Seaton, Donna Sorkin, and Rosalie Yaremko
The Volta Review Staff: Managing Editor, Melody Felzien; Director of Communications and Marketing,
Susan Boswell
General Information: The Volta Review (ISSN 0042–8639) is published three times annually.
Periodicals postage is paid at Washington, DC, and other additional offices. Copyright © 2013 by
Alexander Graham Bell Association for the Deaf and Hard of Hearing, 3417 Volta Pl., NW,
Washington, DC 20007. Visit us online at www.listeningandspokenlanguage.org.
Postmaster: Send address changes to Alexander Graham Bell Association for the Deaf and Hard of
Hearing, Attn: Membership Department, 3417 Volta Pl., NW, Washington, DC 20007-2778.
Telephone – (202) 337-5220 (voice)/(202) 337-5221 (TTY). Claims for undelivered journal issues
must be made within four months of publication.
The Volta Review is sent to all premium, student, senior, household and lifetime members of the
association. Yearly individual membership dues are: Premium, $60; Household, $75; Senior, $40; Student, $30; Life membership, $1,000. Yearly dues for residents of other countries: add $15 U.S. The
Volta Review comprises $18 of membership dues. Subscriptions for schools, libraries, institutions are
$115 domestic and $135 international (postage included in both prices). Back issues, when available,
are $10 each plus shipping and handling.
The Volta Review is abstracted and indexed in Applied Social Science Index and Abstracts, Chicorel
Abstracts to Reading and Learning Disabilities, Cumulative Index to Nursing & Health Literature, Current
Index to Journals in Education, Educational Resources Information Center (ERIC), Elsevier’s Bibliographic
Databases, Exceptional Child Education Resources, Language Behavior Abstracts, Mental Health Abstracts,
Psychological Abstracts, and Rehabilitation Literature. It is abstracted in PsycINFO and is indexed in
Educational Index.
The Volta Review is the peer-reviewed scholarly research journal, published three times per year, of the
Alexander Graham Bell Association for the Deaf and Hard of Hearing, a not-for-profit organization.
Annual membership dues are $60 for individuals and $115 (domestic) for institutional subscriptions.
The publishing office of The Volta Review is located at 3417 Volta Place, NW, Washington, DC 20007-2778.
Editors’ Preface
Joseph Smaldino, Ph.D.,
Editor
Kathryn L. Schmitz, Ph.D.,
Senior Associate Editor
The Volta Review continues to expand its services to readers and authors alike.
In December 2012, The Volta Review began providing a continuing education
opportunity for self-study of the journal. For the first time, readers can earn
continuing education units (CEUs) for simply doing what they’ve always done:
reading and learning about the latest research supporting listening and spoken
language outcomes in children who are deaf or hard of hearing.
The primary goal of the CEU program is to provide accessible and cost-effective continuing education opportunities for professionals who are certified Listening and Spoken Language Specialists (LSLS®) and for those
who are seeking the credential. By offering a CEU opportunity, AG Bell is able
to take advantage of its new website by offering The Volta Review CEUs as a
feature of the Listening and Spoken Language Knowledge Center. This will
mark the first time that AG Bell is able to tie together The Volta Review,
professional development, and the website. CEU quizzes are currently
available for new editions. To access the CEU opportunity, please visit The
Volta Review online at ListeningandSpokenLanguage.org/TheVoltaReview.
AG Bell is also committed to incorporating research into best practices. As
part of this commitment, AG Bell will once again offer its Listening and Spoken
Language Symposium on July 18–20, 2013 in Los Angeles, Calif. This year’s
Symposium will focus on ‘‘Delivering Quality Services to Families,’’ a theme
that will be supported through presentations that integrate research,
technology, and best practices. Much of the research published in The Volta
Review is tied to clinical application, and what better place to meld research and application than at the premier professional development opportunity for professionals who support children with hearing loss? To learn more or
register, visit ListeningandSpokenLanguage.org.
This issue provides evidence to support many aspects of language
development for young children with hearing loss learning to listen and talk.
Two studies focus on speech and literacy, adding evidence to a growing body of
research that shows children who receive early intervention can and do have
language skills on par with their peers who have typical hearing. Another
study focuses on professionals and evaluation, highlighting the types of
evaluation tools practitioners use and exposing the need for more universal
assessment tools to gauge the language development of young children with
hearing loss. Finally, another study highlights how interactive silence can be
used to improve literacy and language skills.
The Volta Review has strived to be a publisher for new authors and professionals in the field. With the tools available at ListeningandSpokenLanguage.org/TheVoltaReview, we hope that you will find the right guidance for creating and
submitting a manuscript for peer review. As always, please don’t hesitate to
contribute to The Volta Review.
Sincerely,
Joseph Smaldino, Ph.D.
Professor, Department of Communications Sciences
Illinois State University
[email protected]
Kathryn L. Schmitz, Ph.D.
Associate Professor and Associate Dean for Academic Administration
National Technical Institute for the Deaf/Rochester Institute of
Technology
[email protected]
The Volta Review, Volume 113(1), Spring 2013, 5–27
Narrative Comprehension Abilities of Children with Typical Hearing and Children Using Hearing Aids: A Pilot Study
Barbra Zupan, Ph.D., and Lynn Dempsey, Ph.D.
This pilot study explores differences in oral narrative comprehension abilities
between children with moderate-to-severe sensorineural hearing loss using hearing
aids and their peers with typical hearing matched for age and gender. All children
were between 3.5 and 5 years of age. Participants were read a patterned, illustrated
storybook. Modified versions of this narrative were then read for a Joint Story Retell
task and an Expectancy Violation Detection Task to measure both comprehension of
key story elements and comprehension monitoring ability. Speech perception was also
assessed. Analyses revealed no statistically significant differences between children
with and without hearing loss, but interesting trends emerged. Explanations for the
stronger than expected performance of the children with hearing loss on the narrative
comprehension measures are discussed. Importantly, this pilot study demonstrates
that the joint story retell task and expectancy violation detection task are viable
measures of narrative comprehension for children with hearing loss.
Barbra Zupan, Ph.D., is an Associate Professor in the Department of Applied Linguistics at Brock University in Ontario, Canada. Lynn Dempsey, Ph.D., is an Associate Professor in the Department of Applied Linguistics at Brock University in Ontario, Canada. Correspondence concerning this manuscript may be addressed to Dr. Zupan at [email protected].

Narratives are prevalent and developmentally important in the everyday
lives of young children (Botting, 2010; Boudreau, 2008; Skarakis-Doyle &
Dempsey, 2008a). Typically, children are exposed to narratives early and
frequently, through both personal storytelling and storybook reading
(Boudreau, 2008; Crosson & Geers, 2001; Justice, Swanson, & Buehler, 2008;
Lynch et al., 2008; Skarakis-Doyle & Dempsey, 2008a). Hearing loss reduces a
child’s access and exposure to narratives, with potentially detrimental effects
on the development of narrative comprehension skills (Crosson & Geers, 2001;
Massaro & Light, 2004; Pittman, 2008). Deficits in prerequisite linguistic skills,
such as phonological coding (Carney & Moeller, 1998; Mayberry, Del Giudice,
& Lieberman, 2011; Robertson, Dow, & Hainzinger, 2006; Young et al., 1997),
and a reported preference for the visual portion of complex auditory-visual
(AV) signals (Schorr, Fox, van Wassenhove, & Knudsen, 2005), may also restrict
the development of narrative skills in children with hearing loss. However,
because studies examining how children with hearing loss cope with verbal
discourse are rare (Ibertsson, Hansson, Maki-Torkko, Willstedt-Svensson, &
Sahlen, 2009), the extent and nature of narrative deficits experienced by this
population is not well understood. Given that narrative skills are interconnected with the development of language and literacy, and are thereby a crucial
medium for academic and social learning (Botting, 2002; Boudreau, 2008; Paris
& Paris, 2003; Petersen, Gillam, & Gillam, 2008), investigations into the ability
of young children with hearing loss to engage in and learn from narratives are a
vital component of an overall analysis of the development and functioning of
these children.
The Impact of Hearing Loss on the Comprehension
of Oral Narratives
Comprehending an oral narrative requires that children create a complete
and accurate mental representation of what they hear, and then make any
necessary inferences required for accurate interpretation (Kendeou, Bohn-Gettler, White, & van den Broek, 2008; Lynch et al., 2008). The mental
representation must contain all key narrative elements (e.g., agents, locations,
objects, actions) arranged in the appropriate temporal and causal sequence
(Skarakis-Doyle & Dempsey, 2008a; van den Broek et al., 2005). Hence,
narrative comprehension depends on the development of a myriad of other
knowledge processes, including semantic and syntactic knowledge, as well as
knowledge about people’s mental states, behavior, and daily events (Lynch et
al., 2008; Nelson, 1996; Skarakis-Doyle, Dempsey, & Lee, 2008; Stein & Albro,
1996). The ability to form narrative representations that contain story-specific
information typically develops between the ages of 3 and 5 years (Dempsey,
2005; Stein & Albro, 1996; Trabasso & Nickels, 1992; van den Broek, Lorch, &
Thurlow, 1996).
Children who use hearing aids may reach this milestone later than their
peers, despite advances in identification, intervention practices, and improvements in hearing technology, all of which have contributed to improved
acquisition of speech and language skills (Justice et al., 2008; Worsfold, Mahon,
Yuen, & Kennedy, 2010). Deficits in vocabulary and grammatical comprehension (Carney & Moeller, 1998; Fagan & Pisoni, 2010; Robertson et al., 2006;
Young et al., 1997) and limitations in world experience (Easterbrooks,
Lederberg, & Connor, 2010) persist, potentially impeding narrative language
comprehension. This is because children who use hearing aids continue to
receive a distorted auditory signal, complicating the perception and
discrimination of verbal messages (Zupan, 2013). This makes the integration
of the vocabulary, grammar, and world knowledge contained in oral narrative
difficult (Lynch et al., 2008; Skarakis-Doyle & Dempsey, 2008a; van den Broek et
al., 2005).
Children with hearing loss may also face challenges in oral narrative
comprehension because of their reported tendency to prefer visual information
when faced with a complex AV signal (Bergeson, Houston, & Miyamoto, 2010;
Erber, 1972; Schorr et al., 2005; Seewald, Ross, Giolas, & Yonovitz, 1985),
particularly when their access to sound comes later in life or is less than ideal
(Bergeson, Pisoni, & Davis, 2005; Rouger, Fraysse, Deguine, & Barone, 2007;
Tremblay, Champoux, Lepore, & Theoret, 2010). Napolitano and Sloutsky
(2004) suggest that a preference for auditory information is important for
language processing because it allows children to focus on the transient and
continually changing sounds that comprise words. Thus, inattentiveness to the
auditory portion of an AV signal may negatively affect how children with
hearing loss process language, including narrative forms.
Comprehension monitoring may also be disadvantaged by the use of hearing
aids. Comprehension monitoring is an important component of narrative
comprehension that requires the ability to detect and correct problems in the
narrative representations formed (Skarakis-Doyle & Dempsey, 2008a). Research has shown that typically developing children can detect violations made
to familiar script-based story material as early as 30 months of age (Skarakis-Doyle, 2002). By 4 years of age, they are able to recognize disruptions to story
goals and other story errors (Hudson, 1988; Skarakis-Doyle & Dempsey, 2008b).
As is the case with primary narrative comprehension, comprehension
monitoring requires a coalition of skills including receptive vocabulary,
grammatical knowledge, knowledge of truth conditions, understanding of
narrative content (i.e., key elements), and verbal memory span (Skarakis-Doyle, Dempsey, Campbell, Lee, & Jaques, 2005). Given that many children
with hearing loss have deficits in these component skills, problems with
comprehension monitoring might also be expected.
In a review of the literature, only one study was found to have examined
narrative comprehension in children with hearing loss who use hearing aids. In
that study, Robertson and colleagues (2006) used a story retell procedure that
measured both comprehension and production abilities in a group of 10
children with hearing loss (7 hearing aid users; 3 cochlear implant users) and 11
children with typical hearing. Overall, analyses showed that both groups of
children were able to engage in story retell when prompted by their parents,
providing a similar number of words and phrases in their narratives. However,
the children with hearing loss were found to need more parental facilitation
(e.g., leading questions) than their peers with typical hearing. It is unclear
whether this is indicative of poorer narrative comprehension skills or whether
it reflects underlying deficits in other skills required by the task (e.g., verbal
working memory and verbal formulation), or both. As Skarakis-Doyle &
Dempsey (2008a) have pointed out, because traditional story retell tasks
require a complex array of skills, conclusions about narrative comprehension
may be confounded, resulting in underestimates of children’s narrative
comprehension skills.
The Current Study
In this pilot study, narrative comprehension measures that place limited
demands on working memory and expressive language relative to traditional
story retell tasks were used—one for primary comprehension (Joint Story
Retell; JSR) and one for comprehension monitoring (Expectancy Violation
Detection Task; EVDT; Skarakis-Doyle & Dempsey, 2008a; Skarakis-Doyle et
al., 2008). Both measures have been shown to be reliable and valid measures of
oral narrative comprehension for typically developing children and children
with language delays, ages 30 months to 5 years (Dempsey & Skarakis-Doyle,
2001; Skarakis-Doyle, 2002; Skarakis-Doyle & Dempsey, 2008b; Skarakis-Doyle
et al., 2008), but have never before been used with children with hearing loss.
Speech perception was also examined in order to evaluate the influence of
processing preferences on narrative comprehension abilities. As discussed
previously, children with hearing loss have been shown to rely more on the
visual portion of AV speech signals, while children with typical hearing tend to
rely on the auditory portion (Schorr et al., 2005). Since storybook comprehension requires the reader to process a verbal message while simultaneously
attending to story illustrations, the authors of this study wanted to explore
whether AV speech perception was related to the narrative comprehension
abilities of children who use hearing aids.
Specifically, this pilot study addressed the following research questions:
1. Do young children who use hearing aids differ from typically developing
children in their comprehension of an orally presented story, as indicated
by the number of key story elements supplied during a JSR? The authors
hypothesize that children using hearing aids would identify fewer key
semantic elements in the JSR task than children with typical hearing.
2. Do young children who use hearing aids differ from typically developing
children in their ability to monitor their comprehension of a story, as
indicated by the number of story violations detected during a re-reading
of a target story? The authors hypothesize that children using hearing aids
would detect fewer violations in the EVDT task than children with typical
hearing.
3. Are differences in processing preferences evident between children who use hearing aids and children with typical hearing, and do these differences correlate with narrative comprehension skills? The authors
hypothesize that a preference for the visual portion of an AV speech
stimulus by children with hearing loss would interfere with their ability to
form a complete and accurate mental representation of an oral narrative.
Method
Participants
Five children with hearing loss (2 males; 3 females) and 5 children with
typical hearing, matched for gender and age, participated in this pilot study.
Children with hearing loss used hearing aids and ranged in age from 41 to 67
months (M = 54.20, SD = 11.82). The age range for children with typical hearing was 40 to 65 months (M = 52.20, SD = 11.39). An independent samples t-test confirmed that the two groups of children were successfully age-matched, t(8) = 0.27, p = .79. All children with hearing loss had been diagnosed with at least a
moderate sensorineural hearing loss in both ears. Table 1 shows the level of
hearing loss, age at which hearing aids were received, duration of hearing aid
use, and intervention type for each child. All of the children with hearing loss
had parents with typical hearing and no identified additional developmental
challenges. Participants were recruited through local agencies that offer
support and clinical services to children with hearing loss. Children with
typical hearing were recruited through local daycare, preschool education, and
community facilities.
Table 1. Descriptive information for children with hearing loss using hearing aids

Participant   Level of Hearing Loss                            Age HAs Received (months)   Length of HA Use (months)   Age at Time of Study (months)   Type of Intervention
HL1           Bilateral, moderate-severe                        4                          50                          54                              Auditory-Verbal
HL2           Bilateral, moderate-severe                        6                          35                          41                              Auditory-Verbal
HL3           Bilateral, moderate-severe                        4                          40                          44                              Auditory-Oral
HL4           Bilateral, moderate                              18                          47                          65                              Auditory-Verbal
HL5           Left ear, profound; right ear, moderate-severe   42                          25                          67                              Auditory-Verbal
Mean                                                           14.8                        39.4                        54.2
SD                                                             16.28                        9.96                       11.82

Note. HL = hearing loss; HA = hearing aid; SD = standard deviation. Ages and durations are recorded in months.

Speech and Language Measures

All children completed standardized speech and language testing. Receptive vocabulary was assessed using the Peabody Picture Vocabulary Test–III (PPVT-III; Dunn & Dunn, 1997) and receptive language was assessed using the
receptive subtest of the Test of Early Language Development–3 (TELD-3;
Hresko, Reid, & Hammill, 1999). Researchers also administered the MacArthur
Communicative Development Inventory–III (MCDI-III; Fenson et al., 2007), a
parental report tool that measures global language ability. The MCDI-III
consists of a 100-item expressive vocabulary checklist, a 12-item grammatical
structures checklist, and 12 general questions regarding children’s use of
language. Speech sound production skills were assessed with the Goldman-Fristoe Test of Articulation–2 (GFTA-2; Goldman & Fristoe, 2000). Speech
sound production errors involving sounds differing by one phonetic feature
(e.g., /d/ vs. /g/) are common in children with hearing loss and could
potentially lead researchers to incorrectly interpret verbal responses (Mildner,
Sindija, & Zrinski, 2006). Therefore, this testing was completed so researchers
could be confident that the verbal responses provided by both groups of
children were being accurately decoded by examiners. Parents of children with
hearing loss were also asked to complete the Meaningful Auditory Integration
Scale (MAIS; Robbins, Renshaw, & Berry, 1991). The MAIS consists of 11
questions about the consistency with which the hearing device is used and how
children respond to sound in everyday environments while using the hearing
device.
Stimuli
Narrative Comprehension
Researchers employed Splish Splash (Skarakis-Doyle & Wootton, 1995), a
patterned story (i.e., repeated syntactic frames, vocabulary, and episodes)
revolving around bath time. This story was specifically written for use with the
JSR and EVDT tasks and is told in five episodes organized around a muddy
girl’s repeated attempts to evade a bath. Because the story event is familiar,
children can utilize their bath time scripts to build narrative representations;
this provides an important memory scaffold for young participants (Skarakis-Doyle et al., 2008). The book contains color illustrations and eight pages of text
(see Skarakis-Doyle et al., 2008, for detailed structural information and a
sample of the story).
Primary story comprehension was measured with the JSR using a modified
version of the Splish Splash story in which 10 of the original story elements were
omitted from the story to create a cloze task [e.g., So she___(put) her big toe into
the bathtub...]. Children who have constructed an accurate representation of
the narrative should be able to use their understanding of the context to provide
the appropriate story elements (Skarakis-Doyle & Dempsey, 2008a). Comprehension monitoring was assessed with the EVDT using a violation version of
the story (Skarakis-Doyle, 2002; Skarakis-Doyle & Milosky, 1999) in which eight
of the original story elements were altered or violated (e.g., One day a little frog
named Sarah...). The ability to detect narrative violations indicates that a child
is making accurate judgments about the success of his/her comprehension
(Skarakis-Doyle, 2002; Skarakis-Doyle & Dempsey 2008b). The cloze and
violation versions of the story are slightly shorter than the original story but
resemble the original in all other respects (see Skarakis-Doyle et al., 2008, for
more detailed information on and further samples of the cloze and violation
versions of Splish Splash).
Speech Perception
The stimuli created for the speech perception task were designed for use
within a McGurk paradigm (McGurk & MacDonald, 1976), in which a visual
phonemic segment (e.g., /ga/) is combined with an incongruent auditory
segment (e.g., /ba/), resulting in an altered, integrated percept (e.g., /da/). A
single female speaker was recorded producing the phonemic segments /aga/
and /aba/. Visual recording was completed using a SONY DCR-DVD 108
Handycam digital video camera at a distance of 1.5 meters so only the speaker’s
head and shoulders were included in the visual frame. The speaker’s voice was
recorded using an Olympus DS-30 Digital Voice Recorder. The visual and
auditory segments were then edited and combined as either congruent
phonemic segments (e.g., auditory /aba/ paired with visual /aba/) or
incongruent phonemic segments (e.g., auditory /aba/ paired with visual /aga/) using Magix Movie Edit Pro 12 software.
Procedure
Testing took place in participant homes, a university lab, and at participant
schools, depending on parental preference. All testing was conducted in a
quiet, well-lit, and distraction-free room, and lasted between 1.0 and 2.5 hours.
Portions of the testing were videotaped for later clarification and analysis. To
maintain attention and engagement throughout the session, standardized
language measures were interspersed with study tasks in a standard pattern
across participants (see Figure 1). While children completed the study protocol,
parents and caregivers completed a series of questionnaires. The literacy
questionnaire, developed by Dempsey and Skarakis-Doyle (2001) for other
studies of oral narrative comprehension, contains questions about the
frequency of storybook reading both within and outside the home and parental
book reading style. Since children’s experience with narratives could
potentially impact their performance on oral narrative measures, collecting
information about the frequency and/or nature of their narrative exposure was
important. All parents and caregivers also completed the MCDI-III, providing
important global language information. Parents and caregivers of children
with hearing loss additionally completed the MAIS.
Figure 1. Sequence of standard language measures administered and study tasks.
Joint Story Retell Task
Prior to reading the story, children completed a 19-item picture-point
vocabulary comprehension test composed of words extracted from the test
story (8 verbs, 8 nouns, 2 locative adverbs, and 1 adjective). This test was
administered as an indicator of the participants’ familiarity with the story
vocabulary outside of the narrative context. Then, children were read Splish
Splash while seated next to the examiner. Following the story, children
completed four practice cloze sentences to ensure they understood the
demands of the cloze task. All participants successfully completed at least
one practice item and, thus, were permitted to continue with the experimental
task. After completing the practice items, children were told that the examiner
would read Splish Splash again, but that this time they could ‘help’. The
examiner then began the JSR task, reading the cloze version of the story and
pausing at the 10 predetermined points to allow children to supply the next
word(s) (e.g., I need my ____ ). At each ‘blank’ in the story, the examiner paused
for 5 seconds and allowed the child an opportunity to supply the missing
content. If the child did not respond, the examiner repeated a short section of
the text preceding the blank and paused again. If after two attempts no
response was elicited from a child, the examiner completed the blank herself
and continued with the story. No explicit feedback was provided to children on
their performance during the task.
The maximum raw score for the JSR task is 10, one point for each correct item.
Responses that conveyed the same meaning as the words in the original story
were scored correct. Raw scores were converted to percentage scores for data
analysis.
Expectancy Violation Detection Task
Comprehension monitoring was tested using the EVDT. Participants first
completed a practice period during which they were read a short story
containing the names of several common animals. As the examiner read the
story, she deliberately misnamed the animals and prompted the children to
catch her mistakes. Verbal reinforcement was provided for successful catches.
All children demonstrated understanding of the EVDT by protesting at least
one violation during the practice period, and thus were eligible to continue to
the test phase. In this phase the examiner told children that she would read the
Splish Splash story again, but this time she ‘‘might be silly or make some
mistakes.’’ Children were instructed to try to catch these mistakes. The
examiner then read the violation version of the story. No feedback on
performance was provided during the task.
The EVDT contains a total of eight violations and thus has a maximum score
of 8. Responses to the EVDT were scored from videotape using procedures
developed by Skarakis-Doyle (2002). To merit a point, responses to violations
had to occur within 5 seconds of the violation and no later than the end of the
phrase immediately following the violation. Both verbal and nonverbal
responses were accepted. Verbal responses included single-word protests
(e.g., ‘‘No!’’), repetitions of the violation (e.g., ‘‘Mom!’’), and corrections (e.g.,
‘‘Not Mom. Sarah!’’). Acceptable nonverbal responses included changes in eye
gaze, facial expression, and body position. Skarakis-Doyle (2002) previously
demonstrated that these nonverbal behaviors are reliable indicators of
detection. Total score out of 8 on the task was obtained by determining the
number of violations responded to by each child. This raw score was then
converted to a percentage score for data analysis.
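The scoring rule described above can be summarized in a short sketch (Python; the timestamps below are hypothetical, not study data, and the additional requirement that a response occur before the end of the phrase following the violation is omitted because it depends on transcript timing):

    # Hypothetical illustration of the 5-second EVDT scoring window; not the authors' procedure.
    violation_times = [12.0, 30.5, 48.2, 66.0, 81.4, 95.9, 110.3, 127.6]  # seconds into the reading
    response_times = [13.1, 31.0, 84.0, 111.2]                            # child's verbal/nonverbal protests

    def score_evdt(violations, responses, window=5.0):
        detected = 0
        for v in violations:
            # Credit a violation if any response falls within the 5-second window after it.
            if any(v < r <= v + window for r in responses):
                detected += 1
        return detected, 100.0 * detected / len(violations)

    raw, pct = score_evdt(violation_times, response_times)
    print(f"Detected {raw} of {len(violation_times)} violations ({pct:.1f}%)")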
Speech Perception Task
Immediately prior to beginning the speech perception task, children
participated in a training task that taught them to associate the segments
/aba/, /ada/, and /aga/ with three different toys, and then with pictures of
those toys. A variety of strategies were used to help promote association of each
sound with its corresponding toy and then with its picture. These included
asking the children to repeat the name of the toy following a model and having
them point to the corresponding toy and/or picture on request. The average
length for the training period was approximately 1 minute. Once a child had
accurately pointed to the picture associated with each of the three sound
segments five times, the test phase began.
To complete the speech perception task, children were seated comfortably in
front of a computer with a 14.5 inch monitor placed approximately 50
centimeters from them on the table. External speakers were placed on either
side of the screen and set to play auditory stimuli at 70 dB SPL. A key-pad with
three buttons was placed on the table in front of the child. Above each response
option (i.e., /aba/, /ada/, and /aga/) was a picture of the corresponding toy
from training. To ensure that children were comfortable with the computer task
used in the test phase and reliably associating each sound segment with the
corresponding picture on the key-pad, practice items were administered. In the
practice block, children were asked to confirm their selections before the
examiner entered their responses on the key-pad. The practice block included a
total of 10 randomized stimuli comprised of four congruent and six
incongruent phonemic segments. Stimuli used in the practice block were not
used in the test blocks. For both the practice and test blocks, children were
instructed to watch and listen to the video of a lady who was going to appear on
the screen and ask for a toy. They were instructed to point to the picture of the
toy corresponding to the lady’s request. The examiner then pressed the button
on the key-pad underneath that picture. This two-step response procedure was
implemented to reduce impulsivity and the potential of unintended responses.
All children correctly identified at least three of the four congruent stimuli in
the practice block, indicating task comprehension.
The speech perception task consisted of two blocks of stimuli, each consisting
of 10 congruent trials and 25 incongruent trials, for a total of 20 congruent and
50 incongruent stimuli. The blocks were administered in a counter-balanced
order across participants and stimuli within each block were randomized using
Superlab 4.5 software (Cedrus Corporation, 2011). Each block took approximately 5 minutes to complete. Responses were recorded both manually and on
videotape to ensure accuracy in scoring and prevent data loss associated with
equipment error.
Responses to congruent stimuli in the speech perception task were scored as
correct and assigned 1 point if children correctly matched the corresponding
toy to the AV stimulus presented. Maximum score for these stimuli was 20. The
raw score out of 20 was then converted to a percentage score for data analysis.
Incongruent stimuli had no correct responses. Instead, responses were coded
into one of three categories: (1) a response consistent with the visual portion of
the stimulus (/aga/); (2) a response consistent with the auditory portion of the
stimulus (/aba/); or (3) a response indicating integration of the two sources of
information (/ada/). Scoring was completed by calculating the total number of
auditory based responses, number of visually based responses, and number of
integrated responses provided by each participant. Each of these scores was
then converted to a percentage score that represented the proportion of total
responses that fell into each response category.
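As an illustration of this coding scheme, the following sketch (Python; the response list is hypothetical, not study data) tallies incongruent-trial responses into the three categories and converts the counts to proportions:

    from collections import Counter

    # Auditory /aba/ paired with visual /aga/; /ada/ reflects an integrated (McGurk) percept.
    CATEGORY = {"aba": "auditory", "aga": "visual", "ada": "integrated"}

    # Hypothetical responses from one child across incongruent trials (truncated for space).
    responses = ["ada", "aba", "ada", "aga", "aba", "ada", "aba", "ada", "aga", "aba"]

    counts = Counter(CATEGORY[r] for r in responses)
    total = len(responses)
    proportions = {cat: 100.0 * counts.get(cat, 0) / total
                   for cat in ("auditory", "visual", "integrated")}
    print(proportions)  # e.g., {'auditory': 40.0, 'visual': 20.0, 'integrated': 40.0}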
Results
Language, Literacy, and Speech Sound Production Skills
Given the potential contribution of receptive vocabulary and grammatical
skills to the successful construction of narrative representations (Skarakis-Doyle & Dempsey, 2008a; Stein & Albro, 1996), scores on the PPVT-III, the
receptive subtest of the TELD-3, and the MCDI-III were compared between
groups. Children in both groups scored within normal limits for their age on
both the PPVT-III and TELD-3. No statistically significant differences were
found between groups when independent samples t-tests were conducted for
each measure. Mean standard scores, standard deviations, and results of the t-tests are presented in Table 2.
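For readers less familiar with this analysis, each group comparison amounts to an independent samples t-test on one measure. A minimal sketch follows (Python with SciPy; the scores are hypothetical and the authors' statistical software is not specified):

    from scipy import stats

    # Hypothetical standard scores for n = 5 children per group.
    hearing_loss = [95, 104, 98, 101, 103]
    typical_hearing = [90, 125, 118, 99, 106]

    result = stats.ttest_ind(hearing_loss, typical_hearing)  # assumes equal variances by default
    df = len(hearing_loss) + len(typical_hearing) - 2
    print(f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.2f}")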
A review of parents’ responses to the literacy questionnaire indicated high
levels of narrative exposure in both groups. Parents reported that children in
both groups enjoyed reading books and were read to at least once daily. In
addition, over half of the parents across both groups reported engaging their
children in the stories using various literacy strategies, such as question-asking
and intentional plot violations. Overall, there were striking similarities
between the two groups in terms of their literacy exposure.
Speech sound production skills were assessed using the GFTA-2. A variety of
articulation errors were noted, many of which were age appropriate.
Table 2. Means, standard deviations of standard scores, and results of independent samples t-tests for language tests by group

            Hearing Loss            Typical Hearing
            M          SD           M          SD           Group comparison
PPVT-III    100.20      7.50        107.60     17.63        t(8) = .86, p = .41
TELD-3      119.20     14.32        113.20     16.33        t(8) = .62, p = .55
MCDI-III     76.78     12.91         73.0      28.21        t(8) = .38, p = .74
Researchers were most interested in the children’s ability to produce /b/, /d/,
and /g/ since the speech perception task was based on these sounds. All of the
children with typical hearing clearly produced these three target phonemes at
the beginning, middle, and end of words on the GFTA-2. This was also true of all
but 1 of the children with hearing loss. That child was only able to produce
/g/ correctly in the initial position of words; errors were observed in medial and
final position. As this child consistently pointed to the toys associated with the
target sounds in the speech perception task (/aba/, /ada/, /aga/), researchers
were confident that he perceived the differences among these three phonemes.
Oral Narrative Comprehension and Monitoring
On average, children with hearing loss correctly pointed to the target
vocabulary item in the 19-item vocabulary pretest 83% of the time, while
children with typical hearing were correct 74% of the time. These results
suggested adequate familiarity with the vocabulary contained in the story.
Despite their weaker vocabulary results, children with typical hearing
obtained slightly higher scores on the JSR than children with hearing loss
(Table 3). However, a one-way analysis of variance (ANOVA), conducted to compare task performance, revealed that the difference between groups was not statistically significant [F(1, 9) = .05, p = .83], suggesting that, on average, children in the two groups had similar overall comprehension of the story.
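With only two groups, a one-way ANOVA of this kind is equivalent to an independent samples t-test (F equals t squared), as the following sketch demonstrates (Python with SciPy; the percentage scores are hypothetical, not the study data):

    from scipy import stats

    # Hypothetical JSR percentage scores for the two groups (n = 5 each).
    jsr_hearing_loss = [20, 30, 40, 50, 40]
    jsr_typical_hearing = [10, 30, 60, 70, 40]

    anova = stats.f_oneway(jsr_hearing_loss, jsr_typical_hearing)
    ttest = stats.ttest_ind(jsr_hearing_loss, jsr_typical_hearing)
    print(f"F(1, 8) = {anova.statistic:.2f}, p = {anova.pvalue:.2f}")
    print(f"t(8) squared = {ttest.statistic ** 2:.2f}")  # matches F when there are two groups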
A reverse pattern from performance on the JSR was found with the EVDT.
For the EVDT, children with hearing loss showed slightly better performance,
recognizing violations 47% of the time, compared to children with typical
hearing, who recognized violations only 32% of the time. However, again, the
difference in mean test scores was not statistically significant, F(1, 9) = .39, p = .55. Mean overall responses and standard deviations for performance on the EVDT are shown in Table 3.

Table 3. Responses to narrative comprehension measures by children with hearing loss and children with typical hearing

        Hearing Loss            Typical Hearing
        M (%)      SD           M (%)      SD
JSR     36.0       20.74        42.0       30.33
EVDT    47.5       32.36        32.5       42.94

Speech Perception

To ensure that children were attending throughout the speech perception task and not just randomly pointing to pictures of the toys, researchers analyzed responses to the congruent AV stimuli. Interestingly, children with
hearing loss showed more accurate identification of these segments, correctly
identifying them 81% of the time (SD = 15.57) compared to 77% (SD = 29.07) by children with typical hearing. A one-way ANOVA indicated that this difference was not statistically significant, F(1, 9) = .074, p = .79. These accuracy
levels are well above those expected for chance responding (i.e., 33%), verifying
the credibility of participant responses. In addition, these results indicate that
participants in both groups were able to perceive and identify each of the
phoneme segments included in the speech perception task.
Figure 2 shows the proportions of auditory, visual, and integrated response
types to the incongruent speech segments for each group. To explore
perceptual modality preferences within each group of children, paired sample
t-tests were conducted to compare the mean proportion of each response type
provided. Familywise error rate was controlled across these tests using Bonferroni's approach (α = .02). As seen in Figure 2, children with hearing loss
provided a similar proportion of auditory-based (37%), visually-based (24%),
and integrated responses (40%). This observation was confirmed by a lack of
any significant differences on paired-sample t-tests comparing response types
for this group. Children with typical hearing showed the greatest difference in
response types, providing many more auditory-based responses (72%) than
visually-based ones (11%). However, this difference only approached
significance, t(4) = 3.06, p = .04.
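The within-group comparison just described can be sketched as three paired-sample t-tests evaluated against a Bonferroni-adjusted alpha (Python with SciPy; the per-child proportions below are hypothetical, not the study data):

    from itertools import combinations
    from scipy import stats

    # Hypothetical percentage of responses in each category for five children in one group.
    props = {
        "auditory":   [70, 65, 80, 72, 74],
        "visual":     [10, 15,  5, 12, 11],
        "integrated": [20, 20, 15, 16, 15],
    }

    alpha = 0.05 / 3  # Bonferroni correction across the three pairwise comparisons (about .02)
    for a, b in combinations(props, 2):
        result = stats.ttest_rel(props[a], props[b])
        verdict = "significant" if result.pvalue < alpha else "not significant"
        print(f"{a} vs. {b}: t(4) = {result.statistic:.2f}, p = {result.pvalue:.3f} ({verdict})")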
To compare response types provided across the two groups, a 3 × 2 ANOVA was conducted with response type (auditory-based, visually-based, integrated) as the within-subjects factor and hearing status as the between-subjects factor. Hearing status did not significantly impact response type, F(2, 16) = 1.65, p = .22, ηp² = .17, and no main effect of response type was indicated, F(2, 16) = 2.54, p = .11, ηp² = .24. These results were somewhat surprising given
the within-group differences reported above, but may simply be a reflection of
small sample size.
Discussion
The primary aim of this pilot study was to compare the oral narrative
comprehension abilities of children who use hearing aids and rely on spoken
Figure 2. Proportion of auditory-based, visually-based, and integrated responses
selected by children with hearing loss and children with typical hearing.
communication to those of children with typical hearing. This study was novel
in its use of nontraditional narrative comprehension measures that place few
demands on working memory and require limited expressive language.
Adding further to the novelty of this study was the use of a tool that specifically
measures comprehension monitoring ability. Due to the small number of
participants in this pilot, the data cannot be generalized and should be
interpreted carefully. Nevertheless, the overall picture that emerged suggests
that some children with moderate-severe hearing loss who rely on hearing aids
to access sound can understand oral narratives at a level on par with their peers
who have typical hearing. In addition, these children may differ very little from
their peers in domains known to contribute to oral narrative comprehension
(e.g., vocabulary comprehension, literacy experience).
The first aim of this pilot study was to investigate whether young children
who use hearing aids differ from children with typical hearing in their ability to
supply key story elements in a joint retelling task (i.e., JSR), an indication that a
child has grasped vital agents, locations, actions, and objects in the narrative
(Dempsey & Skarakis-Doyle, 2001; Skarakis-Doyle & Dempsey, 2008a).
Researchers expected that the distortion existing in the hearing aid signal
(Friesen, Shannon, Baskent, & Wang, 2001; Turner et al., 1999; Zupan, 2013)
would limit the ability of the children with hearing loss to attend to, process,
and integrate key linguistic elements during an ongoing oral narrative. Thus,
researchers anticipated that these children would identify fewer key story
elements than their peers with typical hearing. Prior research has, in fact,
reported deficits in the narrative abilities of children with hearing loss,
specifically noting that their story retells lacked important key elements
(Luetke-Stahlman, Griffiths, & Montgomery, 1998). Although in the present
study the performance of children with hearing loss was slightly poorer than
that of the matched group, the difference was not significant, suggesting that
children with and without hearing loss are similarly able to supply target
word(s) within a cloze test structure.
In addition to comparing oral narrative comprehension abilities, this pilot
study also aimed to compare comprehension monitoring abilities of these two
groups. Since the EVDT requires more sophisticated skills than the JSR,
researchers again hypothesized that children with hearing loss would find this
task more difficult than children with typical hearing. This hypothesis was not
supported. In fact, though the difference was not significant, the children with
hearing loss actually detected more violations than the children with typical
hearing.
Although the results of the oral narrative comprehension group comparisons
were at first surprising, there are several potential explanations. First, the
children with hearing loss who participated in this investigation were all found
to have receptive vocabulary and grammatical abilities within age expectations. In fact, there were no statistically significant differences between the
groups on the PPVT-III, TELD-3, and MCDI-III. Since the linguistic skills
measured by these tests contribute to narrative comprehension (Nelson, 1996;
Stein & Albro, 1996), the stronger than expected performance by the children
with hearing loss on the oral narrative comprehension measures may, in part,
be attributable to the strength of their linguistic skills. The reverse pattern has
been reported in children known to have language delays. Skarakis-Doyle and
colleagues (2008) reported that children with language delays, who had
impaired receptive vocabulary and grammatical skills, also performed poorly
relative to typically developing children on the JSR and EVDT. This finding is
consistent with the suggestion that receptive vocabulary and grammatical
skills are key components of oral narrative comprehension. The fact that the
children with hearing loss in this investigation had age-appropriate abilities in
these domains may then help account for their stronger than expected
performance on the JSR and EVDT.
Intervention-type may also help account for the stronger than expected
performance of children with hearing loss on the oral narrative comprehension
measures. All children with hearing loss who participated in the current study
were receiving intervention focused on the spoken communication modality: 4
were receiving auditory-verbal therapy and 1 was receiving auditory-oral
therapy. Both types of intervention place considerable emphasis on attending
to the auditory signal, which encourages active listening through a variety of
strategies like acoustic highlighting and the use of parentese (Geers, Nicholas,
& Sedey, 2003; Hogan, Stokes, White, Tyszkiewicz, & Woolgar, 2008; Musselman & Kirkcaali-Iftar, 1996). As success on the JSR and EVDT requires that
listeners pay close attention to specific aspects of the auditory signal, an
intervention focused on audition may benefit children with hearing aids faced
with these tasks. Children with hearing loss receiving interventions focused on
audition are initially bombarded with enhanced suprasegmentals through
songs and other activities that focus on recognizing auditory patterns in words,
phrases, and sentences (Estabrooks, 2006). Perhaps, then, the children with
hearing loss were better able to detect violations in the EVDT-modified story
because those violations disrupted the suprasegmental pattern they had
associated with the patterned story.
Responses to the speech perception task must also be considered as a factor in
oral narrative comprehension performance. Although significant differences
between response types did not emerge within or between groups, the pattern
of performance warrants discussion. Given the results of a previous
investigation that reported a preference for visual information in children
with cochlear implants (Schorr et al., 2005), researchers hypothesized that the
children with hearing loss in the current study would provide significantly
more visually-based responses than any other response type. This hypothesis
was not supported. Surprisingly, the children with hearing loss provided a
similar number of each type of response, with visually-based responses
provided least often and integrated responses most (see Figure 2). In contrast,
children with typical hearing provided primarily auditory-based responses, a
finding that is consistent with previous investigations of modality preferences
in typically developing young children (Sloutsky & Napolitano, 2003).
The fact that the children with hearing loss provided a relatively high
number of integrated responses on the speech perception task helps explain the
relative strength of their EVDT performance. Integration of auditory and visual
cues is critical for comprehension during a storybook interaction. Because
children with hearing loss have a tendency to integrate aspects of the auditory
and visual signals, they may have been more adept at combining the auditory
and visual cues provided during the storybook readings of Splish Splash. The
ability to tune into the visual illustrations while simultaneously processing the
oral narratives may have made the material more tangible, allowing the
children with hearing loss to create a better framework in which to organize the
verbal content (Levin & Mayer, 1993). Similar facilitation effects of illustrations
have previously been reported for children without hearing loss (Greenhoot &
Semb, 2008). Illustrations are typically representative of the story content, as
they were in the case of the test story used in this investigation. So even if the
children with hearing loss were not wholly processing the auditory
information as the story was read aloud, they may have still been able to
sufficiently combine the two sources of information to create a complete
schema. This is consistent with research that reports improved perception
when visual and auditory sources of information are integrated versus the
reliance on information from only one of these modalities (Bergeson, Pisoni, &
Davis, 2003; Lachs, Pisoni, & Kirk, 2001; Massaro & Light, 2004). Thus, although
a preference for auditory information has been proposed as an important factor
in language development in young children (Napolitano & Sloutsky, 2004), the
ability to also draw on visual cues may aid oral narrative comprehension in
some populations. In the case of hearing aid users, who receive a distorted
auditory signal, such ability may enhance understanding of the narrative,
enabling them to form representations and monitor comprehension on par
with their peers who have typical hearing.
However, it is also important to consider that performance on the speech
perception task may instead indicate that modality preferences in processing
are flexible in children with hearing loss, with focus given to the modality that
contains the most salient cue at the time. Future research on literacy
development in children with hearing loss should include auditory-only
and/or visual-only versions of the story to allow for more direct observation of
the modality relied upon for narrative comprehension.
Two important clinical implications arise if children with hearing loss do in
fact create more complete narrative schemas when provided with both
auditory and pictorial cues. The first concerns book choices. Professionals
working with young children with hearing loss (e.g., speech-language
pathologists, listening and spoken language practitioners) should instruct
parents and teachers in how to evaluate the fit between text and illustration
when making selections for joint book reading activities. Parents and teachers
should be encouraged to choose books with illustrations that are highly
representative of story content. The presence of compatible visual and auditory
information may strengthen the representations the children form and
optimize their narrative comprehension. The second implication concerns
classroom settings. Findings of this study suggest that children using an FM
system may still benefit from preferential seating in the early grades where
visual supports are often used to enhance lessons. Having a clear view of the
visual supports may allow young children with hearing loss to better integrate
the information presented in these two modalities, essentially compensating
for any degradation in the auditory signal.
To summarize, children with hearing loss who participated in the present
investigation performed similarly to a group of age-matched children with
typical hearing on two measures of oral narrative comprehension. The authors
speculated about possible reasons for the relatively strong performance of
children using hearing aids on these measures, positing that strong underlying
linguistic skills, intensive intervention focused on attending to and processing
the auditory signal, and an ability to make use of both auditory and visual
information may have been factors. Moreover, the similar amount and type of
narrative experience reported for children in both groups may also have
contributed to the similarity observed in JSR and EVDT scores.
Limitations
When interpreting the results of this pilot study, it is important to note the
limitations imposed by the relatively small sample size. As a result of the
sample size, it is not possible to draw conclusions about the interplay among
such potentially relevant factors as age of identification or amplification and
narrative comprehension ability. As Botting (2010) points out, unravelling the
intricate relationships among such factors is a key to understanding the
strengths and weaknesses of children with hearing loss and to developing
appropriate supports for them. The small sample size also limits the ability to
make generalizations to other hearing aid users engaged in spoken
communication as it is unclear whether the full range of narrative ability is
represented in the participating sample. Finally, the small sample size means
that it is possible there was insufficient power to detect between-group
differences, which may have been evident with a larger sample. For this reason,
the researchers have chosen not to limit discussion solely to statistically
significant outcomes, but to examine trends in scores.
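As an aside for readers planning follow-up work, the sketch below illustrates the kind of prospective power calculation that could inform recruitment for a larger study. It is written in Python with statsmodels; the effect size (Cohen's d = 0.5), alpha, and target power are assumed values chosen for illustration, not figures derived from this pilot.

from statsmodels.stats.power import TTestIndPower

# Illustrative prospective power calculation; Cohen's d = 0.5, alpha = .05,
# and power = .80 are assumed values, not results from the pilot study.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")
print(f"Approximately {n_per_group:.0f} children per group would be needed "
      f"to detect a medium-sized between-group difference.")

Under these assumptions, roughly 64 children per group would be required, which underscores the value of the collaborative, multi-center approach discussed below.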
Importantly, these trends suggest that the nontraditional comprehension
measures used in this pilot study can be successfully administered to children
who use hearing aids. Further, these measures may more accurately represent
the narrative comprehension abilities of children with hearing loss since verbal
working memory and expressive language demands are reduced relative to the
traditional retell task. However, it is not possible to demonstrate the sensitivity
and specificity of the JSR and EVDT in this population without undertaking a
larger, more comprehensive study. As discussed by Eriks-Brophy (2004), there
is a great need to establish collaborative, multi-center research projects in this
field so that more rigorous studies can be conducted. Future studies
investigating oral narrative comprehension and potential contributing factors
(e.g., language development, speech perception, verbal working memory, and
communication mode) would most certainly benefit from such an approach.
Conclusion
The outcomes of the current study are encouraging. They suggest that children
who use hearing aids and participate in spoken language intervention are able to
form story representations that are as detailed and accurate as those of their peers with
typical hearing. As a result, these children appear to understand oral narratives
and engage in comprehension monitoring with similar success. For children with
hearing loss, as with children who have typical hearing, aptitude in oral story
comprehension may hinge on the strength of other related skills (e.g., receptive
vocabulary, grammatical ability, and global language ability). The extent to which
auditory training is a focus of intervention may also be a contributing factor in the
narrative comprehension performance of children with hearing loss. Finally,
attention to visual cues and an ability to integrate them with auditory information
may facilitate storybook comprehension in children with hearing loss. Tests of
these proposals in large-scale, rigorous studies would greatly enhance
understanding of narrative comprehension skills in children with hearing loss.
Acknowledgements
This work was funded in part through the Humanities Research Institute
at Brock University.
References
Bergeson, T. R., Houston, D. M., & Miyamoto, R. T. (2010). Effects of congenital
hearing loss and cochlear implantation on audiovisual speech perception in
infants and children. Restorative Neurology and Neuroscience, 28, 157–165.
Bergeson, T. R., Pisoni, D. B., & Davis, R. A. O. (2003). A longitudinal study of
audiovisual speech perception by children with hearing loss who have
cochlear implants. The Volta Review, 100, 53–84.
Bergeson, T. R., Pisoni, D. B., & Davis, R. A. O. (2005). Development of
audiovisual comprehension skills in prelingually deaf children with
cochlear implants. Ear and Hearing, 26(2), 149–164.
Botting, N. (2002). Narrative as a tool for the assessment of linguistic and
pragmatic impairments. Child Language Teaching and Therapy, 18(1), 1–21.
Botting, N. (2010). ‘It’s not (just) what you do, but the way that you do it’:
Factors that determine narrative ability in atypical learners. Developmental
Medicine and Child Neurology, 52(10), 886–887.
Boudreau, D. (2008). Narrative abilities: Advances in research and implications
for clinical practice. Topics in Language Disorders, 28(2), 99–114.
Carney, A. E., & Moeller, M. P. (1998). Treatment efficacy: Hearing loss in
children. Journal of Speech, Language, and Hearing Research, 41, S61–S84.
Cedrus Corporation. (2011). SuperLab 4.5. San Pedro, CA.
Crosson, J., & Geers, A. (2001). Analysis of narrative ability in children with
cochlear implants. Ear and Hearing, 22(5), 381–394.
Dempsey, L. (2005). Event knowledge and the oral story comprehension abilities of
preschool age children. Unpublished doctoral dissertation, University of
Western Ontario, London, Ontario, Canada.
Dempsey, L., & Skarakis-Doyle, E. (2001). The validity of the joint story retell as
a measure of young children’s comprehension of familiar stories. Journal of
Speech Language Pathology and Audiology, 25, 201–211.
Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test (3rd ed.).
Circle Pines, MN: American Guidance Services Publishing.
Easterbrooks, S. R., Lederberg, A. R., & Connor, C. M. (2010). Contributions of
emergent literacy environment to literacy outcomes for young children who
are deaf. American Annals of the Deaf, 155(4), 467–480. doi:10.1353/aad.2010.0024
Erber, N. P. (1972). Auditory, visual, and auditory-visual recognition of
consonants by children with normal and impaired hearing. Journal of Speech
and Hearing Research, 15, 413–422.
Eriks-Brophy, A. (2004). Outcomes of auditory-verbal therapy: A review of the
evidence and a call for action. The Volta Review, 104(1), 21–35.
Estabrooks, W. (2006). Auditory-verbal therapy and practice. Washington, D.C.:
Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Fagan, M. K., & Pisoni, D. B. (2010). Hearing experience and receptive
vocabulary development in deaf children with cochlear implants. Journal of
Deaf Studies and Deaf Education, 15, 149–161.
Fenson, L., Marchman, V. A., Thal, D. J., Dale, P. S., Reznick, J. S., & Bates, E.
(2007). MacArthur-Bates Communicative Development Inventories: User guide
and technical manual. (2nd ed.). Baltimore, MD: Brookes.
Friesen, L. M., Shannon, R. V., Baskent, D., & Wang, X. (2001). Speech
recognition in noise as a function of the number of spectral channels:
Comparison of acoustic hearing and cochlear implants. Journal of Acoustical
Society of America, 110(2), 1150–1163.
Geers, A. E., Nicholas, J., & Sedey, A. (2003). Language skills of children with
early cochlear implantation. Ear and Hearing, 24, 46–58.
Goldman, R., & Fristoe, M. (2000). The Goldman-Fristoe Test of Articulation (2nd
ed.). Bloomington, MN: Pearson Assessments.
Greenhoot, A. F., & Semb, P. A. (2008). Do illustrations enhance preschoolers’
memories for stories? Age-related change in the picture facilitation effect.
Journal of Experimental Child Psychology, 99, 271–287.
Hogan, S., Stokes, J., White, C., Tyszkiewicz, E., & Woolgar, A. (2008). An
evaluation of auditory verbal therapy using the rate of early language
development as an outcome measure. Deafness and Education International,
10(3), 143–167.
Hresko, W. P., Reid, D. K., & Hammill, D. D. (1999). Test of Early Language
Development (3rd ed.). Austin, TX: Pro-Ed.
Hudson, J. A. (1988). Children’s memory for atypical actions in script-based
stories: Evidence for a disruption effect. Journal of Experimental Child
Psychology, 46(2), 159–173.
Ibertsson, T., Hansson, K., Maki-Torkko, F., Willstedt-Svensson, U., & Sahlen, B.
(2009). Deaf teenagers with cochlear implants in conversation with hearing
peers. International Journal of Language and Communication Disorders, 44(3),
319–337.
Justice, E., Swanson, L., & Buehler, V. (2008). Use of narrative-based
intervention with children who have cochlear implants. Topics in Language
Disorders, 28(2), 149–161.
Kendeou, P., Bohn-Gettler, C., White, M. J., & van den Broek, P. (2008).
Children’s inference generation across different media. Journal of Research in
Reading, 31(3), 259–272.
Lachs, L., Pisoni, D. B., & Kirk, K. I. (2001). Use of audiovisual information in
speech perception by prelingually deaf children with cochlear implants: A
first report. Ear and Hearing, 22(3), 236–251.
Levin, J. R., & Mayer, R. E. (1993). Understanding illustrations in text. In Britton,
B. K., Woodward, A., & Brinkley, M. (Eds.), Learning from Textbooks, pp. 95–
113. Hillsdale, NJ: Erlbaum.
Luetke-Stahlman, B., Griffiths, C., & Montgomery, N. (1998). Development of
text structure knowledge as assessed by spoken and signed retellings of a
deaf second-grade student. American Annals of the Deaf, 143(4), 337–346.
Lynch, J. S., van den Broek, P., Kremer, K. E., Kendeou, P., White, M., & Lorch, E.
P. (2008). The development of narrative comprehension and its relation to
other early reading skills. Reading Psychology, 29(4), 327–365.
Massaro, D. W., & Light, J. (2004). Using visible speech to train perception and
production of speech for individuals with hearing loss. Journal of Speech,
Language, and Hearing Research, 47(2), 304–320.
Mayberry, R. I., Del Giudice, A. A., & Lieberman, A. M. (2011). Reading
achievement in relation to phonological coding and awareness in deaf
readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16, 164–
188.
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature,
264, 746–748.
Mildner, V., Sindija, B., & Zrinski, K. V. (2006). Speech perception of children
with cochlear implants and children with traditional hearing aids. Clinical
Linguistics & Phonetics, 20(2/3), 219–229.
Musselman, C., & Kirkcaali-Iftar, G. (1996). The development of spoken
language in deaf children: Explaining the unexplained variance. Journal of
Deaf Studies and Deaf Education, 1, 108–121.
Napolitano, A. C., & Sloutsky, V. M. (2004). Is a picture worth a thousand
words? The flexible nature of modality dominance in young children. Child
Development, 75(6), 1850–1870.
Nelson, K. (1996). Language and cognitive development: Emergence of the mediated
mind. New York: Cambridge University Press.
Paris, A. H., & Paris, S. G. (2003). Assessing narrative comprehension in young
children. Reading Research Quarterly, 38(1), 36–76.
Petersen, D. B., Gillam, S. L., & Gillam, R. B. (2008). Emerging procedures in
narrative assessment: The index of narrative complexity. Topics in Language
Disorders, 28(2), 115–130.
Pittman, A. L. (2008). Short-term word-learning rate in children with normal
hearing and children with hearing loss in limited and extended high-frequency bandwidths. Journal of Speech, Language, and Hearing Research, 51,
785–797.
Robbins, A. M., Renshaw, J. J., & Berry, S. W. (1991). Evaluating meaningful
integration in profoundly hearing impaired children. American Journal of
Otolaryngology, 12(Suppl), 144–150.
Robertson, L., Dow, G. A., & Hainzinger, S. L. (2006). Story retelling patterns
among children with and without hearing loss: Effects of repeated practice
and parent-child attunement. The Volta Review, 106(2), 147–170.
Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2007). McGurk effects in
cochlear-implanted deaf subjects. Brain Research, 1188, 87–99.
Schorr, E. A., Fox, N. A., van Wassenhove, V., & Knudsen, E. I. (2005). Auditory–
visual fusion in speech perception in children with cochlear implants.
Proceedings of the National Academy of Sciences, 102(51), 18748–18750.
Seewald, R. C., Ross, M., Giolas, T. G., & Yonovitz, A. (1985). Primary modality
for speech perception in children with normal and impaired hearing. Journal
of Speech and Hearing Research, 28, 36–46.
Skarakis-Doyle, E. (2002). Young children’s detection of violations in familiar
stories and emerging comprehension monitoring. Discourse Processes, 33,
175–197.
Skarakis-Doyle, E., & Dempsey, L. (2008a). Assessing story comprehension in
preschool children. Topics in Language Disorders, 28(2), 131–148.
Skarakis-Doyle, E., & Dempsey, L. (2008b). The detection and monitoring of
comprehension errors by preschool children with and without language
impairment. Journal of Speech, Language and Hearing Research, 51, 1227–1243.
Skarakis-Doyle, E., Dempsey, L., & Lee, C. (2008). Identifying language
comprehension impairment in preschool children. Language, Speech &
Hearing Services in Schools, 39, 54–65.
Skarakis-Doyle, E., Dempsey, L., Campbell, W., Lee, C., & Jaques, J. (2005, June).
Constructs underlying emerging comprehension monitoring: A preliminary study.
Poster session presented at the 26th Annual Symposium on Research in Child
Language Disorders, Madison, WI.
Skarakis-Doyle, E. & Milosky, L. (1999). Assessing discourse representation in
preschool children: Elements of global fast map. Poster session presented at the
20th Annual Symposium for Research in Child Language Disorders,
Madison, WI.
Skarakis-Doyle, E. & Wootton, S. (1995). An investigation of children with
developmental language impairment’s ability to use everyday knowledge in
comprehension. In D. MacLaughlin & S. McEwen (Eds.). Proceedings of the
19th Annual Boston University Conference on Language Development, pp. 599–
610. Sommerville, MA: Cascadilla Press.
Sloutsky, V. M., & Napolitano, A. C. (2003). Is a picture worth a thousand
words? Preference for auditory modality in young children. Child Development, 74(3), 822–833.
Stein, N. & Albro, E. (1996). The emergence of narrative understanding:
Evidence for rapid learning in personally relevant contexts. Issues in
Education 2, 83–98.
Trabasso, T., & Nickels, M. (1992). The development of goal plans of action in
the narration of a picture story. Discourse Processes, 15, 249–275.
Tremblay, C., Champoux, F., Lepore, F., & Theoret, H. (2010). Audiovisual
fusion and cochlear implant proficiency. Restorative Neurology and Neuroscience, 28, 283–291.
Turner, C. W., Chi, S., & Flock, S. (1999). Limiting spectral resolution in speech
for listeners with sensorineural hearing loss. Journal of Speech, Language, and
Hearing Research, 42, 773–784.
van den Broek, P., Kendeou, P., Kremer, K., Lynch, J., Butler, J., White, M. J., &
Lorch, E. P. (2005). Assessment of comprehension abilities in children. In S. G.
Paris & S. A. Stahl (Eds.), Children’s Reading Comprehension and Assessment
(pp. 107–130). Mahwah, NJ: Erlbaum.
van den Broek, P., Lorch, E. P., & Thurlow, R. (1996). Children’s and adults’
memory for television stories: The role of causal factors, story-grammar
categories, and hierarchical level. Child Development, 67, 3010–3028.
Worsfold, S., Mahon, M., Yuen, H., & Kennedy, C. (2010). Narrative skills
following early confirmation of permanent childhood hearing impairment.
Developmental Medicine & Child Neurology, 52, 922–928.
Young, G. A., James, D. G. H., Brown, K., Giles, F., Hemmings, L., Hollis,
J.,. . .Newton, M. (1997). The narrative skills of primary children with a
unilateral hearing impairment. Clinical Linguistics and Phonetics, 11(2), 115–
138.
Zupan, B. (2013). The role of audition in audiovisual perception of speech and
emotion in children with hearing loss. In P. Belin, S. Campanella, & T. Ethofer
(Eds.), Integrating the Face and Voice in Person Perception (pp. 299–323). New
York: Springer.
The Volta Review, Volume 113(1), Spring 2013, 29–42
Differences in Spoken Lexical
Skills: Preschool Children with
Cochlear Implants and
Children with Typical Hearing
Joan A. Luckhurst, Ph.D., CCC-SLP; Cris W. Lauback, Psy.D.; and
Ann P. Unterstein VanSkiver, Psy.D.
Children with significant hearing loss may experience great difficulty developing
spoken language and literacy skills to a level commensurate with children of the same
age with typical hearing. While studies of children who use cochlear implants show
improved spoken language outcomes in some cases, when compared to the same
children’s earlier use of hearing aids only, many still lag behind children with typical
hearing. The current study examines levels of spoken lexical language in preschool
children with cochlear implants compared to children with typical hearing, matched
for age and gender and with control for nonverbal IQ. Outcomes indicate that
preschool children with cochlear implants achieve spoken lexical skills within the
average range when compared to children of the same age with typical hearing.
Results support the evidence that early cochlear implantation provides good benefit
for spoken language outcomes.
Lexical Development
Early lexical development plays an important role in children’s language
skills and literacy development. Though the rate of lexical development can
vary widely among children, research has shown early lexical development to
be a strong predictor of later success in literacy (Bowyer-Crane et al., 2008;
Harlaar, Hayiou-Thomas, Dale, & Plomin, 2008; Muter, Hulme, Snowling, &
Joan A. Luckhurst, Ph.D., CCC-SLP, is an Assistant Professor in the Speech-Language-Hearing Science Program at La Salle University. Cris W. Lauback, Psy.D., is an Associate
Professor of School Psychology in the Division of Counseling and School Psychology at
Alfred University. Ann P. Unterstein VanSkiver, Psy.D., currently works at Eastern State
Hospital in Virginia. Correspondence concerning this manuscript may be addressed to Dr.
Luckhurst at [email protected].
Stevenson, 2004; Rescorla, 2005; Shanahan, 2006). Evidence also suggests that
many factors impact lexical development, including child factors (i.e.,
phonological memory skills, ongoing cognitive development, and hearing
sensitivity) and environmental factors (i.e., the amount of time a child is spoken
to, the amount of speech a child hears, the nature of the speech a child hears, the
timing of the speech directed toward a child in response to focus of attention,
and socio-economic status [SES]) (Hoff, 2009; Owens, 2008). Children who are
developing typically begin recognizing words as early as 5 months of age,
demonstrate understanding of word meaning by 10 months of age, produce
true words between 10 and 15 months of age, and have a spoken vocabulary of
50 words by the time they reach their second birthday (Hoff, 2009; Owens,
2008). Hearing loss, however, has a dramatic impact on this development (Hoff,
2009).
Impact of Severe to Profound Hearing Loss
While early development of joint attention and gestural communication is
similar in children with and without hearing loss, children with hearing loss need significant educational
and audiological intervention to prevent delays in all areas of spoken
language—the greater the hearing loss, the more profound the impact (Hoff,
2009). Delays in lexical development of children with hearing loss are
characterized by word knowledge that is incomplete and limited to concrete
words, with subsequent inflexible use and understanding of vocabulary (Hoff,
2009). In recent years, advances in technology have provided greater access to
spoken language for children with severe to profound hearing loss. In 1990, the
U.S. Food and Drug Administration (FDA) approved use of cochlear implants
(CI) with children (Tye-Murray, 2009). Prior to this time, children with hearing
loss had to rely on hearing aids to gain access to sound. While hearing aids
benefit many children with hearing loss, those with limited residual hearing
frequently require better access to sound than even the best hearing aids can
provide in order to acquire age-appropriate lexical skills. A CI is a device that
provides direct stimulation to the auditory nerve, thereby maximizing access to
sound. This access to sound has been shown to result in better outcomes for spoken
language development in these children, as compared to children with similar
levels of hearing loss who use hearing aids (Tye-Murray, 2009).
Cochlear Implants and Spoken Language Outcomes
There is ample evidence that shows the benefits of CIs over hearing aids for
children with severe to profound hearing loss. It is less clear, however, as to the
level of spoken language outcomes for children who use CIs compared to
children with typical hearing. Research suggests there are a variety of
potentially important factors impacting the development of spoken language
in children who use CIs. These include age-of-implantation, type and amount
of early intervention, nonverbal cognitive skills, SES, gender, type and duration
of device use, and communication method (Ching et al., 2009; Colletti, 2009;
Connor, Craig, Raudenbush, Heavner, & Zwolan, 2006; Dettman et al., 2004;
Forli et al., 2011; Geers, Moog, Biedenstein, Brenner, & Hayes, 2009; Hayes,
Geers, Treiman, & Moog, 2009; Holt & Svirsky, 2008; Manrique, Cervera-Paz,
Huarte, & Molina, 2004; Miyamoto, Hay-McCutcheon, Kirk, Houston, &
Bergeson-Dana, 2008; Nicholas & Geers, 2007).
In studies that compared spoken language skills of children who use CIs to
children with typical hearing, group mean scores on measures of vocabulary,
receptive, and/or expressive language showed that earlier implantation
(receiving a CI within the first or second year of life) significantly improved
outcomes (Ching et al., 2009; Colletti, 2009; Connor et al., 2006; Geers et al.,
2009; Hayes et al., 2009; Holt & Svirsky, 2008; Manrique et al., 2004; Nicholas &
Geers, 2007). In some cases, outcomes reached levels commensurate or nearly
commensurate with children with typical hearing (Ching et al., 2009; Colletti,
2009; Connor et al., 2006; Hayes et al., 2009). In contrast, some children
receiving CIs early in life continued to demonstrate spoken language skills
significantly below the average range, even after several years of CI use
(Geers et al., 2009; Holt & Svirsky, 2008; Manrique et al., 2004). For all of these
studies, participants demonstrated wide ranges of individual variability in
outcomes.
In a systematic review of the clinical efficacy of CIs, Forli and colleagues
(2011) investigated the evidence regarding the effects of age-of-implantation,
binaural/bimodal stimulation, and outcomes in children with additional
disabilities. Included studies had to meet the following requirements: be
peer-reviewed, be published between 2000 and 2010, involve pediatric
samples of children with CIs (from infancy through adolescence), and focus
on the evaluation of verbal perception, spoken language comprehension,
and/or spoken language production. In addition, studies had to meet criteria
for rigor in methodology and utility of design in relation to the research
questions. Of 929 located articles, 49 met the criteria for full review. Existing
evidence supported the benefits of early cochlear implantation, specifically
converging around 18 months of age. However, the authors noted that there
were few studies involving children receiving CIs under 12 months of age.
‘Younger’ was also variably defined among studies. Studies on the benefits
of device use (unilateral vs. bilateral CI or bimodal stimulation) revealed a
wide range of individual outcomes, but showed advantages of bilateral and
bimodal over unilateral CIs for sound localization and listening in both noisy
and quiet environments. Evidence further showed that children who use CIs
and who have additional disabilities demonstrated improved communication outcomes, though not to the level of those without additional
disabilities.
Nonverbal Cognitive Skills and Age-of-Implantation
Nonverbal cognitive skills appear to be an important confounding factor in
the research on language development in children who use CIs. In their study
of 153 children who use CIs between the ages of 4.11 and 6.11 years, Geers and
colleagues (2009) found that less than half demonstrated spoken language
outcomes in the average range when compared to children with typical
hearing. Nonverbal cognitive skills accounted for the most variance in these
outcomes, followed by SES level, gender, and age-of-implantation. Children
with higher nonverbal cognitive skills scored better on all language outcomes
(vocabulary, receptive, and expressive language). The authors noted that
inclusion of children with IQ scores below the average range likely impacted
overall results.
In a different study of 76 children who use CIs, Nicholas & Geers (2007)
controlled for nonverbal cognitive skills. Spoken language outcomes,
measured at 3.5 years and again at 4.5 years, were compared to two groups
of children with typical hearing, matched by age. When nonverbal cognitive
skills were controlled, age-of-implantation significantly predicted spoken
language outcomes. Children who received CIs by 12–13 months of age
achieved scores within the average range on standardized measures of
receptive vocabulary and receptive and expressive language at 4.5 years of age.
Children receiving CIs at later ages did not reach average levels.
Hayes and colleagues (2009) investigated receptive vocabulary outcomes in
a sample of 65 children who received CIs before 5 years of age, were enrolled in
an intensive listening and spoken language program, and who had cognitive
skills in the average range. Age-of-implantation again was a significant
predictor of spoken language outcomes. Children receiving CIs by 12 months
of age reached average vocabulary levels after 2.5 years of CI use, while those
receiving CIs by 24 months of age required 4 years of use to reach the same
levels. Children receiving CIs at 3 and 4 years old did not attain average
vocabulary levels even after 6 years of CI use. However, the year-of-implantation for participants ranged from 1991 to 2004, and there was a
moderate, significant correlation between year-of-implantation and age-of-implantation. Further examination of that relationship showed that children
with a later year-of-implantation demonstrated significantly better initial
vocabulary scores, but only those with early age-of-implantation (1–2 years of
age) achieved vocabulary levels within the average range of children with
typical hearing.
Summary
Lexical development is a critical component of spoken language and later
literacy development. Studies of lexical development in children who use CIs
show mixed results, with some children reaching age-appropriate levels while
others experience significant delays. Nonverbal cognitive skills and age-of-implantation are shown to be significant and apparently covaried predictors of
spoken language outcomes in this population.
Present Study
The present study investigated receptive and expressive vocabulary skills in
a small sample of preschool children who use CIs, were developing spoken
language, and were enrolled in their first or second year of preschool. It was
designed to address the following questions: (1) How do these children’s
spoken vocabulary skills compare to established norms for children with
typical hearing who are the same age?; and (2) How do their spoken vocabulary
skills compare to a sample of children with typical hearing when age and nonverbal IQ are controlled?
Method
Participants
Subjects in each group were between 43 and 77 months of age. The mean age
for children who use CIs was 59.4 months (4.95 years) and for children with
typical hearing was 58.3 months (4.86 years). The two groups were similar for
SES, ethnicity, and gender. Subjects were assessed during the spring of their
first or second year in preschool. The researchers received approval from their
institutional review board. Subject participation was determined through
parental consent. Subject demographics for both groups appear in Table 1.
Cochlear Implant Group
Children who use CIs were selected from children in an established
Auditory-Oral Education Center (the Center) for children with hearing loss,
located in a suburban area in the Northeast. The listening and spoken language
approach focuses on development of listening and speaking without use of
manual communication (Tye-Murray, 2009). Consistent use of technology is
also a strong component of this approach. Of the 17 children attending the
Center, parental permission to participate was granted for 9 children (6 boys
and 3 girls). All of these children scored in the average range on the Wechsler
Nonverbal Scale of Ability (WNV) and were included in the study.
The average age of hearing loss identification for this group was 8.8 months,
and all participants were identified prior to their second birthday (range from
0–23 months). Participants wore hearing aids for an average of 6.1 months prior
to receiving a CI. Age-of-implantation ranged from 12–30 months of age, with a
mean age of 19.2 months. At the time of their first CI, all children met currently
accepted criteria for implantation: (a) bilateral profound sensorineural hearing
loss, (b) lack of auditory skills development, (c) limited benefit from
appropriately fit hearing aids, (d) no physical contraindications for placement
of the CI, (e) no medical contraindications, (f) medical clearance to undergo
surgery, (g) no contraindicating psychiatric or organic disabilities, (h)
enrollment in an aural (re)habilitation intervention that emphasizes developing
listening skills, and (i) realistic expectations and commitment to follow-up
appointments (Thedinger, 1996; Tye-Murray, 2009). Six of the participants used
bilateral CIs, and the age range for the second implant was 50–60 months of age.

Table 1. Demographic data

                                      Typical Hearing    Cochlear Implant
Total N                                      42                  9
Male                                         20                  6
Female                                       22                  3
Age at testing              M              58.3               59.4
                            Range         53–66              43–77
Age diagnosed               M                 –                8.8
                            Range             –               0–23
Time hearing aids worn      M                 –                6.1
  pre-implant               Range             –               1–11
Age-of-implantation         M                 –               19.2
                            Range             –              12–30

Note. Ages and time in months.
Typical Hearing Group
Forty-two preschool children with typical hearing from two universal Pre-K
classrooms in a Northeastern rural public school district were included as the
control group. Seven children in this group scored below 85 (1 SD below the mean) on the WNV
and were included in the study. Possible effects due to cognitive ability were
controlled for by entering WNV as a covariate in an analysis of covariance
(ANCOVA). All participants with typical hearing also scored within normal
limits on their latest hearing screening completed in school, and showed no
history of hearing problems in their school health records.
Site Descriptions
The Center is a small, private suburban school for children with hearing loss.
The Center uses a listening and spoken language approach to prepare students,
ages birth to 6 years old, for eventual transition into mainstream public or
private schools. Preschool children attend school 5 days a week in small classes
of approximately six to eight students. Each class has at least one teacher of the
deaf to provide academic instruction. The rooms are designed to meet the
unique acoustic needs of the children and include wall-to-wall carpeting,
acoustic ceiling tiles, wall coverings, and ceilings mounted with soundfield FM
systems. Many children also use personal FM systems. Children receive daily
30-minute individual speech-language therapy sessions, conducted by a
certified and licensed speech-language pathologist. Early literacy and
numeracy, writing, science, music, art, fine/gross motor, and play time are
provided through the same curriculum approved by the state education
department and utilized in general education programs across the state.
The public school universal Pre-K program is offered in two elementary
schools, 5 days a week. Development in gross and fine motor skills, reading and
vocabulary skills, writing, math, and social and emotional growth are
addressed in a teacher-directed and child-selected fashion and in
accordance with the State Education Department curriculum.
Lexical Language Measures
The Peabody Picture Vocabulary Test–4th Edition (PPVT-4) is a norm-referenced measure designed to assess receptive vocabulary in individuals age
2 years, 6 months, through 90 years old. The PPVT-4 was recently updated to
include items that broadly sample words representing 20 content areas and
parts of speech across all levels of difficulty, and has received wide support as a
measure of receptive vocabulary (Dunn & Dunn, 2007). The Expressive
Vocabulary Test–2nd Edition (EVT-2) is also a norm-referenced measure,
recently updated and designed to assess expressive vocabulary for individuals
age 2 years, 6 months, through 90 years old (Williams, 2007).
For ease in comparing expressive and receptive vocabulary skills, the EVT-2
and the PPVT-4 were co-normed on the same population. The norming sample
included over 3,500 individuals, matching the current U.S. population along
parameters of gender, race/ethnicity, geographic region, socioeconomic status,
and clinical diagnosis or special-education placement. The co-norming of these
tests allows direct comparison of the two tests’ scales. Included in the norming
sample are individuals with Attention Deficit Hyperactivity Disorder
(ADHD), Emotional/Behavioral Disturbance, Language Delay, Language
Disorder, Learning Disability, Mental Disability, and Speech Impairment.
Both assessments are individually administered and take between 10 and 20
minutes. For the PPVT-4, the respondent must select the picture from a field of
four that best represents the meaning of the stimulus word presented orally by
the examiner. For the EVT-2, the respondent must provide a verbal response
when the examiner presents a picture and asks a question about it (e.g., “What
color is this?”; “What is the ___ doing?”; “What is this?”). Testing for both
assessments continues until the respondent reaches a specified level of error.
Assessment results for the PPVT-4 and EVT-2 are summarized in Table 2.

Table 2. Language performance: Children with typical hearing versus children
who use cochlear implants, with control for cognitive ability

            Typical Hearing    Cochlear Implant
            n = 42; M (SD)     n = 9; M (SD)       η²      Sig.    Observed Power
PPVT-4      102.95 (14.55)     96.0 (15.46)       .072    .060         .471
EVT-2       105.05 (12.28)     105.0 (16.87)      .004    .672         .070

Note. Tests conducted using ANCOVA. For η², 0.0099 = small effect, 0.0588 = medium effect,
and 0.1379 = large effect (Cohen, 1988).
Cognitive Measure
The WNV is a norm-referenced, individually administered clinical instrument designed to assess the general cognitive ability in individuals age 4 years,
0 months, through 21 years, 11 months. The WNV was normed on a stratified
sample of 1,323 individuals in the United States. The sample was selected to
reflect the data gathered by the 2000 U.S. Census. Demographics reflect
stratification along the variables of age, gender, race/ethnicity, education level,
and geographic region (Wechsler & Naglieri, 2006).
This assessment tool was specifically chosen for the current study because its
normative sample included children who were deaf and children who were
hard of hearing (Wechsler & Naglieri, 2006). Hearing loss was either unilateral
or bilateral, and greater than or equal to 55 dB. More specifically, individuals in
the deaf group included children who use CIs. These individuals were
developing spoken language, had attended an auditory-verbal school, or were
postlingually deafened. The performance of these special populations showed
negligible effect sizes when compared to matched controls (Mhh = 96.0, SD =
15.3; Md = 103.0, SD = 10.3; Mc = 100.4, SD = 15.2), supporting its use as a
measure of cognitive ability for individuals who are deaf or hard of hearing
(Wechsler & Naglieri, 2006).
The WNV is individually administered and takes between 10 and 20 minutes
to administer to the youngest age group, ages 4 years through 7 years, 11
months. The brief battery used for this study includes the Matrices and
Recognition subtests. Reliability coefficients for the individual Matrices and
Recognition subtests (r = .78–.89) and for the total brief battery (r = .87–.88) for
participants ages 4–5 years indicate acceptable reliability. The Matrices subtest
involves completing a pattern using shapes and colors, and correlates with
general ability, perceptual reasoning ability, and simultaneous processing.
Recognition involves viewing a simple stimulus for 3 seconds and then
matching it to a response option, and is linked with immediate memory and
general ability (Wechsler & Naglieri, 2006).
Procedure
The investigator collaborated with the directors of the preschool programs to
identify children who were eligible to participate in the study. Children were
identified based on a review of their cumulative file and parental consent.
Medical information was provided to the examiner regarding CIs. Potential
participants with typical hearing were eligible if they met the following criteria: a
standard score of 70 or higher on the WNV, results within normal limits on the
most recent annual school hearing screening, and parental
permission. Medical information and information regarding time in school were
obtained by examining school records and by report of school professionals at
both sites. For children with hearing loss, information about age-of-implantation and degree of hearing loss was obtained through school records
and by report of school professionals at the Center. Parental consent for all
children was obtained by mailing out a packet, which contained a description
of the study and details regarding confidentiality.
Two advanced graduate students in school psychology were trained to assist
in the assessment of the subjects. The actual assessments were conducted in a
small office with minimal distractions in the children’s school building by
either the investigator or one of two trained graduate students.
The WNV was administered first, followed sequentially by the PPVT-4 and
EVT-2. All tests were administered and scored using standardized procedures.
Results
Table 2 reports the mean standard scores and standard deviations for the CI
and control groups. Both groups demonstrated lexical skill levels comparable
to the total normative sample of children in the same age range (3.6–5.11) for these
co-normed instruments (n = 520, M = 100, SD = 15). According to the test manuals, the
expected difference in standard scores between the nonclinical reference group and children with
CIs (n = 46) is 29.7 on the PPVT-4 and 22.5 on the EVT-2. For children with
hearing loss who do not have CIs, and who were therefore assumed to have less
severe hearing loss (n = 53), the expected difference in standard scores is 17.3 on the
PPVT-4 (Dunn & Dunn, 2007) and 11.1 on the EVT-2 (Williams,
2007). Upon initial examination, the CI group appears to have exceeded these
published expectations, comparing favorably to both the normative sample
and the control group.
To determine whether significant differences existed for expressive or
receptive vocabulary, two ANCOVAs were conducted on EVT-2 and PPVT-4
scores comparing the control and CI groups, with cognitive ability entered as a covariate.
Levene’s (1960) test of equality of error variance was nonsignificant for both the EVT-2 (p = .067)
and the PPVT-4 (p = .679), indicating that, even
with the unequal sample sizes, the assumption of homogeneity of variance was
not violated and comparison of groups of unequal size can be supported.
The results were nonsignificant for the EVT-2, F(1, 51) = .181, p = .672, and for the PPVT-4,
F(1, 51) = 3.71, p = .060, indicating no difference between the control and CI
groups with cognitive ability included (see Table 2). The nearly identical EVT-2
results produced a small effect size, η² < .0099 (see Cohen, 1988, p. 283),
indicating that it is unlikely a larger CI sample would result in a significant
difference in expressive vocabulary performance. However, the PPVT-4 results,
with low observed power (.471) and a medium effect size, η² = .072, suggest that a
larger CI sample with a difference of similar magnitude might reach significance.
Cognitive ability was not a significant covariate for the EVT-2. For the PPVT-4, cognitive ability
was significant, F(1, 51) = 5.55, p = .024, and explained 10.2% of the variance.
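For readers who wish to reproduce this type of analysis outside of SPSS, the sketch below shows one way such an ANCOVA could be set up in Python. The data file and column names (group, wnv, ppvt4, evt2) are hypothetical, and the Levene test here is run on the raw group scores rather than on model residuals, so it only approximates the test of equality of error variances reported above.

import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file: one row per child with columns group ("CI" or "TH"),
# wnv (nonverbal cognitive score), ppvt4, and evt2.
df = pd.read_csv("vocabulary_scores.csv")

# Approximate check of homogeneity of variance across the unequal groups.
ci_scores = df.loc[df["group"] == "CI", "ppvt4"]
th_scores = df.loc[df["group"] == "TH", "ppvt4"]
print(stats.levene(ci_scores, th_scores))

# ANCOVA: receptive vocabulary by group, with nonverbal IQ as the covariate.
model = smf.ols("ppvt4 ~ C(group) + wnv", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# Partial eta-squared for the group effect.
ss_group = anova_table.loc["C(group)", "sum_sq"]
ss_resid = anova_table.loc["Residual", "sum_sq"]
print("partial eta-squared:", ss_group / (ss_group + ss_resid))

The same model can be refit with evt2 as the outcome to parallel the second ANCOVA.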
Discussion
The present study compared outcomes of a small group of preschool
children who use CIs to established norms for children with typical hearing on
standardized measures of vocabulary, and to a participant group of children
with typical hearing. Participants were matched for gender and age, and
nonverbal cognitive skills were controlled. Findings indicated that this group
of children who use CIs demonstrated both receptive and expressive
vocabulary skills within the average range as compared to peers and
established norms. Though the sample was small, the investigators believe
this study contributes to the evidence on spoken language outcomes for
children who use CIs. The existing evidence shows widely varying outcomes in
vocabulary achievement for young children who use CIs. Some of this
variability likely occurs as a result of uncontrolled confounding variables
(i.e., nonverbal cognitive skills and age-of-implantation).
For the present sample, the average age-of-implantation was 19.2 months,
with a range of 12–30 months (see Table 1). While some evidence indicates that
preschool children in this age range who receive a CI demonstrate vocabulary
skills well below average (Ching et al., 2009; Colletti, 2009; Nicholas & Geers,
2007), the children in the present sample achieved outcomes in the average
range. Results are consistent with Hayes and colleagues (2009), who found that
children who receive CIs by age 24 months can achieve vocabulary skills
commensurate to peers with typical hearing after only a few years of CI use. It is
interesting to note that the present sample’s characteristics were more similar
to the latter study’s sample. Both included children immersed in listening and
spoken language programs, with nonverbal cognitive skills in the average
range, and who rely only on spoken language communication. These particular
variables may explain some of the differences from other studies showing
lower outcomes. For example, the sample used by Nicholas & Geers (2007)
included children with cognitive skills as much as 2 SD below the mean. As
noted by those authors, this was a likely factor in the overall lower spoken
language scores of their sample. In addition, 6 of the 9 participants in the
present sample received bilateral CIs. Binaural implantation has been shown to
significantly improve listening skills for young children (Forli et al., 2011),
may have contributed to better outcomes.
The present study contains some significant limitations. First and foremost,
the small sample size limits generalization of outcomes. However, results of
statistical analyses suggest that the study’s design met required assumptions
for analysis. A second limitation involves selected outcomes. Spoken language
outcomes for this study were limited to vocabulary. Some studies indicate that
children tend to perform better on vocabulary tasks than more complex
language skills (i.e., comprehension and use of grammar and syntax) (Ching et
al., 2009; Holt & Svirsky, 2008; Manrique et al., 2004). Additional investigation
of more complex language tasks is therefore warranted to determine if overall
language skills for this group are consistent with its vocabulary outcomes.
Third, the present sample was relatively homogeneous in that all children who
use CIs demonstrated nonverbal cognitive skills within the average range,
attended the same intensive listening and spoken language program, and
communicated only through spoken language. Therefore, no generalization of
results could be made to other types of programs, even given a larger sample
size. However, there is value in investigating such homogeneous groups.
Since educational programs can have a significant impact on language outcomes
and academic achievement, this is an important issue for research. Hayes and
colleagues (2009) noted similar composition of their sample and suggested that
such investigation may be used as baseline information to compare against
other educational programs.
Practical Applications
Research in the field of cochlear implantation is shifting. Children with
hearing loss are being identified at much younger ages and those needing CIs
are now receiving them as young as 12 months of age (Tye-Murray, 2009). These
changes broaden the need for additional research with infants, toddlers, and
preschoolers. This also suggests the need for widespread education of
community service agents who work with children. It is likely that as
technology continues to improve and the choice to obtain CIs becomes
increasingly available to younger children, community service providers will
have more and more contact with children who use CIs.
Implications for Schools
The implications for schools are evident. Most importantly, schools should
be prepared to work with children using CIs. The Center in this study prepares
preschool children with hearing loss to enter mainstream educational settings.
At transition, the mainstream or general education setting acquires the
responsibility for the day-to-day management of technology, supports and
accommodations, special and related services, and the interface with the child’s
audiologist. The technology provides access to sound but it does not change the
fact that the child is deaf. Though early identification and implantation open
the door to greater spoken language proficiency, school personnel must be
prepared to provide the appropriate environment, education, and supports for
academic success. School systems can support professional preparedness
through continuing education activities that promote knowledge and skills of
spoken language development in children with hearing loss. In addition,
higher education programs should ensure that principles, methods and
practices supporting development of spoken language are incorporated into
the curriculum for teachers of the deaf and speech-language pathologists.
References
Bowyer-Crane, C., Snowling, M. J., Duff, F. J., Fieldsend, E., Carroll, J. M., Miles,
J.,. . .Hulme, C. (2008). Improving early language and literacy skills:
Differential effects of an oral language versus a phonology with reading
intervention. Journal of Child Psychology and Psychiatry, 49(4), 422–432.
Ching, T. Y., Dillon, H., Day, J., Crow, K., Close, L., Chisholm, K., & Hopkins, T.
(2009). Early language outcomes of children with cochlear implants: Interim
findings of the NAL study on longitudinal outcomes of children with hearing
impairment. Cochlear Implants International, 10(S1), 28–32.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ:
Lawrence Erlbaum.
Colletti, L. (2009). Long-term follow-up of infants (4–11 months) fitted with
cochlear implants. Acta Oto-Laryngologica, 129, 361–366.
Connor, C., Craig, H., Raudenbush, S., Heavner, K., & Zwolan, T. (2006). The
age at which young deaf children receive cochlear implants and their
vocabulary and speech-production growth: Is there an added value for early
implantation? Ear and Hearing, 27, 628–644.
Dettman, S. J., Fiket, H., Dowell, R. C., Charlton, M., Williams, S. S., Tomov, A.
M., & Barker, E. J. (2004). Speech perception results for children using
cochlear implants who have additional needs. The Volta Review, 104(4), 361–
392.
Dunn, L. M., & Dunn, D. M. (2007). Peabody picture vocabulary test, fourth edition,
manual. San Antonio, TX: Pearson Education.
Forli, F., Arslan, E., Bellelli, S., Burdo, S., Mancini, P., Martini, A.,. . .Berrettini, S.
(2011). Systematic review of the literature on the clinical effectiveness of the
cochlear implant procedure in paediatric patients. Acta Otorhinolaryngologica
Italica, 31, 281–298.
Geers, A. E., Moog, J. S., Biedenstein, J., Brenner, C., & Hayes, H. (2009). Spoken
language scores of children using cochlear implants compared to hearing
age-mates at school entry. Journal of Deaf Studies and Deaf Education, 14(3),
371–385.
Harlaar, N., Hayiou-Thomas, M. E., Dale, P. S., & Plomin, R. (2008). Why do
preschool language abilities correlate with later reading? Journal of Speech,
Language, Hearing Research, 51, 688–705.
Hayes, H., Geers, A., Treiman, R., & Moog, J. (2009). Receptive vocabulary
development in deaf children with cochlear implants: Achievement in an
intensive auditory-oral educational setting. Ear and Hearing, 30(1), 128–135.
Hoff, E. (2009). Language development (4th ed.). Stamford, CT: Wadsworth
Cengage Learning.
Holt, R. F., & Svirsky, M. A. (2008). An exploratory look at pediatric cochlear
implantation: Is earliest always best? Ear & Hearing, 29, 492–511.
Levene, H. (1960). Robust tests for the equality of variance. In I. Olkin, (Ed.),
Contributions to Probability in Statistics, pp. 278–292. Palo-Alto, CA: Stanford
University Press.
Manrique, M., Cervera-Paz, F. J., Huarte, A., & Molina, M. (2004). Advantages
of cochlear implantation in prelingual deaf children before 2 years of age
when compared with later implantation. The Laryngoscope, 114, 1462–1469.
Miyamoto, R. T., Hay-McCutcheon, M. J., Kirk, K. I., Houston, D. M., &
Bergeson-Dana, T. (2008). Language skills of profoundly deaf children who
received cochlear implants under 12 months of age: A preliminary study.
Acta Oto-Laryngologica, 128, 373–377.
Muter, V., Hulme, C., Snowling, M. J., & Stevenson, J. (2004). Phonemes, rimes,
vocabulary and grammatical skills as foundations of early reading
development: Evidence from a longitudinal study. Developmental Psychology,
40(5), 665–681.
Nicholas, J. G., & Geers, A. (2007). Will they catch up? The role of age at cochlear
implantation in the spoken language development in children with severe to
profound hearing loss. Journal of Speech, Language, and Hearing Research, 50,
1048–1062.
Owens, R. E. (2008). Language development: An introduction (7th ed.). Boston:
Pearson Education, Inc.
Rescorla, L. (2005). Age 13 language and reading outcomes in late-talking
toddlers. Journal of Speech, Language, and Hearing Research, 48, 459–472.
Shanahan, T. (2006). The national reading panel report: Practical advice for teachers.
Naperville, IL: Learning Point Associates/North Central Regional Educational Laboratory. (ERIC Document Reproduction Service No. ED489535.)
Thedinger, B. S. (1996). Cochlear implants. In J. L. Northern (Ed.), Hearing
Disorders (pp. 291–298). Boston: Allyn and Bacon.
Tye-Murray, N. (2009). Foundations of aural rehabilitation, 3rd edition. Albany, NY:
Delmar Cengage Learning.
Wechsler, D., & Naglieri, J. A. (2006). Wechsler nonverbal scale of ability, technical
and interpretive manual. San Antonio, TX: Pearson Education.
Williams, K. T. (2007). Expressive vocabulary test, 2nd edition, manual. San Antonio,
TX: Pearson Education.
The Volta Review, Volume 113(1), Spring 2013, 43–56
A Survey of Assessment Tools
Used by LSLS Certified
Auditory-Verbal Therapists for
Children Ages Birth to 3 Years
Old
Deirdre Neuss, Ph.D., LSLS Cert. AVT; Elizabeth Fitzpatrick, Ph.D., LSLS
Cert. AVT; Andrée Durieux-Smith, Ph.D.; Janet Olds, Ph.D.; Katherine
Moreau, M.A., Ph.D.; Lee-Anne Ufholz, MLIS;
JoAnne Whittingham, M.Sc.; and David Schramm, M.D.
Infants 12 months of age or older who have a severe to profound hearing loss
frequently receive cochlear implants. Given the inherent challenges of assessing
children of this age, this study aims to determine how Listening and Spoken
Language Specialists Certified Auditory-Verbal Therapists (LSLS Cert. AVT®)
gauge the progress of very young children who use cochlear implants. A survey of
LSLS Cert. AVTs was conducted to determine how they assess the progress of young
children ages 1 to 3 years old. Respondents indicated that the most commonly used
Deirdre Neuss, Ph.D., LSLS Cert. AVT, is an auditory-verbal therapist at the Children’s
Hospital of Eastern Ontario and a clinical investigator at the Children’s Hospital of
Eastern Ontario Research Institute. Elizabeth Fitzpatrick, Ph.D., LSLS Cert. AVT, is an
Associate Professor in the Faculty of Health Sciences at the University of Ottawa and a
clinical investigator at the Children’s Hospital of Eastern Ontario Research Institute.
Andrée Durieux-Smith, Ph.D., is a Professor Emeritus in the Faculty of Health Sciences at
the University of Ottawa and Scientific Director of the Research Institute at the Montfort
Hospital. Janet Olds, Ph.D., is a Neuropsychologist at the Children’s Hospital of Eastern
Ontario and a clinical investigator at the Children’s Hospital of Eastern Ontario
Research Institute. Katherine Moreau, Ph.D., is an Associate Scientist at the Children’s
Hospital of Eastern Ontario Research Institute. Lee-Anne Ufholz, MLIS, is the Director of
the Health Sciences Library in the Faculty of Health Sciences at the University of Ottawa.
JoAnne Whittingham, M.Sc., is the Audiology Research Coordinator at the Children’s
Hospital of Eastern Ontario Research Institute. David Schramm, M.D., is an Associate
Professor in the Department of Otolaryngology at the University of Ottawa and a clinical
investigator at the Children’s Hospital of Eastern Ontario Research Institute.
Correspondence concerning this manuscript may be addressed to Dr. Neuss at [email protected].
methods of gauging progress in children with hearing loss were: checklists, norm-referenced tests, consultation of scales of typical development, observation, parent
report, videotaping, and language sampling. As using checklists is a longstanding
practice of listening and spoken language professionals, the survey focused on which
checklists are most commonly used as well as the perceived strengths and weaknesses
of these assessment tools. Findings indicate that 70% of respondents use checklists
regularly for a variety of reasons, including coaching parents, developing goals, and
monitoring progress. It is noteworthy that 29.3% of respondents commented on the
ease of checklist use. Despite their widespread use, respondents expressed concerns
about the lack of clarity and comprehensiveness of the checklists presently available.
Introduction
Early identification of hearing loss through universal newborn hearing
screening programs and early access to sound has fundamentally altered
expectations for children with hearing loss (Cole & Flexer, 2011; Estabrooks,
2006). The cochlear implant has become the technology of choice for those who
do not receive sufficient benefit from hearing aids (Beiter & Estabrooks, 2006).
Infants with severe to profound hearing loss who are 12 months of age or older
frequently receive cochlear implants (Dettman, Pinder, Briggs, Dowell, &
Leigh, 2007). Professionals and parents widely recognize that children who
receive cochlear implants require intensive intervention to attain optimal
benefits from the device (Cole & Flexer, 2011; Nicholas & Geers, 2008).
Auditory-verbal therapy has gained widespread recognition as an intervention
approach that optimizes the child’s ability to access sound (Estabrooks, 2006).
The aim of this therapy is to coach parents in developing their child’s listening
and spoken language by using access to the acoustic signal provided by a
hearing aid and/or cochlear implant (Hogan, Stokes, White, Tyszkiewicz, &
Woolgar, 2008). This approach emphasizes early auditory stimulation, the use
of technology, specialized therapeutic techniques, and parent-focused intervention (Estabrooks, 2006; Wu & Brown, 2004).
A long-standing practice in auditory-verbal therapy has been to use
checklists developed for clinical use (see Estabrooks, 1998, and Simser, 1993,
for examples of checklists). Checklists are tools professionals use when
assessing communication skills in children. They have been informed by the
normal sequence of communication development in children with typical
hearing and by the developers’ knowledge of the acoustic properties of speech.
These assessments typically include lists of discrete skills in listening, speech,
and language development. Several authors have described the utility of these
checklists for goal-setting and guiding treatment (Estabrooks, 1998; Pollack,
Goldberg, & Caleffe-Schenck, 1997; Simser, 1993).
A potential limitation of these checklists is that many were developed for use
with children prior to the availability of cochlear implants. Cochlear implant
technology allows children to access sound and develop listening skills earlier
and more quickly than previous generations of children with severe to profound
hearing loss, who had only limited acoustic cues obtained from hearing aids
(Nicholas & Geers, 2008). Consequently, current checklists may be of limited
value when used with children who have received cochlear implants early in
life.
The authors of this paper conducted a survey to determine how Listening
and Spoken Language Specialists (LSLS®) Certified Auditory-Verbal Therapists (LSLS Cert. AVT®) assess the progress of young children who receive
cochlear implants by 12 months of age. In addition, the survey examined which
checklists are commonly used. Objectives of the survey were to:
(1) Determine the methods used by LSLS Cert. AVTs to gauge progress in
young children with cochlear implants.
(2) Identify a list of checklists used by LSLS Cert. AVTs to guide them in
assessing listening, speech, and language in young children.
(3) Determine the information LSLS Cert. AVTs gained from checklists, and
identify the strengths and weaknesses of checklists.
Method
Participants and Recruitment
The authors obtained a list of all LSLS Cert. AVTs who provide service in
English from the directory of the AG Bell Academy for Listening and Spoken
Language, the certifying body for LSLS professionals. All 275 English-speaking
LSLS Cert. AVTs listed in the directory at the time of the study, in 2008, were
invited to participate. The study was approved by the Research Ethics Board of
the Children’s Hospital of Eastern Ontario.
Instrument Development and Procedures
The primary purpose of this research was to learn how LSLS Cert. AVTs
gauge progress in young children with hearing loss, and specifically how they
use checklists. A survey was developed to collect information on the following
areas: methods used to gauge progress and set goals for children with hearing
loss; specific checklists used with children under 3 years of age who have
cochlear implants; and therapists’ views of checklists, including their strengths
and weaknesses. In addition, participant demographics including the number
of years of practice, sources of program funding, and ages of children served
were collected. Respondents remained anonymous to the researchers.
The survey questions are provided in Appendix A.
Table 1. The international distribution of questionnaire invitations and responses

Country of Practice          Number of Participants who     Number of Participants
                             Received the Questionnaire     Responding
United States of America               170                      62 (36.5%)
Canada                                  55                      30 (54.5%)
Australia                               26                      16 (61.5%)
New Zealand                              7                       3 (42.8%)
Singapore                                3
United Kingdom                           3
China                                    2
Taiwan                                   2
Argentina                                1
India                                    1
Israel                                   1
Mexico                                   1
Paraguay                                 1
Philippines                              1
Switzerland                              1
TOTAL                                  275                     116
The questionnaire consisted of 17 items and was peer-reviewed by two LSLS
Cert. AVTs for clarity before being circulated. Information letters and surveys
were sent to the 275 potential participants by email. Using a modified version of
the Tailored Design Method (Dillman, 2000), two electronic reminder letters
were subsequently sent.
Data Analysis
Analyses consisted primarily of calculating descriptive statistics using SPSS
version 18.0 (SPSS Inc., Chicago, IL). Analyses also determined the frequency
with which respondents use the various checklists. Open-ended responses on
the surveys were analyzed qualitatively. Two researchers independently
identified key terms, which were tabulated to determine frequency of
occurrence.
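The descriptive analyses described above (frequencies of checklist use and counts of key terms drawn from the open-ended items) can be reproduced in most statistics environments. The authors used SPSS 18.0; the short Python sketch below is offered only as an illustration of the same two steps, and its column names and example records are hypothetical rather than the study's data.

from collections import Counter

import pandas as pd

# Hypothetical survey extract: one row per respondent.
responses = pd.DataFrame({
    "uses_checklists": ["often", "always", "sometimes", "often", "never"],
    "strengths_comment": [
        "easy to use and great for coaching parents",
        "helps with goal setting and monitoring progress",
        "easy to administer with toddlers",
        "useful for monitoring progress",
        "",
    ],
})

# Descriptive statistics: frequency and percentage for each rating.
counts = responses["uses_checklists"].value_counts()
percentages = responses["uses_checklists"].value_counts(normalize=True) * 100
print(counts)
print(percentages.round(1))

# Qualitative step: tally key terms identified by the coders in the
# open-ended responses and rank them by frequency of occurrence.
key_terms = ["easy", "coaching", "goal", "monitoring"]
term_counts = Counter()
for comment in responses["strengths_comment"].str.lower():
    term_counts.update(term for term in key_terms if term in comment)
print(term_counts.most_common())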
Results
Characteristics of Participants
Of the 275 surveys sent, 117 (42.2%) were returned; however, one questionnaire was not completed, leaving 116 surveys for full analysis. The mean number of years of practice of the respondents was 15.3 (SD = 8.2), with a range of 3 to 40 years. As shown in Table 1, 95.7% of participants practiced in four countries: the United States, Canada, Australia, and New Zealand.
Table 2. The characteristics of programs reported by participants

                                               N (Percentage)
Country of Practice
  USA                                          62 (53.4%)
  Canada                                       30 (25.9%)
  Australia                                    16 (13.8%)
  New Zealand                                   3 (2.6%)
  Other                                         5 (4.3%)
Current Work Setting
  Private Clinic/Practice                      52 (44.8%)
  School Based                                 24 (20.7%)
  Hospital                                     19 (16.4%)
  NGO/Nonprofit                                16 (13.8%)
  Other                                         5 (4.3%)
Funding
  Public Funds                                 39 (33.6%)
  Public/Private                               35 (30.2%)
  Private Practice/Institutions                30 (25.9%)
  Public Charitable                            12 (10.3%)
Percentage of Caseload 0 to 3 Years of Age
  More than 25%                                81 (69.8%)
  Less than 25%                                35 (30.2%)
The response rate was representative of the international distribution of LSLS Cert. AVTs in the AG Bell Academy registry at the time of this survey.
As shown in Table 2, listening and spoken language services are provided
under a wide variety of funding arrangements. Respondents listed private
clinics/practice most frequently (44.8%) as their place of work. In addition,
74.1% reported work settings that involve some public funding. Work setting
and funding arrangements were considered relevant topics to address as these
issues can affect what services are provided to children, and therefore influence
the responses. A total of 81 respondents (69.8%) reported that over 25% of their
current caseloads included children in the birth to age 3 range. Thirty-five
respondents worked less frequently (less than 25% of their caseload) with
children in this age range, and only 3 individuals indicated that they did not
currently work with this age group.
Current Practices in Assessing Progress in Young Children with Hearing Loss
As indicated in Figure 1, LSLS Cert. AVTs use a combination of methods to
gauge progress and set goals for young children with hearing loss. The most
commonly reported assessment tools were checklists and norm-referenced
tests, with checklists being used by 69.7% of therapists. In comparison, 30.2% of
respondents reported using scales of typical development to gauge progress.
Figure 1. Methods of gauging progress in children with hearing loss.

Researchers compared the use of checklists for the 81 respondents who frequently provided services to children under age 3 (greater than 25% of caseload) to the 32 respondents who followed fewer children in this age group.
There was no significant difference in the frequency of use of checklists
(p = 0.669). The majority (94.7%) of respondents rated checklists as very useful
(64.3%) or somewhat useful (30.4%) in working with children. Figure 1
demonstrates that videotaping and language sampling are each used by less
than 5% of therapists.
As noted, LSLS Cert. AVTs primarily used norm-referenced tests (61.2%) and
checklists (69.0%) to gauge progress for children in the birth to age 3 group.
Figure 2. Methods of gauging progress in children with cochlear implants.

Table 3. The top seven checklists as rated by the participants

Checklist                                                      Author                   N (Percentage)
Development of Auditory Skills                                 (Simser, 1993)           77 (67.0%)
Auditory-Verbal Ages and Stages of Development                 (Estabrooks, 1998)       75 (65.2%)
Listening for Littles                                          (Sindrey, 1997)          70 (60.9%)
Checklist of Auditory Communication Skills                     (Franz et al., 2004)     43 (37.4%)
Cottage Acquisition Scales for Listening, Language & Speech    (Wilkes, 1999)           31 (27.0%)
Rossetti Infant-Toddler Language Scale                         (Rossetti, 1990)         18 (15.7%)
St. Gabriel's Curriculum for the Development of Audition,
  Language, Speech and Cognition                               (Tuohy et al., 2005)     16 (13.9%)

Given that almost 40% of respondents did not report using norm-referenced tests, researchers examined whether there was a relationship between workplace setting and test use. There was no significant difference in the use of norm-referenced tests between therapists working in publicly funded (64.7%) or privately funded (56.7%) settings (p = 0.488).
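The article reports these group comparisons only as p-values and does not name the statistical test used. For readers who wish to run a comparable analysis, a chi-square test of independence on a setting-by-test-use contingency table is one plausible approach; the sketch below uses SciPy with purely hypothetical counts, since the study's raw cell counts are not published here.

from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table of respondents: rows = funding setting
# (publicly funded, privately funded); columns = norm-referenced test
# use (yes, no). These counts are illustrative, not the study's data.
table = [[55, 30],
         [17, 13]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# A p-value above 0.05, as in the survey's comparisons (p = 0.669 and
# p = 0.488), would indicate no significant association between the
# work setting and the use of the assessment tool.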
Checklists Used with Children Who Have Cochlear Implants
For the subset of children who use cochlear implants, a very similar pattern
was observed (Figure 2). Respondents used a variety of methods to monitor
spoken language development in children with cochlear implants. A total of
52.6% of respondents reported using checklists with this population.
Therapists also reported that consultation with the cochlear implant team
helped to determine progress in children (this result is reported in the category
referred to as ‘‘other’’ in Figure 2). For children under age 3 who use cochlear
implants, 80.2% of respondents used checklists to track progress at least every 6
months.
Perspectives on Checklists Used
Participants listed 62 different checklists used in current practice, some of
which were utilized by only 1 or 2 therapists (Neuss et al., 2009). Table 3 outlines
the seven checklists that were utilized by more than 10% of respondents.
Several of these tools cover multiple domains of development, including
audition, speech, language, and cognition (e.g., St. Gabriel’s Curriculum for the
Development of Audition, Language, Speech, and Cognition; Tuohy, Brown, &
Mercer-Moseley, 2005), while others focus more specifically on the development of auditory skills (e.g., Checklist of Auditory Communication Skills; Franz, Caleffe-Schenck, & Iler Kirk, 2004).

Figure 3. Strengths of checklists.
Strengths and Weaknesses of Checklists
Participants were asked to describe their views on the strengths and
weaknesses of the checklists they used. Responses related to the strengths of
these checklists were coded into several categories, as shown in Figure 3.
Therapists found them particularly useful for coaching parents, while
development of teaching goals and monitoring progress were also seen as
valuable features. Nearly one-third (29.3%) of respondents identified the ease
of use with young children as a particular strength of checklists.
Despite the usefulness of monitoring protocols, 70.2% of participants
expressed a lack of confidence in the measures. The lack of clarity and
comprehensiveness were noted to be of particular concern. As stated by one
participant, who indicated that she always uses checklists and finds them very
useful, ‘‘They are not standardized in any way—moreover they cannot give
you information about whether the amount of progress made is good, O.K., or a
cause for concern.’’
Discussion
This study aimed to identify the assessment tools used by LSLS Cert. AVTs
who work with young children (ages birth to 3 years old) with hearing loss.
Through a survey, respondents indicated that they use a range of methods to
gauge progress with young children who have hearing loss. Checklists were
reported as the most common method of monitoring progress, with 62 different
checklists being identified by the respondents.
Although 69.7% of respondents report regularly using checklists and 94.7%
find them useful, the majority (70.2%) also expressed concerns regarding their
lack of clarity and the inability to draw conclusions about a child’s progress
based on checklist results. It is noteworthy that nearly one-third (29.3%) of
participants commented on the ease of using checklists, a finding that may
account for their widespread application in monitoring progress. Participants
mainly reported using checklists for coaching parents, developing goals, and
monitoring progress. Other investigators have highlighted that these are
important goals when assessing young children (Nicholas & Geers, 2008;
Nikolopoulos, Archbold, & Gregory, 2005). The majority of checklists that
respondents reported using include auditory behaviors that emerge in young
children as they learn to listen. This finding suggests that measures that probe
specific auditory skills are considered highly useful in assessing young
children developing listening and spoken language.
Similar findings were obtained for children with cochlear implants, where
respondents also reported using a range of methods, including the extensive
use of checklists, to gauge progress. It is noteworthy that the majority of the
checklists in use were developed prior to the widespread use of cochlear
implants, and that some items may not be entirely applicable in current
practice.
Limitations
One potential limitation of the current study is that participants were
provided with four specific examples when asked to identify the checklists they
use in their practice. This may have resulted in bias by the inclusion of those
checklists at the expense of others. Nonetheless, a total of 62 different measures
were identified, which suggests that any bias may have been offset. As with any
survey, there is the possibility that questions may be interpreted differently by
the respondents than was the intent of the researchers. Although all but 3
respondents worked with young children, 27.6% reported working with a
smaller number of children (less than 25% of caseload) in this age range at the
time they completed the survey. No information was collected on participants’
previous experience. The respondents practiced primarily in four countries,
and therefore represent a range of clinical practices in different geographic
areas. This survey was conducted in 2008 and reflects respondents' experience
at that time.
Conclusion
This research demonstrates that checklists continue to be widely used by
LSLS Cert. AVTs in working with a young population. The results indicate that
listening and spoken language professionals apply the information to parent
coaching, goal development, and monitoring progress, and value the ease of
administration. This research provides an international perspective on the
methods used by LSLS Cert. AVTs to monitor children’s listening and spoken
language development.
Acknowledgements
We would like to thank the Faculty of Health Sciences of the University of
Ottawa and the Research Institute of the Children’s Hospital of Eastern
Ontario for providing the Partnership Funding Grant for this study. The
authors gratefully acknowledge the insightful comments provided by our
auditory-verbal therapist colleagues, Erin McSweeney, Kelley Rabjohn,
Rosemary Somerville, and Pamela Steacie. Research time provided by the
former Director of Operations of the Rehabilitation Patient Service Unit,
Lloyd Cowin, and Audiology Professional Practice Leader, Linda Moran,
allowed the first author the time necessary to complete this project. We want
to thank Emma LeBlanc and Erin McSweeney for their contributions to the
editing and formatting of this work.
References
Beiter, A., & Estabrooks, W. (2006). The cochlear implant and auditory-verbal
therapy. In W. Estabrooks (Ed.), Auditory-verbal therapy and practice (pp. 45–
73). Washington, DC: Alexander Graham Bell Association for the Deaf and
Hard of Hearing.
Cole, E. B., & Flexer, C. (2011). Children with hearing loss: Developing listening and
talking birth to six (2nd ed.). San Diego, CA: Plural Publishing.
Dettman, S. J., Pinder, D., Briggs, R. J., Dowell, R. C., & Leigh, J. R. (2007).
Communication development in children who receive the cochlear implant
younger than 12 months: Risks versus benefits. Ear and Hearing, 28(Suppl.),
11S–18S.
Dillman, D. (2000). Mail and internet surveys: The tailored design method. New
York, NY: John Wiley & Sons.
Estabrooks, W. (1998). Cochlear implants for kids. Washington, DC: Alexander
Graham Bell Association for the Deaf and Hard of Hearing.
Estabrooks, W. (2006). Auditory-verbal therapy and practice. Washington, DC:
Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Franz, D. C., Caleffe-Schenck, N., & Iler Kirk, K. (2004). A tool for assessing
functional use of audition in children: Results in children with the MED-EL
COMBI 40+ cochlear implant system. The Volta Review, 104(3), 175–196.
Hogan, S., Stokes, J., White, C., Tyszkiewicz, E., & Woolgar, A. (2008). An
evaluation of auditory-verbal therapy using the rate of early language
development as an outcome measure. Deafness and Education International,
10(3), 143–167.
Neuss, D., Fitzpatrick, E., Durieux-Smith, A., Olds, J., Moreau, K., Pleau, I. A., . . . Yazdi, F. (2009). Early communication outcomes in infants and toddlers after
cochlear implantation: Towards a reliable assessment tool. Poster presented at the
12th Pediatric Cochlear Implantation Conference, Seattle, Washington.
Nicholas, J. G., & Geers, A. E. (2008). Expected test scores for preschoolers with
a cochlear implant who use spoken language. American Journal of Speech-Language Pathology, 17, 121–138.
Nikolopoulos, T. P., Archbold, S. M., & Gregory, S. (2005). Young deaf children
with hearing aids or cochlear implants: Early assessment package for
monitoring progress. International Journal of Pediatric Otorhinolaryngology,
69(2), 175–186.
Pollack, D., Goldberg, D., & Caleffe-Schenck, N. (1997). Educational audiology for
the limited hearing infant and preschooler (3rd ed.). Springfield, IL: Charles C.
Thomas.
Rossetti, L. (1990). The Rossetti Infant-Toddler Language Scale. East Moline, IL:
LinguiSystems.
Simser, J. (1993). Auditory-verbal intervention: Infants and toddlers. The Volta
Review, 95(3), 217–229.
Sindrey, D. (1997). Listening for littles. London, Ontario: Word Play Productions.
Tuohy, J., Brown, J., & Mercer-Moseley, C. (2005). St. Gabriel’s curriculum for the
development of audition, language, speech, cognition, early communication, social
interaction, fine motor skills, gross motor skills: A guide for professionals working
with children who are hearing-impaired (birth to six years) (2nd ed.). Sydney: St.
Gabriel’s Auditory-Verbal Early Intervention Centre.
Wilkes, E. (1999). Cottage Acquisition Scales of Listening, Language and Speech. San
Antonio, TX: Sunshine Cottage School for Deaf Children.
Wu, C. D., & Brown, P. M. (2004). Parents’ and teachers’ expectations of
auditory-verbal therapy. The Volta Review, 104(1), 5–20.
Appendix A. Survey
1. How many years have you been in the field of auditory-verbal practice?
2. In what country do you practice?
3. What is your current work setting? (Check all that apply)
Hospital
School
Private Clinic
Other (please specify)
4. How are your services funded? (check all that apply)
Public Funds
Privately Funded Clinic or Institution
Private Practice
Other (please specify)
5. How do/did you gauge progress and set goals amongst your 0–3
population?
6. What percentage of your caseload is 0–3 years of age?
0–25%
26–50%
51–75%
76–100%
N/A
7. How often do you use checklists?
Never
Rarely
Sometimes
Often
Always
N/A
8. How useful are checklists?
Very useful
Somewhat useful
Neutral
Not useful
9. What do you see as the strengths of checklists?
10. What do you see as the weaknesses of checklists?
11. Please indicate each of the checklists that you use or have used in your auditory-verbal practice:
Development of Auditory Skills (Simser, 1993)
Listening for Littles (Sindrey, 1997)
Auditory-Verbal Ages and Stages of Development (Estabrooks, 1998)
Checklist of Auditory Communication Skills (Franz, Caleffe-Schenck,
& Iler Kirk, 2004)
Other (please specify)
12. For what purpose do you use checklists?
Lesson planning/goal setting
Gauge progress
Diagnosis
Prognosis
Feedback to parents
Feedback to other service providers
Other (please specify)
13. I think a checklist that tracks progress on the 0–3 year population of
children who use cochlear implants would be useful.
Strongly disagree
Disagree
Neutral
Agree
Strongly agree
14. How do/did you gauge progress and set goals amongst your 0–3 year
population of children who use cochlear implants?
15. How frequently would you use a checklist that tracks progress in the 0–3
year population of children who use cochlear implants?
Daily
Weekly
Monthly
Every six months
Annually
Never
Other (please specify)
16. Would you be willing to participate in the next step of this study? This
step will involve a survey that will lead to the development of a
preliminary composite checklist.
17. If you are willing to participate in the next step of this study, please
provide your contact information:
Name:
Address:
City/Town:
Postal Code:
Country:
Email Address:
Phone Number:
The Volta Review, Volume 113(1), Spring 2013, 57–73
Interactive Silences: Evidence
for Strategies to Facilitate
Spoken Language in Children
with Hearing Loss
Ellen A. Rhoades, Ed.S., LSLS Cert. AVT
Interactive silences are important strategies that can be implemented by
practitioners and parents of children with hearing loss who are learning a spoken
language. Types of adult self-controlled pauses and evidence pertaining to the
function of those pauses are discussed, followed by a review of advantages to justify
implementation of these turn-taking strategies. Practitioners may benefit from self-analyses to ensure that interactive silences facilitate the process of learning a spoken
language in clinical and classroom settings. A call for further evidence is made to
determine lengths of deliberate pauses that may be particularly advantageous for
young children with hearing loss. This is directly relevant to the optimization of
auditory-verbal practice. Suggestions for future study into this topic are also
provided.
Silences within social interactions are sometimes referred to as deliberate
pauses, lack of audible speech, or acoustic silences (Sacks, Schegloff, &
Jefferson, 1974). An essential condition of these interactive silences is that they
are adult self-controlled pauses, i.e., intervals of time following an adult’s verbal
utterance and preceding another verbal utterance by either a child or the same
adult (Bruneau, 1973).
Interactive silences are multifaceted and complex in their implications and
functions (Duez, 1982). For example, silences can structure and regulate
interpersonal relationships by serving as critical discourse markers (Maroni,
Gnisci, & Pontecorvo, 2008; Saville-Troike, 1995). Some refer to interactive
silences as ‘soft’ turn-taking strategies designed to initiate a subsequent
response from children, cognitively or affectively, verbally or otherwise
Ellen A. Rhoades, Ed.S., LSLS Cert. AVT, is an auditory-verbal consultant and mentor
whose services can be accessed through www.AuditoryVerbalTraining.com.
Correspondence concerning this article may be directed to Dr. Rhoades at [email protected].
(Ephratt, 2008). Interactive silences can also be a means of maintaining eye
contact and establishing an alliance with another person (Ephratt, 2008).
Depending on purpose and situational context, adults within similar
cultures tend to consciously use interactive silences in slightly different ways
and for different durational periods with children (Berger, 2011). However,
across dissimilar cultures, people have considerably different perceptions of
these silences, particularly in length of deliberate pauses (Roberts, Margutti, &
Takano, 2011). For example, people in Japan and China may be more patient
with extensive silences whereas Americans tend to minimize them due to
feelings of discomfort (Shigemitsu, 2005). A period of silence longer than 1–5
seconds can seem interminably prolonged to many North Americans and
Europeans (Vassilopoulos & Konstantinidis, 2012).
Typical infant language development is linked with optimum interactive
style used by parents (Beckwith & Rodning, 1996; Murray, Johnson, & Peters,
1990). Across the school years, interactive silences seem more important for
learning rather than teaching, primarily because they elicit student verbal
responses or cognitive processes (Jaworski & Sachdev, 1998). Likewise, self-controlled pauses can be considered critical operational strategies integral to
the optimal auditory-based intervention for language learning of children and
adolescents of all ages.
Although silence can be a negative experience for some people, it can
afford much strength. Rather than considering interactive silences to be
problematic (Berger, 2011), practitioners are encouraged to leverage the silence.
Some practitioners may find themselves rushing to fill the silent void by
answering their own questions or redirecting their questions to another person
in the room, whether it be another child or the child’s parent.
Some practitioners and parents demonstrate an interactional style that
reflects a fast-paced way of talking. Talk flows rapidly, smoothly, and in
staccato form; questions are not followed by pauses of any significance and are fired off in 'machine-gun' style (Tannen,
1981). While this conversational style may work in some cultures, it may not be
conducive toward optimizing the language learning process, particularly for
children with compromised listening skills.
It is critical that practitioners develop sensitivities to the interactional
relevance of self-controlled pauses, and understand how these pauses can be
modified for optimal use with children and adolescents with hearing loss
(Carey-Sargeant & Brown, 2003). Practitioners are encouraged to engage in
critical analyses of their own communicative talk, both in classroom and
clinical settings (Thornbury, 1996). This assumes, regardless of gender or other
background variables, that it is not so much the quantity of talking across
activities as the quality of verbal interaction that determines the authenticity of
the language learning process (Nunan, 1991; Rowe, 2003; Skinner, Pappas, &
Davis, 2005).
A review of publications, including checklists and rating scales for
mentoring and evaluating auditory-verbal practitioners, reveals that adult
self-controlled pauses tend to be viewed uni-dimensionally, i.e., primarily
pausing after asking questions (see Duncan, Kendrick, McGinnis, & Perigoe,
2010; Estabrooks, 2006; Estes, 2010). The varied purposes and types of
interactive silences do not yet seem widely recognized. The purpose of this
manuscript, then, is to show that judicious use of deliberate silence may
provide important evidence-based strategies for auditory-verbal practitioners
and parents of children with hearing loss.
Types of Self-Controlled Pauses
Deliberate silences exert great power in their meaningfulness. Types of adult
self-controlled pauses depend on their placement in an interaction and on
continuous or discontinuous conversation, and can be transformed when
interpreted differently (Maroni et al., 2008). Interactive silences herein refer to
the many types and purposes of adult self-controlled pauses, including wait-time, think-time, expectant pause, impact pause, and phrasal intraturn pause.
Wait-time, sometimes referred to as the pre-response pause, is accorded the
most attention in the following review of aggregate evidence supporting the
use of deliberate silence with children. However, this should not be
misconstrued as assigning more importance to one type of deliberate pause
than another.
Wait-Time
Questions posed by adults impose a social obligation for children to respond,
usually because questions have strong prosodic cues at their ending (Heeman,
Lunsford, Selfridge, Black, & van Santen, 2010). The period of time following a
question is an operational type of silence commonly referred to as wait-time;
this tends to elicit a verbal response to the verbal inquiry. Tobin (1987) defined
wait-time as the length of undisturbed time that an adult will wait for a child’s
answer to a clear, well-structured question; this pause is also referred to as
reaction- or response-time latency. Wait-times permit children increased
opportunities to decode the meaning of the direction or question.
Rowe (1996) concluded that the length of uninterrupted silence should be at
least 3–5 seconds for children with typical hearing. This strategy is now
routinely recommended for classroom teachers (Gabel, 2004; Skinner et al.,
2005), and school-aged children with typical hearing tend to prefer wait-times
of at least 3–5 seconds (Altiere & Duell, 1991; Maroni, 2011; Rowe, 1969, 1974,
1978, 1986).
Duration of wait-time is related to the complexity of language in both
question and expected response (Maroni et al., 2008). Duration seems
dependent on children’s functional language level and context. Waiting as
long as 10 seconds may not be appropriate for obtaining a yes/no response or a
simple recall of information as some children may need to learn how to respond
within an appropriate time frame.
Wait-times have a strong relationship with socio-cultural attitudinal factors
of traditional or non-westernized worldview (Jegede & Olajide, 1995), hence
wait-times are culture-specific (Maroni et al., 2008; Nelson & Harper, 2006). For
example, while Asian cultures seem to generally expect longer delays in turn-taking, there are differences among Asian countries (Damron & Morman, 2011;
Du-Babcock & Tanaka, 2010). Ultimately, all children have different cognitive
processing times in that their brains work differently, so wait-times need to be
individualized.
While correlation does not imply causation, wait-time tends to be associated
with positive outcomes, particularly when children need time to make sense of
and respond to the demands of complex questions (Brown & Wragg, 1993). Just
as advising parents to provide ongoing verbal input is too simplistic a piece of
advice, the advice of waiting for a predetermined number of seconds is also too
simplistic (Stichter et al., 2009). Duration of wait-time is an important
consideration for helping each child complete the linguistic-cognitive tasks
required in each situation (Stahl, 1994).
Extended wait-times can be effective with children learning English as a
second language (Al-Balushi, 2008). Children with auditory processing
difficulties and with autism spectrum disorder may also benefit from wait-times considerably longer than the customary 0.9 to 1.5 seconds that special
education teachers typically demonstrated (Rowe & Rowe, 2006), possibly
because they have slower processing and response times as well as less
sensitivity to either the question prosody or the social obligation of questions
(Heeman et al., 2010).
Hearing loss also influences processing speed (Zekveld, Kramer, & Festen,
2011). Children with hearing loss tend to require longer pause lengths to
process linguistic information prior to responding (Carey-Sargeant & Brown,
2003). The need for longer pause lengths seems associated with certain
situations, such as that of a noisy academic environment (Towne & Anderson,
1997). Young children have difficulty perceiving speech in noisy environments
(Talarico et al., 2007), more so if they have reading difficulties (Inoue,
Higashibara, Okazaki, & Maekawa, 2011) or hearing loss (Anderson, 2001).
The complexity of the linguistic input can further degrade processing speed;
hearing loss is more likely to negatively affect the rapidity of language
comprehension when listening conditions are difficult and the vocabulary is
more complex. Additionally, some children with hearing loss do not always
hear the linguistic input in its entirety; for those children, the process of ‘filling
in the missing pieces’ is important (Rhoades, 2011). Finally, age is a critical
mediator for the auditory perceptual system to ‘fill in’ what was not heard, a
process known as perceptual restoration (see Winstone, Davis, & De Bruyn,
2012).
Given the redundancy of language and a child’s current knowledge base,
coupled with contextual cues, time is often needed to figure out which parts
were not heard or were misunderstood. Anecdotal reports from auditory-verbal practitioners yield varying opinions as to the length of wait-times,
ranging from 10 to 45 seconds (Rhoades, 2011).
Some children do not respond even after extensive wait-time. Adults can use
this time to decide on an appropriate course of action (Tobin, 1986). For
example, some practitioners may choose to rephrase the question, thus giving
children another period of uninterrupted time to respond. Other practitioners
may opt for a detailed explanation of the initial linguistic unit. Another possible
practitioner choice is that of re-directing the question to another person in the
room. Still another possible choice is that of answering the practitioner’s own
question. Regardless of each adult’s follow-up strategy, wait-time affords both
children and adults time to think (Stahl, 1994).
Think-Time
The post-response pause is referred to as think-time. Brief intentional pauses
after children respond to a question may be helpful. This gives the child time to
add any further thoughts, minimizing the adult error of ‘jumping in’ when the
child hesitates. It also gives time for other children or adults in the room to add
any further thoughts, whether they be supportive or contradictory (Stahl,
1994). This additional self-controlled pause can exceed 3–5 seconds (Tobin &
Capie, 1983).
Expectant Pause
The expectant pause, another form of interactive silence, serves as a mild
directive prompt (Norris & Hoffman, 1994; Stephenson, Carter, & Arthur-Kelly,
2011). This pause is typically preceded by a prosodic cue, such as the intonation
rise at the end of a statement to signal incompleteness (Saville-Troike, 1995).
Stephenson and colleagues (2011) found that when a minimum of 5 seconds
was incorporated in the length of the expectant pause while teaching students
with severe disabilities, opportunities for student participation more than
doubled. Expectant pauses tend to involve eye contact, notably a sustained
gaze, which serves as an additional cue for turn-taking by subtly encouraging
children to vocalize or complete the adult vocalization or to execute the
direction (Carey-Sargeant & Brown, 2003). This type of deliberate pause is
integral to infant-directed speech (Schaffer, 1996).
Expectant pauses are typically preceded by a strategy known as the cloze
procedure in which a child completes the adult’s sentence; it is understood that
the adult expects a verbal utterance from the child, e.g., ‘‘The three little kittens
lost their __.’’ The cloze procedure is very effective for facilitating sustained
attention, more frequent turn-taking, and greater semantic and grammatical complexity (Chatel, 2001; Crowe, Norris, & Hoffman, 2000; Lee, 2008). A
variation of this strategy occurs when the adult verbalizes a sentence involving
a relational tie in which a child is cued to add an element to one previously
presented, e.g., ‘‘So they __.’’
Impact Pause
The impact pause tends to occur within an extended language-based activity
in order to make a point, to grab children’s attention, and to have them wait in
anticipation for the next piece of information (Stahl, 1994). This within-speaker
type of interactive silence tends to act as a stress marker, permitting children a
very brief chunk of time to understand that an important piece of information
was just presented, and then to consolidate personal thinking. Serving as a cognitive signal, the impact pause makes no request of listeners for a verbal response
(Stahl, 1994).
Phrasal Intraturn Pause
The phrasal intraturn pause is another type of within-speaker silence that does
not elicit a vocal utterance from anyone (Carey-Sargeant & Brown, 2003). This
pause reflects a natural rest in the melody of speech, typically indicating
junctures and meaning or grammatical units in spoken language (Berger, 2011;
Ephratt, 2008). Children learning a spoken language are more likely to
recognize grammatical boundaries when adults use phrasal intraturn pauses
(Bruneau, 1973; Jusczyk, 1997).
Just as elderly people benefit from select adult-controlled strategies that
include reduced cognitive load, slower linguistic input, and intraturn pauses
designed to facilitate syntactic analyses (Dagerman, McDonald, & Harm, 2006;
Titon et al., 2006), so do children with auditory processing difficulties (Rowe,
Pollard, & Rowe, 2005). Phrasal intraturn pauses improve the attention skills of
children with auditory processing atypicalities (Rowe et al., 2005). Inserting prolonged pauses between sentences may positively influence
language learning for children with atypical capacities for holding, sequencing,
and recalling auditory information accurately. A phrasal intraturn pause can
render perceptual salience to a particular linguistic unit; therefore, it may be
used when acoustically highlighting a particular phoneme for children with
hearing loss, e.g., sound-object association activities (Rhoades, 2007).
Benefits of Self-Controlled Pauses
Evidentiary findings indicate that there are considerable advantages to using
deliberate silences with children with hearing loss who are in the process of
learning a spoken language. Practitioners may choose to incorporate these
strategies into their practice. Ten benefits to using interactive silences are
discussed here.
Facilitates Sustained Auditory Attention
By their very nature, self-controlled pauses seemingly facilitate and then
maintain children’s attention, yet there is limited evidence for this assumption
(Rowe et al., 2005). Given that attention is a core cognitive skill underpinning all
executive capacities, this is a critical consideration. Time is ‘bought’ for children
to hear or attend to what was said (Rowe, 1996; Tobin, 1983, 1986). When
children are routinely expected to actively participate in activities with adults,
they tend to become active listeners. For example, experienced practitioners
using the Ling Six-Sound Test (Ling, 2003) as part of their therapy sessions
intuitively understand the need for absence of sound interspersed with /m/, /a/, /oo/, /ee/, /s/, and /sh/. ''Listening to silences can be just as instructive as
listening to voices, maybe more so’’ (Losey, 1997, p. 191).
Promotes Listening by Bracketing Meaningful Language
Adults can provide impact pauses, usually thought of as stress markers,
during extended language input activities without requesting a child follow
through with a verbal response. This gives children some uninterrupted time to
briefly consider the information in smaller chunks rather than all at once (Stahl,
1994). In turn, this can facilitate attention to specific linguistic units and
promote comprehension of a particular construct or pattern.
Encourages the Processing of Linguistic Units
Young children who are learning a spoken language process that language
differently than adults (Swingley, Pinto, & Fernald, 1999). Depending on
acoustic-phonetic information and lexical constraints, additional processing
time is important for facilitating their comprehension of spoken language
(LoCasto, Connine, & Patterson, 2007). Because cognitive processes mediate
the effect of adult-controlled pauses on language learning, there is much
heterogeneity among children in how they process spoken language and the
degree to which interactive silences affect that processing. The processing time
afforded children for more complex questions or directions should be greater
than the time afforded for a less complex verbal response, relative to each
child’s functional level of language (Brown & Wragg, 1993; Tobin, 1987).
Nourishes Contemplative and Speculative Thinking
Interactive silences are viewed as advanced information processing times
(Stahl, 1994; Tobin, 1986; Tobin & Capie, 1983) where children can reflect upon
what was said during those intervals (Vassilopoulos & Konstantinidis, 2012).
Pause lengths differentially affect how listeners use their cognitive skills
(Weiss, Gerfen, & Mitchel, 2010), with more complex questions or responses
having a greater effect on how they use their executive capacities (Maroni et al.,
2008; Taylor, 1969). Collaboration among children in the classroom is
encouraged (Maroni, 2011), particularly because more children tend to verbally
participate (Tobin, 1986). In general, more significant educational benefits
accrue from silences extending beyond 3 seconds (Altiere & Duell, 1991;
Maroni, 2011; Riley, 1986).
Informs Listener of Expected Response
Both the expectant pause and wait-time, via sustained eye gaze and
inflectional voice difference at the end of utterances, inform listeners that a
response is expected. This expectation, however mild, means these deliberate
pauses also serve a prompting function.
Provides Time to Formulate a Linguistic Response
When children respond to adults, the effort they expend is more cognitively
involved than just listening and processing the information (Met, 1994; Nunan,
1987, 1991). Deliberate silences afford children an opportunity to consider how
to formulate personal verbal responses (Arco & McClusky, 1981; Halle,
Marshall, & Spradlin, 1979). However, Stahl (1994) warns that imprecise adult
questions tend to increase a child’s confusion and to heighten frustration, thus
leading to no response at all. Wait-times seem to warrant longer periods of
deliberate silence. Adults should extend deliberate pauses for children from
whom lengthier responses are expected, appropriately modifying them to each
child and situation.
Fosters Improved Linguistic Responses
Frequency, length, and complexity of children’s responses are improved
when silence intervals are sufficient in length (Allen & Tanner, 2002; Deelman &
Connine, 2001; Pitt & Samuel, 1995; Zwitserlood & Schriefers, 1995), and fewer
of their responses are likely to be ‘I don’t know’ (Rowe, 1996). Additionally, the
amount of time that adults talk is decreased (Riley, 1986; Skinner, Fletcher, &
Henington, 1996).
Demonstrates Respect and Value for Each Child’s Opinion
Wait-time, think-time, and expectant pauses encourage children to initiate
verbal interaction more frequently, possibly because children perceive the
adults’ interest in what they have to say (Hayes, 1998). Adults who wait for
children to respond rather than being too quick to repeat the question, execute
the direction, or assume a child did not understand, demonstrate respect and
value for each child’s opinion (Black, Harrison, Lee, Marshall, & William, 2003).
In turn, this facilitates the child’s self-confidence, thus reducing the need for
adults to provide positive rewards (Rowe, 1996). When adult rewards decrease,
children tend to focus more on the task rather than on what they think adults
seek; the source of children’s satisfaction is then altered (Rowe, 1996).
Improves Adult Questioning Strategies
With consistent use of extended wait-times, the frequency, length, variability,
flexibility, and complexity of later adult questions tend to increase
(Rowe, 1996; Tobin, 1986). Moreover, adult reactions to children’s answers tend
to be more varied and flexible (Rowe, 1996).
Elevates Adult Expectation Levels
When children respond appropriately to interactive silences, adult expectations about what the children can do are often modified and, as reviewed
elsewhere by Rhoades' (2010) explanation of expectation levels and the self-fulfilling prophecy, this can have a significant influence on learning
opportunities afforded the child (Rowe, 1996).
Interactive Silences and Auditory-Based Practice
The generic advice of ‘talk, talk, talk’ that is sometimes given to parents of
children with hearing loss is not enough. Both quantity and quality of parental
and practitioner input matter (Rowe, 2012). The use of adult self-controlled
pauses reflects good pedagogy that enhances cognitive development and
interactive communication for both language and academic learning in children
(Carlsen, 1991; Leach & LaRocque, 2011; Vassilopoulos & Konstantinidis, 2012).
At the least, wait-time is a type of interactive silence that has been incorporated
into some practitioner observation tools for those who work with children who
have typical hearing and those with hearing loss (Baysen & Baysen, 2010;
Brown, 2008).
The Need for Effective Implementation
Silence makes for great discomfort in some adults, including inexperienced
practitioners (Ollin, 2008). For many reasons, it can be challenging for
practitioners and parents to employ deliberate silences as a conscious strategy
within their respective settings. Entrenched patterns of interaction, considered
ritualized teaching behaviors, may be modified with some difficulty (Maingay,
1988; Thornbury, 1996). Adults may feel uncomfortable when stripped of their
dependence on non-interactive talking. Studies show that it can be relatively
easy to effect changes in their use of interactive silences (Nind, 1996;
Vassilopoulos & Konstantinidis, 2012), but maintaining extended pauses in
the months following training may be problematic for some (Tobin, 1987).
Nevertheless, it is important to sensitize adults to their own use of interactive
silences simply due to the evidence supporting its importance to the language
learning process (Beckwith & Rodning, 1996; Murray et al., 1990). Short video
self-analyses can be very important to sensitization and practitioner training
processes (Baysen & Baysen, 2010; Phillips, 1994); therefore, it is suggested that
this type of critical reflection be integral to the mentoring process. Such self-analyses, also referred to as discourse analysis, can force individuals to
confront their own teaching habits and style (Nunan, 1987). It is imperative that
this raised awareness of interactive silences, along with appropriate questions
and answers as well as affective and cognitive engagement of all involved
persons, be translated into clinical and classroom practice. Awareness-raising
and reflection can and do facilitate positive change (Kumaravadivelu, 1994;
Swift & Gooding, 1983).
Carey-Sargeant and Brown (2003) state ‘‘...the overt teaching of how to use
pause length as a communication strategy may be a practical and useful tool,
particularly within early intervention programs, for developing the language
and communication skills of children with a hearing impairment’’ (p. 56).
Skilled practitioners can learn how to routinely implement self-controlled
pauses that are appropriate to an individual child’s needs and situations.
Implementation of deliberate silences provides clear advantages for all
children, including improvements in behavioral functioning, language
comprehension, and academic learning. Assuming the art of timing has been
mastered, prolonged moments of silence can afford auditory-based practitioners and parents of children with hearing loss some highly effective
operational strategies. To paraphrase the poet Oliver Herford, experienced
practitioners should be known by the silence they keep.
The Call for Evidence
There are cross-linguistic findings pertaining to timing issues of interactive
silences for people with typical hearing (as reviewed by Heldner & Edlund,
2010). There is also evidence that deliberate pauses are integral to child-directed speech, which facilitates the language learning process for children
developing typically and atypically across cultures and languages (Bergeson,
Miller, & McCune, 2006; Kempe, Schaeffler, & Thoresen, 2010; Sheng,
McGregor, & Xu, 2003). In a study of adolescents with typical hearing, waiting
longer than 5–6 seconds did not yield significant response differences (Duell,
1994). This sort of finding raises an important question: Is there a point of
diminishing returns with wait-times that are extended beyond a certain
number of seconds? Does the optimal range of wait-time vary considerably
from younger children to adolescents? To what extent is length of wait-time
affected by the linguistic complexity of queries directed at language learning
children with hearing loss?
While the use of wait-time has been researched with some children with
cognitive delays (Halle et al., 1979; Heeman et al., 2010; Tincani & Crozier,
2007), this operational variable has yet to be studied for outcome effectiveness
among practitioners serving children and adolescents with hearing loss. There
is no evidence as of yet that wait-time should be of a particular duration when
used with children whose auditory access to spoken language is compromised.
Additionally, there is limited evidence that the length of phrasal intraturn
pauses during verbal interactions between parents and children with hearing
loss should be slightly longer and varied as a function of the child’s language
level (Carey-Sargeant & Brown, 2003). Do all young children with hearing loss
benefit from longer phrasal intraturn pauses? Would more perceptual and
cognitive processing time for children with hearing loss facilitate the language
learning process? If so, what is the optimal pause duration—500 milliseconds?
1–3 seconds? Does the child’s aided/implanted threshold serve as a mediating
variable, and, if so, to what extent?
Unanswered questions have to do with length and frequency of these
different types of interactive silences as a function of chronological age,
functional language level, degree of auditory access to soft conversational
sound, cultural background, and executive capacities as evidenced by
individual children. It has been suggested that further studies ‘‘could improve
sensitivity to the interactional relevance of pause and maximize professional
education and therapy techniques’’ (Carey-Sargeant & Brown, 2003, p. 55). Just
as spoken language matters, judicious use of silence also matters.
References
Al-Balushi, S. N. M. (2009). An investigation into how silent wait-time assists
language learning. In S. Borg (Ed.), Classroom research in English language
teaching in Oman (pp. 2–7). Muscat, Oman: Ministry of Education.
Allen, D., & Tanner, K. (2002). Approaches in cell biology teaching. Cell Biology
Education, 1, 3–5.
Altiere, M. A., & Duell, O. K. (1991). Can teachers predict their students’ wait
time preference? Journal of Research in Science Teaching, 28, 455–461.
Anderson, K. L. (2001). Kids in noisy classrooms: What does the research say?
Journal of Educational Audiology, 9, 21–33.
Arco, C. M., & McCluskey, K. A. (1981). ‘‘A change of pace’’: An investigation of
the salience of maternal temporal style in mother-infant play. Child
Development, 52, 941–949.
Baysen, E., & Baysen, F. (2010). Prospective teachers' wait-times. Procedia –
Social and Behavioral Sciences, 2, 5172–5176.
Beckwith, L., & Rodning, C. (1996). Dyadic processes between mothers and
preterm infants: development at ages two to five years. Infant Mental Health
Journal, 17, 322–333.
Berger, I. (2011). Support and evidence for considering local contingencies in
studying and transcribing silence in conversation. Pragmatics, 21, 291–306.
Bergeson, T. R., Miller, R. J., & McCune, K. (2006). Mothers' speech to hearing-impaired infants and children with cochlear implants. Infancy, 10, 221–240.
Black, P., Harrison, C., Lee, C., Marshall, B., & William, D. (2003). Working inside
the black box: Assessment for learning in the classroom. London: King's College.
Brown, C. J. (2008). CASTLE Internship Teaching Behaviors. Carolina Children’s
Communicative Disorders Program, University of North Carolina-Chapel
Hill.
Brown, G., & Wragg, E. C. (1993). Questioning. New York: Routledge.
Bruneau, T. J. (1973). Communicative silences: Forms and function. Journal of
Communication, 23(1), 17–46.
Carey-Sargeant, C. L., & Brown, P. M. (2003). Pausing during interactions
between deaf toddlers and their hearing mothers. Deafness and Education
International, 5, 39–58.
Carlsen, W. S. (1991). Questioning in classrooms: A sociolinguistic perspective.
Review of Educational Research, 61, 157–178.
Chatel, R. G. (2001). Diagnostic and instructional uses of the cloze procedure.
The NERA Journal, 37(1), 3–6.
Crowe, L. K., Norris, J. A., & Hoffman, P. R. (2000). Facilitating storybook
interactions between mothers and their preschoolers with language
impairment. Communication Disorders Quarterly, 21, 131–146.
Dagerman, K. S., McDonald, M., & Harm, M. (2006). Aging and the use of
context in ambiguity resolution: Complex changes from simple slowing.
Cognitive Science, 30, 311–345.
Damron, J. C. H., & Morman, M. T. (2011). Attitudes toward interpersonal
silence within dyadic relationships. Human Communication, 14, 183–203.
Deelman, T., & Connine, C. M. (2001). Missing information in spoken word
recognition: Non-released stop consonants. Journal of Experimental Psychology: Human Perception and Performance, 27, 656–663.
Du-Babcock, B., & Tanaka, H. (2010). Turn-taking behavior and topic
management strategies of Chinese and Japanese business professionals: A
comparison of intercultural group communication. Proceedings of the 75th
Annual Convention of the Association for Business Communication, October 27–
30, 2010; Chicago, IL.
Duell, O. K. (1994). Extended wait time and university student achievement.
American Educational Research Journal, 31, 397–414.
Duez, D. (1982). Silent and non-silent pauses in three speech styles. Language
and Speech, 25, 11–28.
Duncan, J., Kendrick, A., McGinnis, M. D., & Perigoe, C. (2010). Auditory
(re)habilitation teaching behavior rating scale. Journal of the Academy of
Rehabilitative Audiology, 43, 65–86.
Ephratt, M. (2008). The functions of silence. Journal of Pragmatics, 40, 1909–1938.
Estabrooks, W. (2006). Auditory-verbal therapy and practice. Washington, DC:
Alexander Graham Bell Association for the Deaf and Hard of Hearing.
Estes, E. L. (2010). Listening, language, and learning: Skills of highly qualified
Listening and Spoken Language Specialists in educational settings. The Volta
Review, 110, 169–178.
Gabel, D. (2004). Science. In G. Cawelti (Ed.), Handbook of research on improving
student achievement (pp. 123–143). Arlington, VA: Educational Research
Service.
Halle, J. W., Marshall, A. M., & Spradlin, J. E. (1979). Time delay: A technique to
increase language use and facilitate generalization in retarded children.
Journal of Applied Behavioral Analysis, 12, 431–439.
Hayes, D. (1998). Effective verbal communication. London: Hodder & Stoughton.
Heeman, P. A., Lunsford, R., Selfridge, E., Black, L., & van Santen, J. (2010).
Autism and interactional aspects of dialogue. Proceedings of SIGDIAL 2010:
the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue,
pp. 249–252. The University of Tokyo, September 24–25, 2010.
Heldner, M., & Edlund, J. (2010). Pauses, gaps, and overlaps in conversations.
Journal of Phonetics, 38, 555–568.
Inoue, T., Higashibara, F., Okazaki, S., & Maekawa, H. (2011). Speech
perception in noise deficits in Japanese children with reading difficulties:
effects of presentation rate. Research in Developmental Disabilities, 32, 2748–
2757.
Jaworski, A., & Sachdev, I. (1998). Beliefs about silence in the classroom.
Language and Education, 12, 273–292.
Jegede, O. J., & Olajide, J. O. (1995). Wait-time, classroom discourse, and the
influence of socio cultural factors in science teaching. Science Education, 79,
233–249.
Jusczyk, P. W. (1997). The discovery of spoken language. Cambridge, MA: MIT.
Kempe, V., Schaeffler, S., & Thoresen, J. C. (2010). Prosodic disambiguation in
child-directed speech. Journal of Memory and Language, 62, 204–225.
Kumaravadivelu, B. (1994). The post-method condition: (E)merging strategies
for second/foreign language teaching. TESOL Quarterly, 28, 27–49.
Leach, D., & LaRocque, M. (2011). Increasing social reciprocity in young
children with Autism. Intervention in School and Clinic, 46, 150–156.
Lee, S. H. (2008). Beyond reading and proficiency assessment: The rational
cloze procedure as stimulus for integrated reading, writing, and vocabulary
instruction and teacher-student interaction in ESL. System, 36, 642–660.
Ling, D. (2003). The Six-Sound Test. The Listener, Winter 2002/2003, 52–53.
LoCasto, P. C., Connine, C. M., & Patterson, D. (2007). The role of additional
processing time and lexical constraint in spoken word recognition. Language
and Speech, 50, 53–75.
Losey, K. M. (1997). Listen to the silences: Mexican American interaction in the
composition classroom and community. Norwood, NJ: Ablex.
Maingay, P. (1988). Observation for training, development, or assessment? In T.
Duff (Ed.), Explorations in teacher training: Problems and issues (pp. 118–131).
Harlow, UK: Longman.
Maroni, B. (2011). Pauses, gaps and wait time in classroom interaction in
primary schools. Journal of Pragmatics, 43, 2081–2093.
Maroni, B., Gnisci, A., & Pontecorvo, C. (2008). Turn-taking in classroom
interactions: Overlapping, interruptions and pauses in primary school.
European Journal of Psychology of Education, 23, 59–76.
Met, M. (1994). Teaching content through a second language. In F. Genesee
(Ed.), Educating second language children (pp. 159–182). New York: Cambridge
University.
Murray, A. D., Johnson, J., & Peters, J. (1990). Fine tuning of utterance length to
preverbal infants: Effects on later language development. Journal of Child
Language, 17, 511–525.
Nelson, C., & Harper, V. (2006). A pedagogy of difficulty: Preparing teachers to
understand and integrate complexity in teaching and learning. Teacher
Education Quarterly, Spring, 7–21. Retrieved from http://www.web.mac.com/grinell/iWeb/...files/07nelson%26harper-33_2.pdf
Nind, M. (1996). Efficacy of intensive interaction: Developing sociability and
communication in people with severe and complex learning difficulties
using an approach based on caregiver–infant interaction. European Journal of
Special Needs Education, 11, 48–66.
Norris, J. A., & Hoffman, P. R. (1994). Whole language and representational
theories: Helping children to build a network of associations. Communication
Disorders Quarterly, 16, 5–12.
Nunan, D. (1987). Communicative language teaching: making it work. English
Language Teaching Journal, 41, 136–145.
Nunan, D. (1991). Language teaching methodology. Upper Saddle River, NJ:
Prentice Hall.
Ollin, R. (2008). Silent pedagogy and rethinking classroom practice: Structuring
teaching through silence rather than talk. Cambridge Journal of Education, 38,
265–280.
Phillips, D. (1994). The functions of silence within the context of teacher
training. English Language Teaching Journal, 48, 266–271.
Pitt, M. A., & Samuel, A. G. (1995). Lexical and sublexical feedback in auditory
word recognition. Cognitive Psychology, 29, 149–188.
Rhoades, E.A. (2007). Sound-object associations. In S. Easterbrooks, & E. Estes
(Eds.), Helping children who are deaf and hard of hearing learn spoken language
(pp. 181–188). Thousand Oaks, CA: Corwin Press.
70
Rhoades
Rhoades, E. A. (2010). Revisiting labels: ‘Hearing’ or not? The Volta Review, 110,
55–67.
Rhoades, E. A. (2011). Listening strategies to facilitate spoken language
learning among signing children with cochlear implants. In R. Paludneviciene and I. W. Leigh (Eds.), Cochlear implants: Shifting perspectives (pp. 142–
171). Washington, DC: Gallaudet University.
Riley, J. P. (1986). The effects of teachers’ wait-time and knowledge
comprehension questions on science achievement. Journal of Research in
Science Teaching, 23, 335–342.
Roberts, F., Margutti, P., & Takano, S. (2011). Judgments concerning the valence
of inter-turn silence across speakers of American English, Italian, and
Japanese. Discourse Processes, 48, 331–354.
Rowe, K. (2003). The importance of teacher quality as a key determinant of students’
experiences and outcomes of schooling. Retrieved from https://www.det.nsw.edu.au/proflearn/docs/.../Rowe_2003_Paper.pdf
Rowe, K., & Rowe, K. (2006). BIG issues in boys’ education: Auditory processing
capacity, literacy and behaviour. Retrieved from http://research.acer.edu.au/
boys_edu/2
Rowe, K. S., Pollard, J., & Rowe, K. J. (2005). Literacy, behaviour, and auditory
processing: Does teacher professional development make a difference? Background
paper to Rue Wright Memorial Award presentation at the 2005 Royal
Australasian College of Physicians Scientific Meeting, Wellington NZ, May
6–11.
Rowe, M. B. (1969). Science, soul and sanctions. Science and Children, 6(6), 11–13.
Rowe, M. B. (1974). Wait-time and rewards as instructional variables, their
influence on language, logic and fate control: Parts I and II. Journal of Research
in Science Teaching, 11, 81–84 and 291–308.
Rowe, M. B. (1978). Wait, wait, wait... School Science and Mathematics, 78, 207–
216.
Rowe, M. B. (1986). Wait time: Slowing down may be a way of speeding up!
Journal of Teacher Education, 37, 43–50.
Rowe, M. B. (1996). Science, silence, and sanctions. Science and Children, 34, 35–
37.
Rowe, M. L. (2012). A longitudinal investigation of the role of quantity and
quality of child-directed speech in vocabulary development. Child Development, 83, 1762–1774.
Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the
organization of turn-taking for conversation. Language, 50, 696–735.
Saville-Troike, M. (1995). The place of silence in an integrated theory of
communication. In M. Saville-Troike and D. Tannen (Eds.), Perspectives on
silence (pp. 3–20). Norwood, NJ: Ablex.
Schaffer, H. R. (1996). Social development. New York: Routledge.
Interactive Silences
71
Sheng, L., McGregor, K. K., & Xu, Y. (2003). Prosodic and lexical-syntactic
aspects of the therapeutic register. Clinical Linguistics & Phonetics, 17, 355–
363.
Shigemitsu, Y. (2005). Different interpretations of pauses in natural conversation—Japanese, Chinese and Americans. Academic Reports: Faculty of English,
Tokyo Polytechnic University, 28, 8–14.
Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates
by increasing student response rates: A summary of research. School
Psychology Quarterly, 11, 313–325.
Skinner, C. H., Pappas, D. N., & Davis, K. A. (2005). Enhancing academic
engagement: Providing opportunities for responding and influencing
students to choose to respond. Psychology in the Schools, 42, 389–403.
Stahl, R. J. (1994). Using ‘‘Think-Time’’ and ‘‘Wait-Time’’ skillfully in the
classroom. ERIC Digest, ED370885. Retrieved from http://www.ericdigests.
org/1995-1/think.htm
Stephenson, J., Carter, M., & Arthur-Kelly, M. (2011). Professional learning for
teachers without special education qualifications working with students
with severe disabilities. Teacher Education and Special Education, 34, 7–20.
Stichter, J. P., Lewis, T. J., Whittaker, T. A., Richter, M., Johnson, N. W., &
Trussell, R. P. (2009). Assessing teacher use of opportunities to respond and
effective classroom management strategies: Comparisons among high- and
low-risk elementary schools. Journal of Positive Behavior Interventions, 11, 68–
81.
Swift, J. N., & Gooding, C. T. (1983). Interaction of wait time feedback and
questioning instruction on middle school science teaching. Journal of Research
in Science Teaching, 20, 721–730.
Swingley, D., Pinto, J. P., & Fernald, A. (1999). Continuous processing in word
recognition at 24 months. Cognition, 71, 73–108.
Talarico, M., Abdilla, G., Aliferis, M., Balazic, I., Giaprakis, I., Stefanakis,
T. . .Paolini, A. G. (2007). Effect of age and cognition on childhood speech in
noise perception abilities. Audiology & Neurotology, 12, 13–19.
Tannen, D. (1981). The machine-gun question: An example of conversational
style. Journal of Pragmatics, 5, 383–397.
Taylor, I. (1969). Content and structure in sentence production. Journal of Verbal
Learning and Verbal Behavior, 8, 170–175.
Thornbury, S. (1996). Teachers research teacher talk. English Language Teaching
Journal, 50, 279–289.
Tincani, M., & Crozier, S. (2007). Comparing brief and extended wait-time
during small group instruction for children with challenging behavior.
Journal of Behavioral Education, 16, 355–367.
Titon, D. A., Koh, C. K., Kjelgaard, M. M., Bruce, S., Speer, S. R., & Wingfield, A.
(2006). Age-related impairments in the revision of syntactic misanalyses:
Effects of prosody. Language & Speech, 49, 75–99.
72
Rhoades
Tobin, K. (1986). Effects of teacher wait time on discourse characteristics in
mathematics and language arts classes. American Educational Research Journal,
23, 191–200.
Tobin, K. (1987). The role of wait time in higher cognitive level learning. Review
of Educational Research, 57, 69–95.
Tobin, K. G. (1983). The effects of wait-time on classroom learning. European
Journal of Science Education, 5, 35–48.
Tobin, K., & Capie, W. (1983). The influence of wait-time on classroom learning.
European Journal of Science Education, 5, 35–48.
Towne, R., & Anderson, K. (1997). The changing sound of education. Sound and
Vibration, January, 48–51.
Vassilopoulos, S. P., & Konstantinidis, G. (2012). Teacher use of silence in
elementary education. Journal of Teaching and Learning, 8, 91–105.
Weiss, D. J., Gerfen, C., & Mitchel, A. D. (2010). Colliding cues in word
segmentation: The role of cue strength and general cognitive processes.
Language and Cognitive Processes, 25, 402–422.
Winstone, N., Davis, A., & De Bruyn, B. (2012). Developmental improvements
in perceptual restoration: Can young children reconstruct missing sounds in
noisy environments? Infant and Child Development, 21, 287–297.
Zekveld, A. A., Kramer, S. E., & Festen, J. M. (2011). Cognitive load during
speech perception in noise: The influence of age, hearing loss, and cognition
on the pupil response. Ear & Hearing, 32, 498–510.
Zwitserlood, P., & Schriefers, H. (1995). Effects of sensory information and
processing time in spoken-word recognition. Language and Cognitive
Processes, 10, 121–136.
Interactive Silences
73
The Volta Review, Volume 113(1), Spring 2013, 75–76
Book Review
Building Comprehension in Adolescents
By Linda H. Mason, Robert Reid, and Jessica L. Hagaman
Paul H. Brookes Publishing
Soft cover, 2012, $34.95, 272 pages
Due to improved brain imaging and an increase in data, the neurobiological
perspective on learning has evolved over the past three decades to embrace the
broad construct of executive functioning, or cognitive control. This refers to the deliberate top-down processes involved in conscious, goal-directed problem solving, grouped into two constellations of skills known as meta-cognition and self-regulation. Starting in infancy and developing across childhood, strong executive capacities facilitate the learning process, including the acquisition of good reading and writing skills. Historically, many adolescents with hearing loss have been identified as functionally illiterate. Likewise, data from the past two decades show that many children and adolescents with
hearing loss demonstrate weak executive capacities. In short, children and
adolescents with hearing loss are often struggling learners.
Mason, Reid, and Hagaman have co-authored a very practical book that
targets adolescents who are struggling learners. Their book, Building
Comprehension in Adolescents, aims to provide practitioners with an evidence-based instructional model that will strengthen the executive capacities of
adolescents as well as facilitate their reading comprehension and written
expression. This guidebook of systematically implemented strategies can help
students improve their vocabulary along with varied reading and writing
skills. Executive capacities, including attention, goal setting, inhibitory control,
and organization, can be considerably improved.
The chapters within this book are divided into four sections. The first section
presents the authors' Self-Regulated Strategy Development Model for
practitioners to implement. Readers will come to understand the framework
for effective student learning; that is, an evidence-based, systematic instructional approach that facilitates reading, writing, and the critical goal-directed problem-solving skills otherwise known as executive functioning.
The second section focuses on reading to learn, the third section on writing
to learn, and the fourth section on homework across academic content. Each of
these well-organized sections includes detailed lesson plans that provide overviews, objectives, suggested teacher scripts, explicit step-by-step instructions, detailed vignettes, and worksheets that can easily be copied for
individual or group use. A variety of self-monitoring checklists, goal charts,
self-instruction sheets, mnemonic devices, and visual clarifiers are made
available to students. Practitioners are directed to the effective use of group
collaboration and peer practice as well as the fading of instructional supports.
Furthermore, the authors present scaffolding practices on three levels:
content, task, and material; these are included across the chapters in sections
II through IV.
The authors provide strategies that enable students to exercise both internal and external control. They show practitioners how to implement person-centered supports by verbalizing the learning process and asking questions that facilitate self-reflection, encouraging self-directed talk. The authors also
show practitioners how to implement task modifications with the use of lexical
support, behaviorally stated problems and goals, task analysis instruction,
visual imagery, and graphic organizers. Research findings supportive of these
strategies are also cited at the end of appropriate chapters.
Although the authors explicitly targeted this book for practitioners serving
students in middle/junior and secondary/high school, practitioners working
with younger children can also benefit from this model, which embraces
effective instructional strategies regardless of content area. Itinerant and
resource practitioners as well as classroom teachers can use these thoughtfully
presented plans and strategies for the individual adolescents being served.
This sort of systematic instruction will clearly benefit struggling learners,
including those with hearing loss. Those practitioners who do not fully
understand executive functioning and how to facilitate those executive
capacities would do well to use this book as a guide in serving all students
with hearing loss. Practitioners implementing this model can enable students
with hearing loss to become independent, self-regulated learners. Regardless
of educational setting and how practitioners use it, this book should serve as an
excellent resource.
Ellen A. Rhoades, Ed.S., LSLS Cert. AVT, is an auditory-verbal consultant and
mentor whose services can be accessed through www.AuditoryVerbalTraining.com.
Correspondence concerning this article may be directed to Dr. Rhoades at
[email protected].
Information for Contributors to The Volta Review
The Volta Review is a professional, peer-reviewed journal inviting manuscripts
devoted to reporting scholarly findings that explore the development of
listening and spoken language by individuals with hearing loss. Its
readership includes teachers of students who have hearing loss; professionals
in the fields of education, speech, audiology, language, otology, medicine,
technology and psychology; parents of children who have hearing loss; and
adults who have hearing loss. Established in 1899, The Volta Review is the
official journal of the Alexander Graham Bell Association for the Deaf and
Hard of Hearing, an international nonprofit organization based in Washington, D.C., that is particularly interested in the communication abilities of people
with hearing loss. The journal is published three times annually, including
two regular issues and a special, single-topic monograph issue each year.
The Volta Review currently seeks manuscripts of empirically based studies
focusing on practical or conceptual issues that advance
knowledge relevant to the communication needs and abilities of people with
hearing loss. Group and single-subject designs are acceptable.
Manuscript Style and Submission Requirements
In general, manuscripts should conform to the conventions specified in the
Publication Manual of the American Psychological Association (APA) 6th ed. (2010)
with the exceptions and considerations given below.
Submission. A cover letter and one copy of a blinded manuscript and
accompanying figures should be submitted electronically to the Managing
Editor at [email protected].
Preparation. Please double-space all materials. Number pages consecutively
with the title page as page 1. The title page should include all authors’ names
and affiliations, regular mail and email addresses, telephone and fax
numbers for the corresponding author, and a running head. No author-identifying information should appear anywhere other than the title page
of the manuscript. Include an abstract of 100–150 words as page 2. Assemble
the rest of the manuscript in the following order, starting each part on a new
page: First and subsequent pages of the text; acknowledgements (include
citations of grant or contract support here); references; tables; figure captions;
and figures. Refer to Merriam-Webster’s Collegiate Dictionary, 10th Edition for
preferred spellings.
Length. Limitations on length of manuscripts are based on the type of
submission. The following page recommendations apply. Research papers
are subject to a page limitation of 35 pages including tables and figures.
Manuscripts exceeding the page limitations are occasionally accepted for
publication on a space-available basis.
References. All references should be closely checked in the text and
reference list to determine that dates and spellings are correct. References
must follow APA style and format.
Tables. Include each table on a separate page. Number tables consecutively
using Arabic numerals. Each table should be referred to in the text by its
number. Indicate where the tables should appear in consecutive numerical
order in the text, but do not insert tables into the text.
Figures. Each figure must be referred to in the main text in consecutive
numerical order using Arabic numbers. Non-graphic information should not
appear as figure artwork, but in the figure legend. Hand lettering is unacceptable, and standard abbreviations should be used (e.g., dB, not db or DB). Lettering or symbols appearing on the figure artwork should be as large
as possible so they will still be legible when reduced. Figure legends
(captions) must be typed double-spaced on a separate sheet of paper at the
end of the manuscript. Figure legends should not appear on the artwork.
If a table or figure has been previously published, it is the author’s
responsibility to obtain written permission to adapt or reprint the figure or
table from the copyright holder, even if it is the author's own previously published work; permission to reproduce it must be obtained from the publisher. This applies to any figure,
table or illustration or to direct quotes from another work. You will be required
to submit proof of permission to the Managing Editor. Please alert the Managing Editor to potential copyright permission issues when the manuscript is submitted.
(Photocopies of the agreement are required as proof.)
High-resolution figure files are necessary for final production if your article
is accepted for publication. Files can be submitted via email as .tiff, .jpg or .eps
files with a resolution of 300 dpi at final size. The Volta Review does not accept art that
is in color or downloaded from the Internet. Photocopy reproductions are not
acceptable as final printer's copy; however, photocopies should be submitted for review purposes.
Photographs. Photographs of special equipment or materials are often
desirable; however, photos of standard classroom or clinical apparatus are
not instructive and should not be included with the manuscript.
Footnotes. As a general policy, using footnotes within the text is
discouraged; however, in certain circumstances (such as for limited
clarification of terminology) footnotes can be unavoidable. In such cases,
use asterisks (*) as footnotes.
Terminology. To describe individuals’ hearing status, please use the phrase
"deaf or hard of hearing" or "with hearing loss" instead of "hearing impaired." In addition, please use "people first" language (i.e., "students who are deaf or hard of hearing" or "students with hearing loss" instead of "hearing-impaired students"). If authors choose not to follow this style,
provide a rationale for using the chosen terminology. The rationale may be
provided in the form of an author’s note or may be integrated into the text of
the manuscript. The journal reserves the right to change terminology for
readability purposes with the consent of the author.
Review. All manuscripts are subject to blind review by three or more
members of the Review Panel. No author-identifying information should
appear anywhere other than on the title page of the manuscript. The review
process takes three to four months to complete. Reviewers' comments are shared with the author(s).
Editing. The Editor, Associate Editors and the Managing Editor of the
journal will edit your manuscript to verify content and to enhance
readability, as well as for consistency of style and correctness of grammar,
spelling and punctuation. Authors will be given an opportunity to review
their edited manuscripts before they are set in type. Also, if the publication
schedule permits, authors will be sent page proofs of their typeset articles for
final review. Authors should mark only typesetter’s errors and/or answer
any questions directed to them by the Managing Editor. No copy changes
should be made at this time.
Use of Word Processing. Authors must submit electronic files of their
manuscript and accompanying figure and table files to the Managing Editor
once the article is approved for publication. Please use Microsoft Word only
and supply a hard copy of the figures and tables. You must submit your
manuscript via email to [email protected].
Transfer of Copyright. The revised copyright law, which went into effect in
January 1978, provides that from the time a manuscript is written, statutory
copyright is vested with the author(s). This copyright can be transferred only
by written agreement. Without copyright ownership, the Alexander Graham
Bell Association for the Deaf and Hard of Hearing cannot issue or
disseminate reprints, authorize copying by individuals and libraries, or
authorize indexing and abstracting services to use material from the journal.
Therefore, all authors whose articles have been accepted for publication in
The Volta Review are requested to transfer copyright of their articles to the
association before the articles are published.
Categorizing for Index. Authors are required to designate, in the cover letter
accompanying the manuscript, the primary audience to whom the
information is of most use and interest. When a manuscript is accepted for
publication, its corresponding author will be asked to fill out a more detailed
categorization form to be used in preparing an index published in the winter
issue of each year.
Reprints. At the time of publication of the journal, each contributing author
is sent three complimentary copies of the issue. Additional reprints of
published articles are available at cost to the author. These must be ordered
through the Periodicals Department, The Volta Review, 3417 Volta Place, NW,
Washington, DC 20007. Order forms are sent to authors with initial
complimentary copies. Authors are not permitted to sell reprints themselves
once copyright has been transferred.
Author’s Responsibility
The author guarantees, when signing a contract with a publisher, that the work is
original, that the author owns it, that no part of it has been previously
published, and that no other agreement to publish it or part of it is outstanding.
If a chapter or other significant part by the same author has been published
elsewhere, written permission to reprint it must be secured from the copyright
holder of the original publication and sent to the publisher.
It is the author’s responsibility to request any permission required for the
use of material owned by others. When the author has received all
permissions, the author should send them, or copies of them, to the publisher.
The author must provide accurate information regarding the source of any
such material in their work.
Permission for the use of such entities as poems, musical works, or
illustrations, even when no fee is charged, is normally granted only for the
first edition of a book. New editions, paperback reprints, serialization in a
periodical, and so forth, will require renewed permission from the original copyright holder.
The author is responsible for any fees charged by grantors of permission to
reproduce, unless other arrangements are made, in writing, with the publisher.
Fees paid for reproducing material, especially illustrations procured from a
picture agency, normally cover one-time use only.
Whether or not permission is required to use material that is not the author's own, the author should give the exact source of such material: in a footnote or internal reference in the text, in a source note to a table, or in a credit line with an
illustration. Where permission has been granted, the author should follow any
special wording stipulated.
Material Requiring Permission
Copyrighted Material: The author of an original book must have written
permission to use any copyrighted material that is complete in itself: short
story, essay, chapter from a book, etc. The author should also seek permission
to use more than one line of a short poem still in copyright or any words or
music of a popular song. No permission is required for quoting from works
in the public domain.
‘‘Fair Use’’: The ‘‘Fair Use Clause’’ generally allows copying without
permission from, or payment to, the copyright owner where the use is
reasonable and not harmful to the rights of the copyright owner.
Without prescribing precise rules to cover all situations, the clause refers to
‘‘purposes such as criticism, comment, news reporting, teaching (including
multiple copies for classroom use), scholarship, or research,’’ and sets out
four factors to be considered in determining whether or not a particular use
is fair:
The purpose and character of the use, including whether or not such use is
of commercial nature or is for nonprofit, educational purposes;
The amount and substantiality of the portion used in relation to the
copyrighted work as a whole;
The nature of the copyrighted work;
The effect of the use upon the value or potential market of the copyrighted
work.
The Alexander Graham Bell Association for the Deaf and Hard of Hearing
encourages the author to obtain written permission for any quotation of 150 or
more copyrighted words. The author may not copy any part of a copyrighted
work unless he or she uses quotation marks, indicates the source of the quotation, and, if the quotation is 150 or more words in length, obtains written permission from
the copyright holder.