
Speech Sound Disorders and Treatment Outcomes
Contents
Speech Sound Disorders and Treatment Outcomes: Introduction
Stimulability and Treatment Outcomes
Oral Motor Exercises and Treatment Outcomes
Target Selection and Treatment Outcomes
Computer Applications and Treatment Outcomes
Phonological Awareness and Treatment Outcomes
Nonlinear Phonology: Application and Outcomes Evaluation
Evidence-Based Practice
It is the position of the American Speech-Language-Hearing Association that audiologists and
speech-language pathologists incorporate the principles of evidence-based practice in clinical
decision making to provide high quality clinical care. The term evidence-based practice refers
to an approach in which current, high-quality research evidence is integrated with practitioner
expertise and client preferences and values into the process of making clinical decisions.
Participants are encouraged to actively seek and critically evaluate the evidence basis for clinical procedures presented in this and other educational programs.
Adopted by the Scientific and Professional Education Board
April, 2006
Please note: ASHA CEUs may not be earned more than once for the same program. If you completed this product when it was initially published in the Division newsletter, you are not eligible to
take it again. Call the Action Center at 800-498-2071 to check your eligibility.
Speech Sound Disorders and Treatment Outcomes
Peter Flipsen Jr., Guest Editor
Speech Sound Disorders and Treatment Outcomes: Introduction
The phrase evidence-based practice seems to be on everyone’s lips these days. Whether we are practicing speech-language pathologists or university researchers, we are now
all being asked to carefully examine what we do (or teach
our students to do) to be sure that there is rational evidence
that it actually works. As appealing as it might be, the retort
“I know that it works because I see changes happening” is
simply not sufficient anymore. While it is true that change
usually accompanies the work we do with our clients, can
we really be certain that the change was due to the specific
things that we did? Can we say that it wasn’t due to development or the normal recovery process or due to some other
influence? Without evidence from controlled studies we can
never be sure.
Do we have enough evidence yet to engage in truly evidence-based practice? The simple answer is no. There are
a lot of things that we do clinically that are founded simply
in our best guesses about what we think should work...but
we don’t actually have any evidence that these things work.
Fortunately, the situation is changing. As I hope you will see
in this forum, there is a body of evidence emerging on several
clinical fronts; the authors herein were approached because
they are among the group of clinicians and researchers who
have been asking the hard questions.
This forum covers a broad range of topics related to speech
sound disorders. The topics themselves were chosen because they represent what clinicians do to greater and lesser
extents. Although theories are discussed, the focus is on
some common clinical practices and on the evidence that
we have available to us about them.
The first two papers focus on long-standing areas of interest to
most speech-language pathologists, but are topics that have
also long been controversial. In his discussion of stimulability,
Tom Powell reminds us of the ongoing controversy about what
it represents and what we should be doing with the information
we gain from it. His conclusion that we should probably focus
on non-stimulable sounds may seem almost as controversial
to some as the topic itself.
Speaking of controversy, Greg Lof pulls no punches in his
discussion of using oral motor exercises to treat speech
sound errors. Having heard Dr. Lof discuss this topic at ASHA
Conventions and informally for the last few years, I felt certain he was the right person to tackle it here. His conclusion,
that there is no valid justification for their use, is sure to raise
many eyebrows. Like all the other papers in this forum, Dr.
Lof’s sticks to the evidence, and it is difficult to argue with
hard data.
The remaining four papers in this forum are likely less controversial, but are of interest because of their relevance and
timeliness. Lynn Williams addresses a question that is always
on the minds of clinicians: how one selects targets for intervention for children who present with multiple sound errors.
She reminds us that we need to focus on the function of the
sound in the child’s language system. She also notes that
intervention is likely to be most successful if we select what
she calls “non-proportional” contrasts. Finally, she introduces
us to her “distance metric” and describes a rationale for selecting specific target words to use in treatment, in the process
drawing on the literature from cognitive science.
Susan Rvachew’s paper on computer applications examines
some of the available digital solutions to speech sound assessment and intervention. Of particular interest to some
may be her brief discussion of the literature related to electropalatography, a technology long available but still not widely
applied. Her discussion of the effectiveness of all of these
technological solutions will, I hope, motivate more speech-language pathologists to try them out.
No discussion of speech sound disorders today would be
complete without dealing with phonological awareness. Heidi
Harbers takes an interesting approach to this issue. We know
that children with speech sound disorders are at increased
risk for reading failure, and we know that phonological awareness skills can help us predict reading outcomes. But isn’t
some of what we do in treating speech sound errors a form
of phonological awareness training? Dr. Harbers asks the
question: Does speech sound intervention indirectly improve
phonological awareness? Put another way, she is asking: Is it
necessary to teach phonological awareness to these children
directly or not? Dr. Harbers addresses this question head on,
but her paper will probably have its greatest appeal in the
series of very specific suggestions she makes for assessment
and intervention related to phonological awareness.
Finally, Barbara Bernhardt provides us with a glimpse into
her research program on the application of nonlinear phonological theory to clinical speech sound issues. Although
this theoretically rich topic requires the reader to pay very
close attention to detail to reap the full benefit, the evidence
Dr. Bernhardt presents for the effectiveness of using this approach is well worth the effort.
All of the authors in this forum have taken their task very seriously. They all demonstrate that there is much we do know
about what does and doesn’t work in treating speech sound
disorders. They also remind us of what is yet to be learned.
Stimulability and Treatment Outcomes
Thomas W. Powell
Department of Rehabilitation Sciences, L.S.U. Health Sciences Center
Shreveport LA
Virtually every speech-language pathologist has assessed
stimulability at one time or another. Procedures for stimulability testing are described routinely in textbooks on speech
sound disorders in children, and stimulability has been the
subject of scientific study for about 50 years (See Powell &
Miccio, 1996 for a review). Given the long history and popularity of the procedure, it may be somewhat surprising that
considerable controversy exists on how stimulability is to be
defined, measured, interpreted, and applied in the clinical setting. This article attempts to synthesize key research findings
regarding stimulability, with an emphasis on its relationship
to treatment outcomes.
Defining Stimulability
Stimulability assessment seeks to determine whether production of an erred sound is enhanced when elicitation conditions are modified (i.e., simplified). Given auditory and visual
models, if a child produces a target sound correctly when
linguistic demands are minimal (as in isolation or in CV, VCV,
or VC syllables), then the child is said to be stimulable for
that sound. Thus, “stimulability” refers to improvements in
performance under controlled imitative conditions vis-à-vis
non-imitative conditions.
Over the years, the construct of stimulability has evolved.
Most research published prior to 1970 operationally defined
stimulability as the percentage of improvement averaged
over all sounds in error (Sommers, 1983). It was viewed as a
generalized ability, reflecting the fact that some children profit
more from visual and auditory models than others.
Today, however, most speech-language pathologists view
stimulability as a sound-specific construct (Powell & Miccio,
1996). This viewpoint acknowledges that a child may improve
production of certain sounds differentially under imitative conditions (e.g., a child may be stimulable for target interdentals
// and //, but not for dorsals /k/ or /g/).
There has been much debate regarding the implications
of stimulability in terms of motor, perceptual, and linguistic
abilities. Most seem to agree that stimulability provides evidence of the structural and functional integrity of the speech
production mechanism under certain (limited) conditions
(Powell & Miccio, 1996). For this reason, stimulability data
are deemed of greatest interest and utility when the client’s
phonetic inventory is severely restricted. Accordingly, most of
the more recent research in this area has studied the role of
stimulability in treating speech disorders of preschool children.
Older children typically have fewer sounds in error and are
more likely to be stimulable (Lof, 1996).
The relationship between stimulability and perceptual skills
has been a topic of considerable debate and research. It
has been argued that stimulability entails a certain degree
of perceptual integrity. To replicate a sound that is not part of
one’s phonetic repertoire, one must perceive the signal (via
the auditory or visual modality) and recognize it as unique
(Powell & Miccio, 1996). This statement does not, however,
imply that children with high scores on perceptual tests will
necessarily do well on a stimulability task (or, indeed, the reverse). Instead, research to date indicates that performance
on perceptual tasks and stimulability tasks are independent
of one another (Lof, 1996; Rvachew, Rafaat, & Martin, 1999).
Lof (1996) also provided evidence that highly visible sounds
(e.g., []) are more likely to be stimulable than sounds where
the articulators are less visible (e.g., [k]). Some children with
auditory perceptual problems may exploit visual cues to
compensate for their auditory problems.
The linguistic implications of speech sound stimulability have
also been a source of debate. Dinnsen and Elbert (1984)
suggested that stimulability reflects some degree of linguistic
knowledge. Implicit in the production of a novel sound is the
recognition that the sound is independent and in contrast
with the habitual (incorrect) form. A similar viewpoint was
presented by Edwards, Fourakis, Beckman, and Fox (1999)
who noted, “...if the child who ‘substitutes’ [θ] for /s/ can be
stimulated to produce the errored sound (i.e., if the child can
make an imitative production of [s] in isolation that is perceived as [s] by the clinician), the child must at some level
‘know’ how to produce the feature [strident] that distinguishes
these two fricatives” (p. 171). Lof (1996), however, reported
that stimulability is unrelated to language skills. He argued
that stimulability provides information about phonetic (but not
phonemic) abilities and thus provides no information regarding
the child’s underlying (phonemic) representation.
Measuring Stimulability
Stimulability data are most useful when they are collected
for sounds that are missing from the child’s phonetic inventory—that is, sounds that were never produced by the child
during the collection of spontaneous speech samples or
standardized single-word articulation tests. In published reports, there has been diversity in the stimulus and response
paradigms that were used to measure stimulability. Some of
these variations will be reviewed here.
Stimuli. Stimulability assessment, by definition, involves imitation. Typically, the child is asked to look at the examiner’s
mouth and to listen as the examiner produces the target
sound in isolation and in syllables; thus, paired auditory
and visual stimuli are provided. An adaptation of Carter and
Buck’s (1958) nonsense syllable task has often been used to
assess stimulability (Farquhar, 1961). This procedure elicits
the target sound in isolation and in nine nonsense syllables.
For example, stimulability of target // would be evaluated with
the following items: /u:/, /i/, /ii/, /i/, æ/, /ææ/, / æ/, /a/,
/aa/, /a/. This procedure is time efficient, typically taking less
than one minute per phoneme to collect. Some examiners
augment this basic procedure by assessing production of the
target sound in words, phrases, and/or sentences.
By design, the basic stimulability paradigm involves presentation of a decontextualized stimulus; however, alternative
procedures have been described that employ stimuli that
are presented in the context of a storybook or play activity
(Hoffman & Norris, 2002; Tyler, 1996). These procedures
may be especially useful when assessing the stimulability
skills of toddlers or children with more generalized disorders
of language.
Responses. As noted above, multiple imitative responses
(typically 10) are elicited for each target sound. Credit is
given for responses that are consistent with the response
definition for that target sound, and a percentage of correct
productions can be computed. Some examiners allow two or
more attempts per target production, whereas others allow
only one.
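The probe just described (the target in isolation plus nine CV, VCV, and VC nonsense syllables, with responses scored as a percentage correct) can be sketched in a few lines of code. This is only a minimal illustration: the target sound /s/, the ASCII vowel spellings, and the scoring judgments below are hypothetical placeholders rather than any standardized protocol.

```python
# Minimal sketch of a stimulability probe: the target in isolation plus
# CV, VCV, and VC nonsense syllables, scored as percent correct.
# The target, vowel set, and judged responses are illustrative assumptions.

TARGET = "s"                 # hypothetical target consonant
VOWELS = ["i", "ae", "a"]    # three probe vowels (ASCII stand-ins for IPA)

def build_probe(target, vowels):
    """Return the target in isolation plus CV, VCV, and VC nonsense syllables."""
    items = [target]  # isolation
    for v in vowels:
        items += [target + v, v + target + v, v + target]
    return items

def percent_correct(responses):
    """responses: list of booleans, one per elicited imitation."""
    return 100.0 * sum(responses) / len(responses) if responses else 0.0

if __name__ == "__main__":
    probe = build_probe(TARGET, VOWELS)
    print(probe)  # ['s', 'si', 'isi', 'is', 'sae', 'aesae', 'aes', 'sa', 'asa', 'as']
    # Example scoring: the clinician judged 3 of the 10 imitations as on target.
    scored = [True, False, False, True, False, False, True, False, False, False]
    print(f"Stimulability for /{TARGET}/: {percent_correct(scored):.0f}% correct")
```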
Interpretation of Stimulability Data
Production of a sound during stimulability testing attests, to
some degree, to the child’s ability to perceive, to recognize
as different, and to produce the sound in question (Miccio,
2002). If the child is not stimulable for a sound, then one
might question the child’s motoric, perceptual, or linguistic
abilities. In addition, other conditions (such as poor attending skills or non-compliance) are likely to impact stimulability
scores negatively.
To date, few prospective research studies have examined the
relationship between sound-specific stimulability and speech
sound learning in preschool children (Miccio, Elbert, & Forrest,
1999; Powell, Elbert, & Dinnsen, 1991; Rvachew et al., 1999).
Despite variations in methodology, these studies collectively
suggest a strong, but imperfect, relationship between stimulability and sound learning. If a child is stimulable for a sound,
then that sound is likely to be added to the child’s phonetic
inventory, even without direct treatment on that sound (Miccio
et al., 1999; Powell et al., 1991). In contrast, if a child is not
stimulable for a target sound, then the likelihood of short-term
gains is poor (Miccio et al., 1999; Powell et al., 1991; Rvachew
et al., 1999). The prognosis for short-term normalization of
sounds that are not stimulable appears much poorer than for
sounds that are stimulable.
Application of Stimulability Data
In the clinical setting, stimulability data have clear utility.
Stimulability appears to be a requisite, although not sufficient,
factor in the generalization of correct sound production. Many
treatment approaches encourage the speech-language
pathologist to focus treatment first on sounds for which the
child is stimulable. While gains in the production of stimulable
sounds are anticipated following the initiation of treatment, the
available research suggests that those gains will be limited in
nature and are unlikely to affect other aspects of the child’s
phonological system. Indeed, stimulable sounds are likely
(but not guaranteed) to improve regardless of what is taught
(Powell et al., 1991).
To maximize treatment effects, speech-language pathologists
should consider addressing sounds that are not stimulable
early in their treatment program (Powell & Miccio, 1996).
Sounds that are not stimulable are unlikely to change without
direct treatment, and the response evocation phase of treatment is likely to be extended. One strategy that can be used
by speech-language pathologists is to encourage exploratory
sound productions and provide phonetic placement or other
types of cues to effect stimulability skills (Rvachew et al.,
1999). Once stimulability has been achieved, generalization
is more likely to be observed.
One argument against giving priority to treatment targets that
are not stimulable has been the presumed greater potential
for frustration on the part of the client (Bleile, 1994, 2002).
To minimize frustration, speech-language pathologists may
find less directive sound play activities to be an enjoyable
way to foster production of sounds that were not stimulable
(Miccio, 2002; Miccio & Elbert, 1996; Powell, 1996). The
speech-language pathologist can provide the nurturing and
the supportive environment necessary for the child to expand
his or her phonetic repertoire.
Conclusion
Despite methodological and theoretical differences, research
findings regarding the clinical utility of stimulability assessment are quite consistent: sounds that are not stimulable have
a poorer short-term prognosis than those that are stimulable.
Consequently, treatment outcomes are likely to be enhanced
when speech-language pathologists use their skills to address
production of sounds that are not stimulable; the stimulable
sounds are likely to improve, even if they are not targeted
directly for treatment.
Although robust findings regarding stimulability and speech
sound learning are beginning to accrue, there remain many
unanswered questions. Stimulability assessment procedures
clearly need to be refined and standardized. Predictive validity
studies would be especially useful in this regard.
Systematic replications are needed to extend our knowledge
of interactions between stimulability and speech sound learning to include children of different ages, different diagnoses,
and different backgrounds. For example, the implications of
stimulability for childhood apraxia of speech (CAS) have not
been adequately investigated (Powell, 1996).
The utility of stimulability assessment with children speaking
languages other than English also deserves additional study
(Goldstein, 1996). The relationship between stimulability and
phonological differences across languages in bilingual and
polyglot speakers would be an especially interesting topic
for study.
Stimulability may also provide a useful metric for helping
to distinguish dialectal differences and speech production
disorders. For example, [] may be excluded from the phonetic inventories of speakers of some non-standard dialects
of North American English (e.g., African American English
Vernacular, Cajun-influenced English). Stimulability for production of // (especially at at the phrase or sentence level)
might provide some evidence for the integrity of the motoric
systems. If a child were not stimulable for //, then one might
wish to evaluate motoric and perceptual factors in greater
detail before attributing the sound’s absence to a dialectal difference. Clearly, there is a need for much additional research
on this topic to help define best practice.
In a recent forum, approaches to the assessment of a preschool child with poor intelligibility were explored in a series
of articles. Although the authors were chosen to represent a
wide range of approaches, it is noteworthy that all included
some type of stimulability task as a part of the assessment
approach (Williams, 2002). Stimulability assessment has had
widespread clinical implementation for many years and appears to provide a great deal of clinically relevant information
for a modest investment of assessment time.
References
Bleile, K. M. (1994). Manual of articulation and phonological disorders: Infancy through adulthood. San Diego, CA: Singular.
Bleile, K. (2002). Evaluating articulation and phonological disorders when the clock is running. American Journal of Speech-Language Pathology, 11, 243-249.
Carter, E. T., & Buck, M. W. (1958). Prognostic testing for functional articulation disorders among children in the first grade. Journal of Speech and Hearing Disorders, 23, 124-133.
Dinnsen, D. A., & Elbert, M. (1984). On the relationship between phonology and learning. In M. Elbert, D. A. Dinnsen, & G. Weismer (Eds.), Phonological theory and the misarticulating child (ASHA Monograph No. 22, pp. 59-68). Rockville, MD: ASHA.
Edwards, J., Fourakis, M., Beckman, M. E., & Fox, R. A. (1999). Characterizing knowledge deficits in phonological disorders. Journal of Speech, Language, and Hearing Research, 42, 169-186.
Farquhar, M. S. (1961). Prognostic value of imitative and auditory discrimination tests. Journal of Speech and Hearing Disorders, 26, 342-347.
Goldstein, B. A. (1996). The role of stimulability in the assessment and treatment of Spanish-speaking children. Journal of Communication Disorders, 29, 299-314.
Hoffman, P. R., & Norris, J. A. (2002). Phonological assessment as an integral part of language assessment. American Journal of Speech-Language Pathology, 11, 230-235.
Lof, G. L. (1996). Factors associated with speech-sound stimulability. Journal of Communication Disorders, 29, 255-278.
Miccio, A. W. (2002). Clinical problem solving: Assessment of phonological disorders. American Journal of Speech-Language Pathology, 11, 221-229.
Miccio, A. W., & Elbert, M. (1996). Enhancing stimulability: A treatment program. Journal of Communication Disorders, 29, 335-351.
Miccio, A. W., Elbert, M., & Forrest, K. (1999). The relationship between stimulability and phonological acquisition in children with normally developing and disordered phonologies. American Journal of Speech-Language Pathology, 8, 347-363.
Powell, T. W. (1996). Stimulability considerations in the phonological treatment of a child with a persistent disorder of speech-sound production. Journal of Communication Disorders, 29, 315-333.
Powell, T. W., Elbert, M., & Dinnsen, D. A. (1991). Stimulability as a factor in the phonological generalization of misarticulating preschool children. Journal of Speech and Hearing Research, 34, 1318-1328.
Powell, T. W., & Miccio, A. W. (1996). Stimulability: A useful clinical tool. Journal of Communication Disorders, 29, 237-278.
Rvachew, S., Rafaat, S., & Martin, M. (1999). Stimulability, speech perception skills, and the treatment of phonological disorders. American Journal of Speech-Language Pathology, 8, 33-43.
Sommers, R. K. (1983). Articulation disorders. Englewood Cliffs, NJ: Prentice-Hall.
Tyler, A. A. (1996). Assessing stimulability in toddlers. Journal of Communication Disorders, 29, 279-297.
Williams, A. L. (2002). Epilogue: Perspectives in the assessment of children’s speech. American Journal of Speech-Language Pathology, 11, 259-263.
Oral Motor Exercises and Treatment Outcomes
Gregory L. Lof
Graduate Program in Communication Sciences and Disorders, MGH Institute of Health Professions
Boston
Some speech-language pathologists use oral motor exercises
because they believe these exercises will facilitate speech
sound acquisition in children who have articulation/phonological disorders and/or developmental apraxia of speech (DAS).
They also use them to stimulate language development in
those with late onset of speech. The use of these exercises
has been the subject of considerable debate because of
theoretical, empirical, and/or philosophical reasons. This
paper will:
1. Describe the functioning of the oral structures during
speech and nonspeech activities,
2. Examine the value of strengthening exercises for
speech,
3. Evaluate whether non-speech tasks are relevant to improving speech, and
4. Review treatment studies that have used oral motor
exercises for speech sound production improvements.
Same Structures, Different Functions?
One assumption made by those who advocate the use of
oral motor exercises is that the same oral structures used
for speech perform the same way for nonspeech gestures.
There is considerable evidence, however, that the organization of these movements within the nervous system is not the
same. Love (2000) stated that, “…speech movement control
was mediated at a different level in the nervous system than
was nonspeech movement control” (p. 142). A recent study
by Watson and Montgomery (2002) supports this. These investigators examined neuronal activity within the subthalamic
nucleus and found differences in activation for speech and
nonspeech activity.
Differences in neural organization between speech and
nonspeech activities are most easily illustrated by the many
examples of “task specificity” in which the same structures
are used for both nonspeech activities and for speech, but
they function differently for each specific task. For example,
it is well known that there are individuals with swallowing
difficulties in the absence of speech problems; and, just
as likely, there are those who have speech difficulties without swallowing problems. This suggests that the common
structures function differently for each task (Martin, 1991).
Providing therapy for the structures for swallowing probably
will not bring about changes in speech functioning (Pauloski,
Rademaker, Logemann, & Colangelo, 1998) because of the
neuronal differences for nonspeech and speech tasks (Hardy,
1983; Love, Hagerman, & Tiami, 1980).
Additional evidence of task specificity comes from patients
who have had some form of brain damage (e.g., CVA). It is
not uncommon for some of these patients to have oral apraxia
(a.k.a., nonverbal oral apraxia), but no problem with speech
(i.e., no verbal apraxia). This can only happen if the different
tasks were mediated at different locations in the brain (Duffy,
1995).
The literature on cleft palate also provides evidence of task
specificity. Many studies (e.g., Calnan & Renfrew, 1961; McWilliams & Bradley, 1965; Powers & Starr, 1974) document
that blowing exercises can aid in velopharyngeal closure
during other blowing tasks, but that this closure is not maintained for speaking; these patients still produce nasalized
speech. Again, the same structures are used (i.e., the velopharyngeal complex), but they function differently depending
on the task.
One final example comes from the infant chewing literature.
Moore and his colleagues (e.g., Moore & Ruark, 1996; Moore,
Smith & Ringel, 1988) report that for babbling, the mandibular
muscle activation patterns “…and the coordinative organization for speech and nonspeech behaviors is task-specific and
distinct” (Moore & Ruark, 1996; p. 1045).
Due to task specificity, therapy targeting the oral structures
with nonspeech activities would appear to be of questionable value. Although the identical structures are used, these
structures function differently for speech and for non-speech
activities, precluding the transfer of skills.
Muscle Strengthening?
Oral motor exercises are often justified because they purport
to increase the strength of the articulators. But this leads to
two questions: How much strength is necessary for speaking?
Do these exercises actually increase strength?
It turns out that very little strength is needed for speech (Forrest, 2002; Love, 2000). For example, the lip muscle force for
speaking is about 10-20% of the maximal lip force capabilities
(Barlow & Abbs, 1983; Forrest, 2002). For the jaw, similar low
force levels (i.e., 11%-15%) are used during speaking when
compared to the maximum amount of force available (Forrest,
2002). It also appears to make little difference if the children
have speech sound difficulties or not. For lingual pressure
against the alveolar ridge, Robin, Somodi, and Luschei (1991)
found there was no difference in the force generated by
children with DAS and typically developing children without
speech sound problems. Taken as a whole, strength is probably not all that important for speaking.
Even if strength were an issue for speaking, the question
remains whether oral motor exercises will actually strengthen
the articulators. A non-speaking example may help illustrate
this point. Weight training to build muscles (e.g., the biceps)
involves building strength and mass (Love, 2000). This entails multiple repetitions at maximum potential against the
resistance that the weights offer until no more repetitions
can be achieved (i.e., “lifting to failure”). After a few minutes
of rest, the process is repeated. Relative to speech and
oral motor exercises, it is doubtful if any of the exercises
being used follows this muscle-strengthening standard. For
example, are repetitions of “tongue wagging” ever done to
failure? How many minutes of wagging are actually used in
clinical practice? In reality, probably only a very few wags
are actually performed. In addition, these waggings are usually not conducted against resistance (Duffy, 1995). Lacking
resistance, the process becomes analogous to curling the
arm toward the shoulder multiple times without anything in
the hand. This simple repeated arm movement will not build
up bicep strength, any more than tongue wagging will build
up tongue strength.
Agility and range of movement are probably more important
for speaking than is strength (Duffy, 1995). What speakers
need is the ability to move the articulators in rapid and ballistic ways, movements that not only require little strength, but
skills that are not gained through strengthening exercises.
Duffy and others (e.g., Yorkston, Beukelman, & Bell, 1986)
report that articulator strengthening may be possible, but is
probably unhelpful even for individuals with dysarthria, the
clients that one would assume would benefit the most from
it. (For a comprehensive review of nonspeech oral exercises
for dysarthria, see Hodge, 2002.) Recall also the previous
example of velopharyngeal strengthening where a stronger,
more compliant closure does not help with correcting nasalized speech because of lack of task specificity.
In sum, oral motor exercises probably do not actually increase the strength of the articulators. But if they do increase
strength, they most likely will not help with speech production
because of their lack of specificity for speaking.
Relevance to Speech?
The use of nonspeech activities to improve speech also raises
the question of their relevance. Data on neural control show
that relevant behaviors must be used in order for change to
occur (Forrest, 2002). The context of the activity is crucial.
For sensory motor stimulation to improve articulation, the
stimulation must be done with relevant behaviors (Forrest,
1998), with the goal clearly in mind.
Oral motor exercises lack relevance to speaking, because
they are typically “disintegrated” from the goal of talking;
talking is a highly integrated task. To understand how disintegration of an integrated task is not appropriate and how the
final goal must be clearly established, let’s examine another
exercise analogy, this time shooting a free throw in basketball
(Weismer, 1996). In preparation to shoot, the player’s knees
bend, her shoulders drop slightly, and she rocks slowly onto
her toes. She then springs upwards from her toes, using her
knees like springs. As her jump progresses upward using
her legs, her arms that were against her chest, now fluidly
move above her head, with one arm pushing the ball forward.
These are just a small number of the steps that are actually
used to effectively propel the ball toward the basket. To help
learn to shoot, no coach would ever segment this myriad of
muscle movements and train just one movement separate
from the other movements. There would be no “exercises”
that disintegrated the leg movements, so that there is isolated
practice on just the heel elevation, or just the knee bends,
or just the slow rocking onto the toes. The many leg motions are not separated from the shoulder or arm motions.
Shooting the basketball involves an integration of many fluid
motions that must be taught as a whole, not separated into
bizarre subparts. For training to be effective, there cannot be
disintegration of the muscle movements that need to occur in
smooth concert with each other (Forrest, 2002).
Unlike the coach training the basketball player, when oral
motor exercises are used, there is an immediate disintegration of the highly integrated task of speaking, breaking it into
smaller sequences that are not relevant to the end goal of
speaking. It is doubtful that multiple repetitions of lip puckering
will help children produce the lip rounding that accompanies
the /o/ vowel any more than practicing the wrist movement
for the forward thrust will aid in better free throw shooting.
All highly integrated tasks must be taught as a whole, not as
isolated parts.
One other basketball example deals with the goal of practice
(Weismer, 1996). A coach would never ask players to practice
flapping their empty hands to simulate dribbling a basketball.
Using an imaginary ball would not be effective in teaching
how to actually bounce a real ball. Likewise, practicing biting the lower lip is unlikely to be effective in producing an /f/
sound. The lip-biting movement is not related to the end goal
of speaking. And there is no relevance to the end product of
speaking by using an exercise of tongue wagging, because
there are no speech sounds that require tongue wagging.
Typical articulation therapy involves a focus on production
of individual speech sounds; although some researchers go
beyond the training of individual speech sounds and advocate
training even larger end products during the initial stages
of therapy. Ingram and Ingram (2001) offer an innovative
perspective on how whole words should be taught, with the
individual sounds never separated from the word. They believe
that there should be emphasis on training the whole, not the
parts, because the word is the end goal that is relevant to
communication. The concept of the lack of relevancy created
by training parts of the whole is best summarized by Forrest
(2002) who stated, “…part training is not a cost effective
or time-effective means of enhancing the development of
speech” (p. 19).
The Clinical Evidence?
Theoretical and philosophical objections aside, the question
remains as to whether or not such exercises work in practice.
Unfortunately, limited efficacy research exists on the use of
oral motor exercises to bring about speech sound change.
However, a review of the few available studies overwhelmingly
demonstrates that such exercises do not have an impact on
improving speech sound production (Lof, 2002).
One of the earliest studies of oral motor exercise (for /s/
misproduction and tongue thrusting) was conducted by Christensen and Hanson (1981) who treated 10 children aged
5;8 to 6;9. For 14 weeks, half of the children received only
articulation therapy while the other half received articulation
treatment along with therapy that used neuromuscular facilitation techniques. They found that both groups made equal
speech improvements. This means that these exercises did
not provide children with any therapeutic advantage leading
to better speech sound production. Interestingly, the exercises
were effective in remediating tongue-thrusting behaviors during swallowing (probably due to task specificity).
Colone and Forrest (2000; Forrest & Peabody, 2003) conducted a well-controlled single subject treatment study using
two 8;11-year-old monozygotic twin boys with similar phonetic
inventories. They provided oral motor exercise treatment to
one child and a phonological treatment to the other child for
seven sessions. They report that oral motor exercises were
not useful in improving speech sound production skills, but a
phonological approach was effective for positive changes in
the treated sound and for generalization to untreated words
and word positions. Similar learning patterns then occurred
when using the same phonological approach for the child who
had previously been administered the oral motor approach.
Occhino and McCann (2001) used an alternating treatment
single-subject research design to evaluate the effectiveness of
oral motor training on speech production for a 5-year-old boy.
The treatment systematically alternated between oral motor
exercises (e.g., blowing, brushing, sticky food chewing, etc.)
and an articulation approach to sound correction. They found
that the exercises did not help the child produce more intelligible speech and were not even beneficial in
changing any of the child’s oral motor skills.
Abrahamsen and Flack (2002) also implemented a single
subject multiple baseline design for a 4-year-old child with
suspected DAS. They provided 10 hours of individual treatment using blowing, licking, and oral stimulation techniques.
There was no evidence of effectiveness of these oral motor
treatments for changing speech sound production.
Only one study (Fields & Polmanteer, 2002) has shown that
oral motor exercises were effective in improving speech sound
production skills in children. Eight 3- to 6-year-old children
were randomly assigned to one of two groups: four children
received 10 minutes of oral motor treatment and 10 minutes
of speech therapy, and four children received 20 minutes
of only speech therapy. Fewer errors at the end of 6 weeks
of treatment were reported for the children who received
the combination of treatments. However, because of many
methodological and statistical issues, the merit of this study
must be questioned. In particular, the severity and gender
distribution of the two groups did not appear to be equal;
the group who received speech therapy only appeared to
be more severely involved. As well, the treated sounds and
the equivalency of the sounds being treated between groups
were not reported.
Conclusion
The above discussion clearly indicates that there is little, if
any, theoretical, philosophical, or clinical justification for using oral motor exercises to improve speech sound production
skills. It is puzzling why clinicians continue to use these exercises. It is especially puzzling why they would use them with
the population that is often mentioned with these exercises,
children with DAS. By definition, children with DAS have adequate oral structure movements for nonspeech activities,
but not for volitional speech (Caruso & Strand, 1999). These
points were emphasized by Davis and Velleman (2000) who
stated, “…there is presently no research available to support
the efficacy of oral-motor therapy for improvement of speech
production skills. Thus, it is appropriate to work with children
with DAS on nonspeech oral-motor skills themselves, but
improvement in speech should not be expected as a result”
(p. 187).
Oral motor exercises are currently widely used in clinical
practice (Pannbacker & Lass, 2002). Lacking evidence for
the effectiveness of these exercises, however, clinicians often resort to “because it works” observations (Kamhi, 1999).
This is what happened when speech-language pathologists
wrongfully embraced facilitated communication as a treatment
approach. Many clinicians observed what they thought were
treatment effects, but when put to scientific scrutiny, these
effects had nothing to do with treatment (Shane, 1994).
Gierut (1998) presents an excellent review of phonological/
articulation treatment efficacy. She reports on effective treatment approaches for children with speech sound disorders,
but none of these approaches include the use of oral motor
exercises. Until treatment outcome data demonstrate the validity of nonspeech oral motor exercises, clinicians should be
very cautious in their application and especially cautious about
making claims about treatment efficacy [Ed.: See also articles
by Williams, Rvachew, and Bernhardt in this manual].
References
Abrahamsen, E. P., & Flack, L. (2002, November). Do sensory and motor techniques improve accurate phoneme production? Paper presented at the annual ASHA Convention, Atlanta, GA.
Barlow, S. M., & Abbs, J. H. (1983). Force transducers for the evaluation of labial, lingual and mandibular function in dysarthria. Journal of Speech and Hearing Research, 26, 616-621.
Calnan, J., & Renfrew, C. E. (1961). Blowing tests and speech. British Journal of Plastic Surgery, 14, 340-346.
Caruso, A. J., & Strand, E. A. (1999). Clinical management of motor speech disorders in children. New York: Thieme.
Christensen, M., & Hanson, M. (1981). An investigation of the efficacy of oral myofunctional therapy as a precursor to articulation therapy for pre-first-grade children. Journal of Speech and Hearing Disorders, 46, 160-167.
Colone, E., & Forrest, K. (2000, November). Comparison of treatment efficacy for persistent speech sound disorders. Poster presented at the annual ASHA Convention, Washington, DC.
Davis, B. L., & Velleman, S. L. (2000). Differential diagnosis and treatment of developmental apraxia of speech in infants and toddlers. Infant-Toddler Intervention, 10, 177-192.
Duffy, J. R. (1995). Motor speech disorders: Substrates, differential diagnosis, and management. St. Louis: Mosby.
Fields, D., & Polmanteer, K. (2002, November). Effectiveness of oral motor techniques in articulation and phonology treatment. Paper presented at the annual ASHA Convention, Atlanta, GA.
Forrest, K. (1998, November). Non-speech oral-motor exercises: Is it part of the solution? Paper presented at the annual ASHA Convention, San Antonio, TX.
Forrest, K. (2002). Are oral-motor exercises useful in the treatment of phonological/articulatory disorders? Seminars in Speech and Language, 23(1), 15-25.
Forrest, K., & Peabody, E. (2003). Comparison of treatment efficacy in childhood apraxia of speech. Manuscript submitted for publication.
Gierut, J. A. (1998). Treatment efficacy: Functional phonological disorders in children. Journal of Speech, Language, and Hearing Research, 41, S85-S100.
Hardy, J. (1983). Cerebral palsy. Englewood Cliffs, NJ: Prentice Hall.
Hodge, M. M. (2002). Nonspeech oral motor treatment approaches for dysarthria: Perspectives on a controversial clinical practice. Perspectives on Neurophysiology and Neurogenic Speech and Language Disorders, 12(4), 22-28.
Ingram, D., & Ingram, K. D. (2001). A whole-word approach to phonological analysis and intervention. Language, Speech, and Hearing Services in Schools, 32, 271-283.
Kamhi, A. G. (1999). To use or not to use: Factors that influence the selection of new treatment approaches. Language, Speech, and Hearing Services in Schools, 30, 92-98.
Lof, G. L. (2002). Two comments on this assessment series. American Journal of Speech-Language Pathology, 11, 255-256.
Love, R. J. (2000). Childhood motor speech disability (2nd ed.). Boston: Allyn & Bacon.
Love, R. J., Hagerman, E. L., & Tiami, E. G. (1980). Speech performance, dysphagia, and oral reflexes in cerebral palsy. Journal of Speech and Hearing Disorders, 45, 59-75.
Martin, R. E. (1991). A comparison of lingual movement in swallowing and speech production. Unpublished doctoral dissertation, University of Wisconsin-Madison.
McWilliams, B. J., & Bradley, D. P. (1965). Ratings of velopharyngeal closure during blowing and speech. Cleft Palate Journal, 2, 46-55.
Moore, C., & Ruark, J. (1996). Does speech emerge from earlier appearing oral motor behaviors? Journal of Speech and Hearing Research, 39, 1034-1047.
Moore, C., Smith, A., & Ringel, R. L. (1988). Behavior specific organization of activity in human jaw muscles. Journal of Speech and Hearing Research, 31, 670-680.
Occhino, C., & McCann, J. (2001, November). Do oral motor exercises affect articulation? Paper presented at the annual ASHA Convention, New Orleans, LA.
Pannbacker, M. D., & Lass, N. (2002, November). Use of oral motor treatment in speech-language pathology. Paper presented at the annual ASHA Convention, Atlanta, GA.
Pauloski, B. R., Rademaker, A. W., Logemann, J. A., & Colangelo, L. A. (1998). Speech and swallowing in irradiated and nonirradiated postsurgical oral cancer patients. Otolaryngology Head and Neck Surgery, 118, 616-624.
Powers, G. L., & Starr, C. D. (1974). The effects of muscle exercises on velopharyngeal gap and nasality. Cleft Palate Journal, 11, 28-35.
Robin, D. A., Somodi, L. B., & Luschei, E. S. (1991). Measurement of tongue strength and endurance in normal and articulation disordered subjects. In C. Moore, K. Yorkston, & D. Beukelman (Eds.), Dysarthria and apraxia of speech: Perspectives on management (pp. 173-184). Baltimore: Brookes.
Shane, H. C. (1994). Facilitative communication: The clinical and social phenomenon. San Diego: Singular.
Watson, P. J., & Montgomery, E. B. (2002, November). The relationship of neuronal activity and speech. Paper presented at the annual ASHA Convention, Atlanta, GA.
Weismer, G. (1996). Assessment of non-speech gestures in speech-language pathology: A critical review [Videotape Recording Teleround 35]. Tucson, AZ: National Center for Neurologic Communication Disorders at the University of Arizona.
Yorkston, K. M., Beukelman, D. R., & Bell, K. R. (1986). Clinical management of dysarthric speakers. Boston: Little, Brown.
Target Selection and Treatment Outcomes
A. Lynn Williams
Department of Communicative Disorders, East Tennessee State University
Johnson City, TN
Recent research has demonstrated that target selection is
an important link between phonological assessment and
intervention. It is a significant variable in treatment efficacy
because, as suggested by Camarata and Nelson (1992),
acquisition efficiency is predicated, at least in part, on the selection
of targets that are addressed in intervention.
Typically, speech-language pathologists have relied on phonetic factors that were based on developmental norms and/or
stimulability. Specifically, those who adhered to a traditional
approach to target selection chose sounds that were stimulable and early developing. This traditional approach to target
selection was based on the assumption that earlier, stimulable
sounds were easier to produce and followed a developmental
sequence of acquisition.
New Perspectives on Old Procedures:
Phonological Complexity
Currently, new methods of target selection examine the role
that phonological complexity has on treatment outcomes.
These recent methods confront clinicians with new perspectives on old procedures that lead to the following questions
in selecting treatment targets:
• Earlier or later developing sounds?
• Stimulable or non-stimulable sounds?
• Are sounds consistently or inconsistently absent?
• One target or more than one target sound?
• Targets from the same or different classes?
• Clusters or singletons?
In a “non-traditional” approach to target selection, it is recommended that speech-language pathologists choose sounds
that are non-stimulable, later-developing, phonetically more
complex, linguistically marked, and that represent least phonological knowledge. Several studies (Gierut, Elbert, & Dinnsen,
1987; Gierut, Morrisette, Hughes, & Rowland, 1996; Miccio,
Elbert, & Forrest, 1999; Powell, Elbert, & Dinnsen, 1991; Tyler
& Figurski, 1994) have reported that teaching more complex
sounds results in greater overall change in the child’s phonological system. It should be noted, however, that some of the
findings related to greater phonological complexity have not
been supported in other studies. See Rvachew and Nowak
(2001) and Rvachew, Rafaat, and Martin (1999) for alternative findings.
A closer examination of the rationales underlying the traditional and non-traditional approaches to target selection is
provided in Table 1.
Notice that the rationale provided for the target selection
factors within the non-traditional approach reflects the shift
in treatment outcomes from the learnability of the sound to
the phonological restructuring of the system. The focus of
treatment outcomes within the newer methods of target selection is on system-wide change, rather than sound learning.
Consequently, the rationales provided under the traditional
approach involving phonologically less complex targets are
still valid when ease of learning an individual sound is considered. This change in treatment outcomes from sounds to
system is a central difference between the traditional and
non-traditional approaches to target selection.
Table 1. Rationales underlying traditional and non-traditional approaches to target selection.

Selection Factor: Stimulability
  Traditional Approach: Select sounds that are stimulable. Rationale: Sounds that are stimulable are easier to learn (Hodson & Paden, 1991; Winitz, 1975).
  Non-Traditional Approach: Select sounds that are not stimulable. Rationale: Stimulable sounds will emerge without direct intervention (Miccio et al., 1999).

Selection Factor: Developmental Norms
  Traditional Approach: Select early developing sounds. Rationale: Early developing sounds are acquired first (Khan & Lewis, 1990; Shriberg & Kwiatkowski, 1982).
  Non-Traditional Approach: Select later developing sounds. Rationale: Training later developing sounds will result in greater system-wide change (Gierut et al., 1996).

Selection Factor: Consistency
  Traditional Approach: Select sounds that are inconsistently produced in error. Rationale: Variability may be an important indicator of flexibility, change, and potential growth (Forrest et al., 1994; Tyler & Saxman, 1991).
  Non-Traditional Approach: Select sounds that are consistent in their error productions. Rationale: Consistent errors represent stable underlying representations, which will result in across-the-board change (Forrest et al., 2000).

Selection Factor: Knowledge
  Traditional Approach: Select sounds for which the child has most knowledge. Rationale: Sounds for which the child has some knowledge will be easier to learn.
  Non-Traditional Approach: Select sounds for which the child has least knowledge. Rationale: Training least knowledge results in greater system-wide change (Gierut et al., 1987).
Additional Factors in Target Selection
In addition to phonological complexity, there are other factors that can be considered in selecting treatment targets.
Although these factors can be considered jointly or separately,
they represent opposite ends of a spectrum: One factor
involves linguistic principles that are universal to all human
languages; the other factor is specific to the phonological
organization of each child’s sound system. Each of these
factors is discussed below.
Markedness/Implicational Relationships
Markedness represents implicational relationships among
classes of sounds within natural languages of the world.
Specifically, a marked property of a language is a feature whose presence in that language necessarily implies the presence of another (unmarked) feature. For example, there are languages that
have both stops and fricatives, and there are languages that
have stops without fricatives, but there are no languages
that have fricatives without stops. Therefore, fricatives are a
marked class of sounds in which the presence of fricatives
necessarily implies the presence of stops in a given language.
Other marked classes of sounds include the following:
• Voiced implies voiceless consonants
• Affricates imply fricatives
• Clusters imply singletons
The Target Selection Principle is: Treat marked properties in order to facilitate the acquisition of unmarked
aspects.
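Because the implicational relationships above chain together (affricates imply fricatives, which in turn imply stops), they can be written down as a small lookup that predicts which unmarked classes should follow without direct treatment once a marked class is targeted. The sketch below is only an illustrative encoding of the relationships listed in the text, not a published clinical tool; the class labels are simplified assumptions.

```python
# Minimal sketch of the implicational relationships listed above, under the
# simplifying assumption that each marked class implies exactly the unmarked
# classes named in the text. Class labels are illustrative, not a formal
# feature system.

IMPLICATIONS = {
    "fricatives": ["stops"],
    "voiced obstruents": ["voiceless obstruents"],
    "affricates": ["fricatives"],
    "clusters": ["singletons"],
}

def implied_classes(treated_class):
    """Return every class predicted to follow (directly or transitively)
    if the marked class is treated and acquired."""
    predicted, frontier = set(), [treated_class]
    while frontier:
        current = frontier.pop()
        for unmarked in IMPLICATIONS.get(current, []):
            if unmarked not in predicted:
                predicted.add(unmarked)
                frontier.append(unmarked)
    return predicted

if __name__ == "__main__":
    # Treating affricates predicts fricatives and, transitively, stops.
    print(implied_classes("affricates"))  # {'fricatives', 'stops'}
```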
Systemic Factors
Williams (2000a, 2000b) recently described a systemic
approach to target sound selection as part of the multiple
oppositions intervention approach. The systemic approach
is based on the function of the sound in the child’s rule set
and its potential for having the greatest impact on phonological restructuring. The function of a sound is a concept that
assumes that the importance of target sounds is broader
than the characteristics of the sound itself. Sounds can be
characterized as early or later developing, stimulable or non-stimulable, most or least knowledge. However, the function of a sound depends on the role it plays in a particular child’s unique phonological system and, therefore, will vary from
child to child. A sound might function broadly as an obstruent, continuant, anterior non-continuant, voiceless fricative,
and so forth, depending on the child’s unique phonological
organization that was constructed to compensate for a limited
sound system relative to the larger ambient system. Thus, the
function of a particular sound is independent of the characteristics of a sound.
This approach to target selection is based on three premises:
1. Knowledge of the child’s unique phonologic organization
relative to the adult sound system;
2. Selection of targets that will focus the new information
for maximal restructuring of the child’s original organization; and
3. Selection of specific targets that are based on a distance
metric (maximal distinction and maximal classification).
Maximal distinction is the selection of target sounds that are
maximally distinct from the child’s error in terms of place,
voice, and manner. Maximal classification involves selection
of sound members from different manner classes, different
places of production, and different voicing. The goal is to
achieve maximum phonological reorganization with the least
amount of intervention. Selection of targets that are more
distinct from the child’s error (maximal distinction) makes
them more salient and, therefore, presumably more learnable. Further, selection of targets that are representative of
the sound classes collapsed across a phoneme collapse
(maximal classification) provides focused training across a
rule set. Together, these two components of the distance
metric provide the critical “corner pieces” of a “puzzle” using
salient, focused input that will facilitate the child’s phonologic
learning and reorganization.
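A rough sketch of how these two components might be operationalized is shown below. The feature table, ASCII sound labels, and candidate set are simplified assumptions chosen to echo the [t] collapse example discussed next; the code is illustrative rather than Williams’ actual procedure.

```python
# A rough sketch of the distance metric described above: maximal distinction
# counts how many of place, voice, and manner differ from the child's error
# sound, and maximal classification favors a target set that spans different
# manner classes. The feature values and candidate set below are simplified
# assumptions for illustration, not a complete phonetic specification.

FEATURES = {            # sound: (place, voice, manner); "ch"/"sh" stand in for IPA
    "t":  ("alveolar", "voiceless", "stop"),
    "k":  ("velar",    "voiceless", "stop"),
    "s":  ("alveolar", "voiceless", "fricative"),
    "sh": ("palatal",  "voiceless", "fricative"),
    "ch": ("palatal",  "voiceless", "affricate"),
    "tr": ("alveolar", "voiceless", "cluster"),
}

def distinction(error, candidate):
    """Number of dimensions (place, voice, manner) on which candidate differs."""
    return sum(e != c for e, c in zip(FEATURES[error], FEATURES[candidate]))

def select_targets(error, candidates):
    """Prefer maximally distinct candidates while covering distinct manner classes."""
    ranked = sorted(candidates, key=lambda c: distinction(error, c), reverse=True)
    chosen, manners_used = [], set()
    for sound in ranked:
        manner = FEATURES[sound][2]
        if manner not in manners_used:   # maximal classification
            chosen.append(sound)
            manners_used.add(manner)
    return chosen

if __name__ == "__main__":
    # A child collapses /k, s, sh, ch, tr/ to [t] word-initially.
    print(select_targets("t", ["k", "s", "sh", "ch", "tr"]))  # ['sh', 'ch', 'k', 'tr']
```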
As indicated above, guidelines for target selection using this
approach include selecting sounds that represent the extremes of the child’s phonologic organization. For example,
a child collapsed voiceless non-labial obstruents and clusters
to [t] word-initially, which resulted in a 1:7 extensive collapse,
as diagrammed below:
/k, tʃ, s, ʃ, st, sk, tr/ → [t]
Based on the guidelines described above, the target sounds
selected using a systemic approach might include [t] ~ [k, tʃ, s, tr]. The sounds are maximally classified and maximally
distinct from [t] in terms of place (alveolar ~ velar and palatal); manner (stop ~ affricate and fricative); and linguistic unit
(singleton ~ cluster). Notice that selection of these targets
addresses the function of the sounds by expanding the
child’s repertoire of voiceless non-labial obstruents from the
single phoneme of /t/. Thus, the target sounds functioned as
voiceless non-labial obstruents in this child’s system, yet the
characteristics of the target sounds themselves might include
properties that could be dichotomized according to early and
later developing sounds, stimulable and non-stimulable, most
or least knowledge.
In sum, the systemic approach to target selection suggests
that the importance of target sounds is broader than the nature
of the sound itself, but encompasses the function that a given
sound has within the child’s system in its potential to impact
phonologic learning.
The Target Selection Principle is: Select targets on the basis of the function of the sound within the child’s unique
organization of his/her sound system using a distance
metric.
Other Factors: Non-Proportional Contrasts and
Lexical Properties
In addition to the selection of treatment targets, other variables
have recently been identified as important factors in treatment
outcomes. These include the selection of contrastive sounds
in constructing non-proportional minimal pair treatment sets
and selection of treatment words to be used as stimuli. Each
of these factors is
described below.
Non-proportional Contrastive Pairs
Two intervention models (maximal oppositions, cf., Gierut, 1990; and treatment of the empty set, cf., Gierut, 1992) specify the selection of non-proportional contrasts in constructing treatment sets. The construct of proportionality refers to the redundancy and saliency of particular contrasts. For example, t ~ s is a proportional contrast because it is a frequent and nonsalient contrast (t ~ s differs only in manner, and the contrast does not involve a major class feature distinction). In contrast, m ~ s is a non-proportional contrast because it is infrequent but salient (m ~ s differs in place, voice, and manner and involves a major class feature distinction, i.e., sonorant ~ obstruent).
The Selection of Contrastive Sound(s) Principle is: Select non-proportional contrasts to facilitate system-wide change.
Lexical Properties
Recently, several investigations have examined the effects of the lexical properties of treatment words on treatment outcomes in
phonological intervention (cf., Gierut & Morrisette, 1998;
Gierut, Morrisette, & Champion, 1999; Gierut & Storkel,
2001; Morrisette, 1999; Morrisette & Gierut, 2002; Storkel
& Rogers, 2000). Two aspects of the lexical properties of
treatment words have been identified to impact phonologic
learning: frequency and density. Frequency refers to the
number of occurrences of a given word in a language. High
frequency words are recognized faster than low frequency
words. Neighborhood density refers to the number of phonetically similar counterparts to a word based on one sound substitution, deletion, or addition. For example, “feet” is in the neighborhood with “fleet,” “meet,” “fee,” and “eat.” Based on the research by Gierut and her colleagues, treatment words should be selected that are either high frequency or low density. Treatment words that have either of these characteristics are more facilitative of generalization than words that are low frequency or high density. An example of a good treatment word would be the word “these,” which is a low density and high frequency word. An example of a poor treatment word would be “cheese,” which is a high density and low frequency word.
The Selection of Treatment Words Principle is: Select treatment words that are either high frequency or low density to facilitate generalization.
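As a rough illustration of how these lexical properties could be operationalized, the sketch below counts a word's neighbors (forms differing by one segment substitution, deletion, or addition) and flags candidate treatment words that are either high frequency or low density. The toy lexicon, the simplified segmental spellings, the frequency counts, and the cutoff values are hypothetical; clinical application would draw on phonemic transcriptions and normative frequency and density databases.

# Minimal sketch with hypothetical data: neighborhood density as the number of
# one-edit neighbors, plus a simple "high frequency or low density" filter.
def one_edit_apart(a, b):
    """True if b differs from a by one substitution, deletion, or addition."""
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):                       # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted((a, b), key=len)  # one deletion or addition
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

# Hypothetical lexicon: word -> (simplified segmental form, invented frequency count).
LEXICON = {
    "feet":  ("fit",  80),
    "fleet": ("flit",  5),
    "meet":  ("mit",  60),
    "fee":   ("fi",   40),
    "eat":   ("it",   70),
    "seat":  ("sit",  30),
    "these": ("Diz",  95),   # "D" stands in for the initial consonant of "these"
}

def neighborhood_density(word):
    segs = LEXICON[word][0]
    return sum(one_edit_apart(segs, other) for w, (other, _) in LEXICON.items() if w != word)

def facilitative(word, high_frequency=50, low_density=2):
    """Either high frequency or low density is expected to favor generalization."""
    frequency = LEXICON[word][1]
    return frequency >= high_frequency or neighborhood_density(word) <= low_density

for word in LEXICON:
    print(word, neighborhood_density(word), LEXICON[word][1], facilitative(word))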
Table 2. Comparison of Target Selection Factors.

Treatment Variables | Examples of Intervention Approaches | Examples of Treatment Targets (target sound is underlined)
Target Selection: Traditional | Minimal Pairs | t ~ f / # ___ (early developing; stimulable; most knowledge)
Target Selection: Non-traditional | Minimal Pairs (MP); Maximal Oppositions (MO); Empty Set (ES); Multiple Oppositions (Mult Opp) | t ~ t∫ / # ___ (MP); w ~  / # ___ (MO); r ~ z / # ___ (ES); t ~ {k, ∫, t∫, tr} / # ___ (Mult Opp) (later developing; non-stimulable; least knowledge)
Target Selection: Markedness | Minimal Pairs; Maximal Oppositions; Multiple Oppositions | ∅ ~ z / ___ # (MP; MO) (fricatives imply stops); ∅ ~ {t, f, z, t∫} / ___ # (Mult Opp) (fricatives imply stops; voiced implies voiceless; affricates imply stops and fricatives)
Target Selection: Systemic Approach | Multiple Oppositions | ∅ ~ {d, f, g, t∫, st} / # ___ (distance metric: maximally distinct and maximally classified)
Selection of Contrastive Sounds (non-proportional contrasts) | Maximal Oppositions; Empty Set | m ~ s / # ___ (MO) (maximally distinct; non-proportional contrast; one target sound); r ~ s / # ___ (ES) (maximally distinct; non-proportional contrast; two target sounds)
Selection of Treatment Words (lexical properties) | Any treatment approach | t ~ t∫ / # ___: tech ~ check (low density/high frequency)
Pulling it All Together
A comparison of the target selection factors is summarized in
Table 2 above. This table provides examples of treatment
targets within specific intervention approaches for the six different selection variables that were presented in this article
(traditional, non-traditional, markedness, systemic approach,
non-proportional contrasts, and lexical properties).
In the first row of this table, an example of a traditional target
selected for a minimal pair contrastive approach would be t ~
f /# ___ (word-initially) in which /f/ was selected as the target
because it is an early developing sound; we will assume that
the child is stimulable for /f/ and further that /f/ represents
“most knowledge” in the child’s sound system. In contrast, the
next row provides examples for several different intervention
approaches under a non-traditional approach to target selection. Following the previous example with the error substitution
of [t], a minimal pair approach would select /t∫/ as the target
because it is a later developing, presumably non-stimulable
sound of which the child has least knowledge. A maximal
oppositions approach would utilize a non-proportional
contrast that would pair a sound that is independent of the
child’s error, that is, [w], with a target sound that is maximally
different, later developing, non-stimulable, and represents
least knowledge. The empty set approach would also utilize
non-proportional contrasts by pairing two unknown sounds
(i.e., two target sounds) that are maximally different, later
developing, non-stimulable, and represent least knowledge. A
further example of non-proportional contrasts is shown in the
fifth row with examples of treatment targets for the maximal
opposition and empty set intervention approaches. Finally,
the multiple oppositions approach would select several target sounds to contrast with the child’s error substitution, [t],
utilizing a distance metric. Based on the distance metric, the
targets could be either early or later developing, stimulable
or non-stimulable, and represent most or least knowledge.
Another example of the systemic approach to target selection is shown in the fourth row with the multiple oppositions
intervention approach.
Markedness is represented in the third row for minimal pairs,
maximal oppositions, and multiple oppositions approaches in
which a fricative, /z/, was selected to contrast with null word-finally because the presence of fricatives implies stops. Thus,
training fricatives word-finally should result in the inclusion of
not only the trained class of fricatives, but also the untrained
class of stops word-finally. The markedness construct is further utilized in the multiple oppositions approach by selecting
a voiced fricative (voiced implies voiceless consonants) and
an affricate (affricates imply stops and fricatives).
The final row represents the influence of lexical properties on
the selection of treatment words to be used in intervention.
Lexical properties can be used in selecting treatment words
that will be used within any treatment approach. An example
of word pairs for the contrast t ~ t∫/ # ___ would be “tech”
~ “check” in which the target sound /t∫/ is trained in a high
frequency word that has a low neighborhood density.
Conclusion
In conclusion, it is clear that selection of treatment targets
is the critical link between assessment and treatment in
obtaining effective and efficient treatment outcomes. In this
article, differences between traditional and non-traditional
methods of target selection were discussed as they differ
in terms of phonological complexity, as well as in their perspective of phonologic learning (i.e., sound learning versus
system-wide change). Although several studies were cited that
provide evidence that teaching more complex sounds results
in greater overall change in a child’s sound system, recent
studies have challenged some of these findings. Additional
factors were described for selection of treatment targets,
including markedness and the systemic function of sounds
within a given sound system. Finally, construction of non-proportional contrastive sound pairs and lexical properties of the
treatment words were discussed as additional variables that
influence treatment outcomes. Clearly, additional research is
needed to further examine the comparative efficacy of different target selection criteria on treatment outcomes. Further,
the possibility of additive relationships between the different
variables of the treatment input on treatment outcomes needs
to be examined.
References
Camarata, S. M., & Nelson, K. E. (1992). Treatment efficiency as a
function of target selection in the remediation of child language
disorders. Clinical Linguistics & Phonetics, 6(3), 167-178.
Forrest, K., Elbert, M., & Dinnsen, D. A. (2000). The effect of substitution patterns on phonological treatment outcomes. Clinical
Linguistics & Phonetics, 14, 519-531.
Forrest, K., Weismer, G., Dinnsen, D. A., & Elbert, M. (1994). Spectral
analysis of target appropriate /t/ and /k/ produced by phonologically disordered and normally articulating children. Clinical
Linguistics & Phonetics, 8, 267-282.
Gierut, J. A. (1990). Differential learning of phonological oppositions.
Journal of Speech and Hearing Research, 33, 540-549.
Gierut, J. A. (1992). The conditions and course of clinically induced
phonological change. Journal of Speech and Hearing Research,
35, 1049-1063.
Gierut, J. A., Elbert, M., & Dinnsen, D. A. (1987). A functional analysis of phonological knowledge and generalization learning
in misarticulating children. Journal of Speech and Hearing
Research, 30, 462-479.
Gierut, J. A., & Morrisette, M. L. (1998). Lexical properties in implementation of sound change. In A. Greenhill, M. Hughes, H.
Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd annual
Boston University conference on language development (pp.
257-268).
Gierut, J. A., Morrisette, M. L., Hughes, M. T., & Rowland, S. (1996).
Phonological treatment efficacy and developmental norms.
Language, Speech, and Hearing Services in Schools, 27,
215-230.
Gierut, J. A., & Storkel, H. L. (2002). Markedness and the grammar
in lexical diffusion of fricatives. Clinical Linguistics & Phonetics, 16(2), 115-134.
Hodson, B. W., & Paden, E. P. (1991). Targeting intelligible speech:
A phonological approach to remediation (2nd ed.). Austin,
TX: PRO-ED.
Khan, L. M., & Lewis, N. P. (1990). Phonological process therapy in
school settings: The bare essentials to meeting the ultimate
challenge. National Student Speech Language Hearing Association Journal, 17, 50-58.
Miccio, A. W., Elbert, M., & Forrest, K. (1999). The relationship between stimulability and phonological acquisition in children with
normally developing and disordered phonologies. American
Journal of Speech-Language Pathology, 8, 347-363.
Morrisette, M.L. (1999). Lexical characteristics of sound change.
Clinical Linguistics & Phonetics, 13(3), 219-238.
Morrisette, M. L., & Gierut, J. A. (2002). Lexical organization and
phonological change in treatment. Journal of Speech, Language, and Hearing Research, 45, 143-159.
Powell, T. W., Elbert, M., & Dinnsen, D. A. (1991) Stimulability as
a factor in the phonological generalization of misarticulating
preschool children. Journal of Speech and Hearing Research,
34, 1318-1328.
Rvachew, S., & Nowak, M. (2001). The effect of target-selection strategy on phonological learning. Journal of Speech, Language,
and Hearing Research, 44, 610-623.
Rvachew, S., Rafaat, S., & Martin, M. (1999). Stimulability, speech
perception skills, and treatment of phonological disorders.
American Journal of Speech-Language Pathology, 8, 33-43.
Shriberg, L. D., & Kwiatkowski, J. (1982). Phonological disorders II:
A conceptual framework for management. Journal of Speech
and Hearing Disorders, 47, 242-256.
Storkel, H. L., & Rogers, M .A. (2000). The effect of probabilistic
phonotactics on lexical acquisition. Clinical Linguistics &
Phonetics, 14(6), 407-425.
Tyler, A. A., & Figurski, G .R. (1994). Phonetic inventory changes
after treating distinctions along an implication hierarchy. Clinical
Linguistics & Phonetics, 8, 91-107.
Tyler, A. A., & Saxman, J. H. (1991). Initial voicing contrast acquisition in normal and phonologically disordered children. Applied
Psycholinguistics, 12, 453-479.
Williams, A. L. (2000a). Multiple oppositions: Theoretical foundations for an alternative contrastive intervention approach. American Journal of Speech-Language Pathology, 9, 282-288.
Williams, A. L. (2000b). Multiple oppositions: Case studies of variables in phonological intervention. American Journal of Speech-Language Pathology, 9, 289-299.
Computer Applications and Treatment Outcomes
Susan Rvachew
School of Communication Sciences and Disorders, McGill University
Montreal, Quebec, Canada
Treatment of developmental speech sound disorders requires
that the speech-language pathologist identify the source of
the problem and then design an effective treatment program.
Computer software can help with this process, but one must
ask whether there is research evidence that the tools chosen
are effective and/or that they can be effectively and efficiently
applied with a given client in a particular clinical setting. This
article will examine some of the available software with these
questions in mind.
The applications chosen were consistent with the theoretical
perspective articulated by Rvachew and Jamieson (1995).
We proposed that all speech sound errors have the same
proximal causes, either (a) a mismatch between the standard
underlying representation (UR) for a given phoneme contrast
and the learner’s UR for that same phoneme contrast, and/or
(b) a mismatch between the learner’s UR and the learner’s
surface form.
Assessment
A detailed description of the child’s underlying phonology
is a critical first step to the success of any intervention. The
Computerized Articulation and Phonology Evaluation System
(CAPES; Masterson & Bernhardt, 2002) provides such a description quickly and easily. CAPES elicits a speech sample
by presenting the child with digitized still photographs for
labeling. The child’s responses can be recorded and a transcription of the responses entered quickly with a user-friendly
interface. The software generates an initial description of the
resulting data in the form of the traditional chart showing the
child’s omission, substitution, and distortion errors for each
consonant and consonant sequence in the initial, medial, and
final positions of words. Analyses that describe the child’s
productions in terms of word length, syllable stress, word
shape, place, manner, and voice features, and phonological
processes can also be generated. Presumably, these detailed
analyses would lead to the implementation of a linguistic intervention approach. Experimental studies have demonstrated
the effectiveness of phonological process (Almost & Rosenbaum, 1998) and minimal pair approaches (Weiner, 1981),
and many case studies are appearing that suggest that approaches based on nonlinear phonology (Bernhardt & Gilbert,
1992; Ed: see also Bernhardt, this issue) and optimality theory
(Barlow, 2001) may also be effective. I am not aware of any
studies that show that access to detailed linguistic analyses
leads to more effective treatment, but this gap could easily
be filled now that this tool is available for clinical use.
CAPES provides a detailed description of the child’s error patterns and points to reasonable hypotheses about the nature
of the child’s URs. Perceptual analyses (i.e., transcription) are
often not definitive in this regard however. For example, Tyler,
Edwards, and Saxman (1990) conducted acoustic analyses of
children’s efforts to produce /t/ and /k/. Some children whose
productions of these targets were consistently perceived to
be [t] articulated target /t/ and target /k/ with consistent but
subperceptual acoustic differences, indicating that a contrast
between coronal and dorsal place was represented underlyingly. Baum and McNutt (1990) and Forrest, Weismer, Hodge,
Dinnsen, and Elbert (1990) reported similar findings for stop
and fricative place contrasts. The existence of such “covert
contrasts” has important implications for the choice of treatment approach and for the child’s rate of progress in speech
therapy (specifically, progress in speech therapy is superior for
children with adult-like URs). Covert contrasts can be identified using speech analysis software or electropalatography,
tools that will be discussed in more detail below in the context
of treatment procedures.
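As an illustration of what such an acoustic check might look like, the sketch below compares a single hypothetical measurement (a burst spectral mean in Hz) taken from a child's attempts at target /t/ versus target /k/, all of which were transcribed as [t]. The values and the use of a standardized mean difference are invented for demonstration; the studies cited above used their own acoustic analyses and statistics.

import math
import statistics

# Hypothetical burst spectral means (Hz) from productions all transcribed as [t].
target_t_attempts = [4200, 4350, 4100, 4280, 4400]
target_k_attempts = [3600, 3750, 3550, 3700, 3680]

def cohens_d(a, b):
    """Standardized mean difference between two sets of measurements."""
    s1, s2 = statistics.stdev(a), statistics.stdev(b)
    pooled = math.sqrt(((len(a) - 1) * s1**2 + (len(b) - 1) * s2**2) / (len(a) + len(b) - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled

d = cohens_d(target_t_attempts, target_k_attempts)
print(f"/t/ attempts: {statistics.mean(target_t_attempts):.0f} Hz; "
      f"/k/ attempts: {statistics.mean(target_k_attempts):.0f} Hz; d = {d:.1f}")
# A large, consistent difference despite identical transcriptions suggests a covert contrast.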
Information about the child’s ability to perceive phonemic
contrasts may also reveal a primary problem with the nature
of the child’s URs (Lof, 1996). The Speech Assessment and
Interactive Learning System (SAILS, 1997) is a tool for assessing young children’s perceptual phonological knowledge
using a computer game format. Rvachew, Rafaat, and Martin
(1999) found that phonological process therapy was more
effective when the child had good perceptual phonological
knowledge of their target phonemes prior to treatment, relative to progress for target sounds that were associated with
poor pretreatment perceptual phonological knowledge. This
result parallels the finding of Tyler and colleagues (1990) with
respect to the impact of the child’s underlying knowledge on
treatment success. SAILS will be discussed in more detail
as a treatment tool below.
A systematic oral-peripheral examination can provide evidence that the child’s speech errors are based at the articulatory level, rather than (or in addition to) the phonological level.
One element of the oral examination that is frequently and
regrettably omitted is the precise measurement of the child’s
alternate motion and diadochokinetic rates. Williams and
Stackhouse (2000) found that even 3-year-olds can imitate
multisyllabic real words and nonwords at a rate of 3 syllables
per second without any consonant omission errors. Although
these tasks are not difficult for preschool age children, accurate measurement of repetition rates can be tricky for the
speech-language pathologist using the tally-and-stopwatch
method (Rvachew, Ohberg, & Savage, 2006). Very precise
measurement of repetition rates can be easily accomplished
using any software that provides a waveform display such as
those discussed below under Treatment: Articulation. Software
to facilitate the recording of syllable repetitions from young
children has recently been made freely available (Hodge &
Daniels, 2004; for a tutorial on the use of the TOCS+™ MPT
Recorder© ver. 1, see Rvachew, Hodge, & Ohberg, 2005).
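The arithmetic involved is simple once syllable onsets can be read off a waveform; the sketch below (with invented onset times) turns cursor readings into a rate in syllables per second, which can then be compared against the roughly 3 syllables per second reported by Williams and Stackhouse (2000).

# Illustrative sketch: repetition rate from syllable onset times (seconds) read
# off a waveform display. The onset times below are invented example values.
def syllables_per_second(onset_times):
    """Rate computed over the span from the first to the last syllable onset."""
    if len(onset_times) < 2:
        raise ValueError("need at least two syllable onsets")
    return (len(onset_times) - 1) / (onset_times[-1] - onset_times[0])

onsets = [0.12, 0.41, 0.70, 1.02, 1.33, 1.66, 1.98, 2.31]
print(f"{syllables_per_second(onsets):.1f} syllables per second")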
Treatment: Underlying Representations
Many children with speech sound errors have poor perceptual
knowledge of the acoustic cues that define the misarticulated
phoneme in contrast to the child’s substitution for that phoneme (see Rvachew & Jamieson, 1995 for a review). SAILS
was designed to directly address these difficulties by teaching
the child to identify well-produced and misarticulated versions
of the target phoneme. The program consists of two different
word modules for each of the following consonants: /k, l, r, f,
, s, ∫, t∫/. Each module is comprised of natural speech tokens
of a single word. All misarticulated words were authentically
produced by a child with a phonological delay (i.e., the test
does not involve simulated speech sound errors). The basic
procedure is common to several well-known phonological
interventions (e.g., sensory-perceptual training—traditional
approach, Van Riper & Emerick, 1984; auditory bombardment—cycles approach, Hodson & Paden, 1983; developing awareness of the properties of phonemes—metaphon
approach, Dean & Howell, 1986). However the stimuli themselves are unique to the computer program and could not
be implemented without such technology—specifically, the
program allows the clinician to provide the child with exposure
to multiple talkers and to stimuli that range in quality from
marginal to prototypical exemplars of the targeted sound
category. The results of a randomized-control study suggest
that these particular aspects of the program may be critical to
its success (Rvachew, 1994). In this study, 4-year-old children
were randomly assigned to three treatment conditions, each
of which included 6 weeks of traditional articulation therapy
targeting /∫/, as well as concurrent experience with the SAILS
computer game. Each of the three treatment groups listened
to different stimuli, however: (a) multiple correct and incorrect
recordings of the word shoe; or (b) a single correct version
of the word shoe contrasted with moo; or (c) a single version
of the word cat contrasted with Pete. The results indicated
superior gains in articulation accuracy for the group that received the standard SAILS shoe module. The children who
listened to “shoe” and “moo” also made better progress than
the control group, but their gains were not as great as those
of the experimental group. Subsequent to this initial treatment
efficacy study, it has been demonstrated that the use of this
program enhances the effectiveness of the cycles approach
to treatment (Rvachew et al., 1999). Clinical effectiveness of
the program has also been demonstrated in a study in which
undergraduate students provided either SAILS or a literacy-focused intervention to children who continued to receive
speech therapy from their speech-language pathologists
(Rvachew, Nowak, & Cloutier, 2004). In contrast to previous studies, the clinicians were free to provide the type and
amount of speech therapy that they judged to be appropriate
and the SAILS program was not coordinated with each child’s
specific therapy goals. Statistically significant differences
in post-treatment articulation ability were observed during
picture naming and narrative tasks for the group whose
speech therapy program was supplemented with the SAILS
intervention. This series of studies establishes the efficacy
of linguistic approaches to speech therapy in general, and of
the SAILS program in particular.
Treatment: Articulation
Articulatory learning requires that the child repeatedly practice the achievement of specific targets with feedback and
knowledge of results (Ruscello, 1993). Computer technology
can assist with all of these aspects of motor skill learning.
Repeated practice is facilitated by products such as LocuTour
Articulation (Scarry & Scarry-Larkin, 1997). Achievement of a
specified target is facilitated if the child has an adult-like UR,
a requirement addressed by SAILS as discussed above. The
natural auditory, tactile, and kinesthetic feedback that occurs
during speech production can be supplemented with feedback
from electropalatography, ultrasound, or speech analysis
software. Knowledge of results is typically provided in the
form of clinician- or child-judgments about the accuracy of
the child’s speech sound production. However, IBM’s Speech
Viewer III (2001) contains several modules that attempt to
provide information about the child’s success in achieving a
given target.
Electropalatography (EPG) provides a computer display of the
position and timing of tongue contacts with a custom-made
artificial palate into which rows of electrodes are embedded
(Scobbie, Wood, and Wrench, 2004). Descriptive EPG studies of children with speech delay have yielded three general
findings:
1. Children who produce the same error type (e.g., lateral
distortion of fricatives) can demonstrate widely varying
articulatory patterns when producing perceptually similar
speech sound errors;
2. Some children produce “undifferentiated lingual gestures”
in which large parts of the palate are contacted by the
tongue during the articulation of sounds that would normally be associated with discrete areas of contact; and
3. Some children demonstrate covert contrasts, that is,
varying patterns of articulation for different target sounds
that result in perceptually similar phones.
See Gibbon, Stewart, Hardcastle, and Crampin, 1999, for a
review. Many case studies (e.g., Carter & Edwards, 2004;
Dagenais, Critz-Crosby, & Adams, 1994; Dent, Gibbon, &
Hardcastle, 1995; Howard, 1998) have described a positive
response by children who have failed to respond to conventional forms of speech therapy, and, thus, the results are very
encouraging, even in the absence of randomized-control
trials. However, a great deal more study is required in order
to establish the limits of normal variability in articulatory patterns, to identify those children who are most likely to benefit
from this approach, and to evaluate clinical effectiveness and
cost efficiency.
EPG can provide visual feedback only about the articulation
of sounds that involve lingual-palatal contact. Ultrasound has
recently been adapted for the purpose of providing visual
feedback about tongue shape and movement (Stone, 2005).
Ultrasound can be used to visualize vowel and liquid articulation as well as some aspects of obstruent articulation, such
as the grooving of the tongue for the production of coronal
sibilants. Some case studies in which ultrasound was used
to improve speech articulation in hearing and hearing-impaired adolescents have been published (Bernhardt, Gick, Bacsfalvi, & Adler-Bock, 2005; Bernhardt, Gick, Bacsfalvi,
& Ashdown, 2003).
A more readily accessible tool for providing feedback is
spectrographic analysis, a technology that provides a visual
display depicting formant frequencies over time. Formant
frequencies reflect the shape of the vocal tract, and changes
in formant frequencies over time reflect changes in vocal tract
shapes as speech is articulated. ProTrain 2000 (2000) was
developed specifically for clinical applications and can be
implemented on almost any computer that has a sound card
(to which a microphone can usually be connected). ProTrain
2000 displays two spectrograms at once, the clinician’s model
of a correct production and the child’s attempt to match the
clinician model. The two productions can then be directly
compared, both in terms of the sound of the two productions
and the formant frequency patterns that are shown in the
spectrograms. The child’s task during therapy is to match the
relative positions of certain formants and the way in which
they change with time. For example, in the case of /∫/, the
child is usually required to lower the frequency of the third
formant. Case studies, single-subject experiments,
and larger sample descriptive studies have reported positive
results (e.g., Becker, 1998; Ertmer & Maki, 2000; Masterson &
Rvachew, 1999; Shuster, Ruscello, & Toth, 1995) with the use
of such visual feedback. There are two primary advantages
to this approach:
1. The child is free to discover his or her own articulatory
solution to the challenge of producing the desired formant
changes; and
2. The child is rewarded for producing gradual articulatory
changes that are in the right direction even before they
result in a perceptually correct speech sound.
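The second advantage, rewarding gradual change in the right direction, can be made concrete with a small numeric sketch. Given a clinician-model F3 track and two of the child's attempts (all values below are hypothetical, in Hz, at matching time points), it reports whether the latest attempt has moved closer to the model; ProTrain 2000 itself supports this comparison visually on paired spectrograms, so the scoring here is only an illustration of the underlying idea.

def mean_abs_difference(track_a, track_b):
    """Average absolute difference (Hz) between two formant tracks of equal length."""
    return sum(abs(a - b) for a, b in zip(track_a, track_b)) / len(track_a)

# Hypothetical F3 values (Hz) at matching time points.
model_f3    = [2700, 2500, 2300, 2200]   # clinician model showing F3 lowering
previous_f3 = [3000, 2950, 2900, 2880]   # child's earlier attempt
current_f3  = [2950, 2800, 2650, 2550]   # child's latest attempt

before = mean_abs_difference(previous_f3, model_f3)
after = mean_abs_difference(current_f3, model_f3)
print(f"mismatch before: {before:.0f} Hz; after: {after:.0f} Hz")
if after < before:
    print("closer to the model: reward the change even before it sounds perceptually correct")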
Speech Viewer III (2001) was developed to address the need
for knowledge of results when attempting to achieve a given
target. For example, the vowel module presents a visual
display that shows a monkey advancing up a tree toward a
coconut. The distance between the coconut and the monkey
is intended to represent the closeness of the match between
the acoustic properties of the child’s production and an acoustic template for the target vowel. Modules that target certain
sustained consonants and games that encourage the child
to contrast two phonemes are also available. Unfortunately,
the feedback provided by the software is often inaccurate
or inconsistent. Two studies with subjects who have hearing
impairment have suggested that this software does not offer
any advantages in comparison with traditional interventions
(Pratt, Heintzelman, & Deming, 1993; Ryalls, Michallet, &
Le-Dorze, 1994).
Summary
This brief article shows that there are computer-based tools
available for all phases of phonological intervention. The tools
discussed here are firmly grounded in current knowledge
and theory about the nature of speech sound disorders. The
SAILS approach has been subjected to randomized control
trials and has proven to be effective in the remediation of
concomitant speech perception and speech production difficulties. The use of EPG, ultrasound, and spectrographic
displays to provide feedback about articulation has been
examined in descriptive studies with encouraging results.
Thus far, clinicians are better able than machines to provide
accurate information about the success of the child’s efforts
to produce specific speech sounds. All of the technologies
described here require further testing for efficiency and feasibility of application in a variety of clinical settings, but the
results to date are very promising indeed.
References
Almost, D., & Rosenbaum, P. (1998). Effectiveness of speech intervention for phonological disorders: A randomised control trial.
Developmental Medicine & Child Neurology, 40, 319-325.
Barlow, J. A. (2001). Case study: Optimality theory and the assessment
and treatment of phonological disorders. Language, Speech,
and Hearing Services in Schools, 32, 242-256.
Baum, S., & McNutt, J. C. (1990). An acoustic analysis of frontal
misarticulation of /s/ in children. Journal of Phonetics, 18,
51-63.
Becker, R. (1998). Application of spectrographic feedback for improvement of speech production skills. In W. Ziegler & K. Deger
(Eds.), Clinical phonetics and linguistics. London: Whurr.
Bernhardt, B., Gick, B., Bacsfalvi, P., & Adler-Bock, M. (2005). Ultrasound in speech therapy with adolescents and adults. Clinical
Linguistics & Phonetics, 19(6/7), 605-617.
Bernhardt, B., Gick, B., Bacsfalvi, P., & Ashdown, J. (2003). Speech
habilitation of hard of hearing adolescents using electropalatography and ultrasound as evaluated by trained listeners.
Clinical Linguistics and Phonetics, 17(3), 199-216.
Bernhardt, B., & Gilbert, J. (1992). Applying linguistic theory to
speech-language pathology: The case for nonlinear phonology.
Clinical Linguistics & Phonetics, 6, 123-145.
Carter, P., & Edwards, S. (2004). EPG therapy for children with
long-standing disorders: predictions and outcomes. Clinical
Linguistics & Phonetics, 18, 359-372.
Dagenais, P., Critz-Crosby, P., & Adams, J. B. (1994). Defining and
remediating persistent lateral lisps in children using electropalatography: Preliminary findings. American Journal of
Speech-Language Pathology, 3, 67-76.
Dean, E., & Howell, J. (1986). Developing linguistic awareness: A
theoretically based approach to phonological disorders. British
Journal of Disorders of Communication, 31, 223-238.
Dent, H., Gibbon, F., & Hardcastle, B. (1995). The application of electropalatography (EPG) to the remediation of speech disorders
in school-age children and young adults. European Journal of
Disorders of Communication, 30, 264-277.
Ertmer, D. J., & Maki, J. E. (2000). A comparison of speech training methods with deaf adolescents: Spectrographic versus
noninstrumental instruction. Journal of Speech, Language, &
Hearing Research. 43, 1509-1523.
Forrest, K., Weismer, G., Hodge, M., Dinnsen, D. A., & Elbert, M.
(1990). Statistical analysis of word-initial /k/ and /t/ produced
by normal and phonologically disordered children. Clinical
Linguistics and Phonetics, 4, 327-340.
Gibbon, F., Stewart, F., Hardcastle, W. J., & Crampin, L. (1999).
Widening access to electropalatography for children with
persistent sound system disorders. American Journal of
Speech-Language Pathology, 8, 319-334.
Hodge, M. M. & Daniels, J. D. (2004). TOCS+™ MPT Recorder©
ver.1. [Computer software]. University of Alberta, Edmonton,
AB.
Hodson, B., & Paden, E. (1983). Targeting intelligible speech: A phonological approach to remediation. San Diego: College Hill.
Scarry, J., & Scarry-Larkin, M. (1997). LocuTour Articulation [Computer Software]. San Luis Obispo, CA: LocuTour Multimedia.
Scobbie, J. M., Wood, S. E., & Wrench, A. A. (2004). Advances in
EPG for treatment and research: an illustrative case study.
Clinical Linguistics & Phonetics, 18, 373-389.
Shuster, L. I., Ruscello, D. M., & Toth, A. R. (1995). The use of visual
feedback to elicit correct /r/. American Journal of Speech-Language Pathology, 4, 37-44.
Howard, S. (1998). A perceptual and electropalatographic case study
of Pierre Robin sequence. In W. Ziegler & K. Deger (Eds.),
Clinical phonetics and linguistics. London: Whurr.
Speech Assessment and Interactive Learning System (SAILS) [Computer software]. (1997). Ontario, Canada: AVAAZ Innovations.
Speech Viewer III for Windows [Computer software]. (2001).
Armonk, NY: IBM Corporation.
Lof, G. L. (1996). Factors associated with speech-sound stimulability.
Journal of Communication Disorders, 29, 255-278.
Stone, M. (2005). A guide to analyzing tongue motion from ultrasound
images. Clinical Linguistics & Phonetics, 19, 455-501.
Masterson, J., & Bernhardt, B. (2002). Computerized Articulation and
Phonology Evaluation System (CAPES) [Computer software].
San Antonio, TX: The Psychological Corporation.
Tyler, A. A., Edwards, M. L., & Saxman, J. H. (1990). Acoustic validation of phonological knowledge and its relationship to treatment.
Journal of Speech and Hearing Disorders, 55, 251-261.
Masterson, J. J., & Rvachew, S. (1999). Use of technology in phonology intervention. Seminars in Speech and Language, 20,
233-250.
Van Riper, C., & Emerick, L. (1984). Speech correction: An introduction to speech pathology and audiology. Englewood Cliffs, NJ:
Prentice-Hall.
Pratt, S. R., Heintzelman, A. T., & Deming, S. E. (1993). The efficacy
of using the IBM Speech Viewer Vowel Accuracy Module to
treat young children with hearing impairment. Journal of Speech
and Hearing Research, 36, 1063-1074.
Weiner, F. (1981). Treatment of phonological disability using the
method of meaningful minimal contrast: Two case studies.
Journal of Speech and Hearing Disorders, 46, 97-103.
ProTrain 2000 [Computer software]. (2000). Ontario, Canada: AVAAZ
Innovations.
Williams, P., & Stackhouse, J. (2000). Rate, accuracy and consistency:
Diadochokinetic performance of young, normally developing
children. Clinical Linguistics and Phonetics, 14, 267-293.
Ruscello, D. M. (1993). A motor skill learning treatment program for
sound system disorder. Seminars in Speech and Language,
14, 106-118.
Rvachew, S. (1994). Speech perception training can facilitate sound
production learning. Journal of Speech and Hearing Research,
37, 347-357.
Rvachew, S., Hodge, M., & Ohberg, A. (2005). Obtaining and interpreting maximum performance tasks from children: A tutorial.
Journal of Speech-Language Pathology and Audiology, 29(4),
146-156.
Rvachew, S., Ohberg, A., & Savage, R. (2006). Young children’s
responses to maximum performance tasks: Preliminary data
and recommendations. Journal of Speech-Language Pathology and Audiology, 30(1), 6-13.
Rvachew, S., Nowak, M., & Cloutier, G. (2004). Effect of phonemic
perception training on the speech production and phonological awareness skills of children with expressive phonological
delay. American Journal of Speech-Language Pathology, 13,
250-263.
Rvachew, S., & Jamieson, D. G. (1995). Learning new speech contrasts: Evidence from adults learning a second language and
children with speech disorders. In W. Strange (Ed.), Speech
perception and linguistic experience: Theoretical and methodological issues in cross-language speech research (pp.
411-432). Timonium, MD: York.
Rvachew, S., Rafaat, S., & Martin, M. (1999). Stimulability, speech
perception and the treatment of phonological disorders. American Journal of Speech-Language Pathology, 8, 33-43.
Ryalls, J., Michallet, B., & Le-Dorze, G. (1994). A preliminary evaluation of the clinical effectiveness of vowel training for hearing-impaired children on IBM’s Speech Viewer. Volta Review,
96, 19-30.
Phonological Awareness and Treatment Outcomes
Heidi M. Harbers
Department of Speech Pathology and Audiology, Illinois State University
Normal, IL
Phonological awareness refers to the ability to attend to sound units of the language to the exclusion of meaning. Phonological awareness includes awareness of words, syllables, intrasyllabic units (i.e., onsets and rimes), and individual sounds in syllables and words. This final level of awareness, termed phonemic awareness, is the most complex and important to attain in order to learn to read and spell. Because of the hierarchical nature of phonological awareness, children come to phonemic awareness after they attend to larger units of sound (e.g., syllables, rimes).
There are strong indicators suggesting that the period between ages 2 and 4 years is a very active time in the development of phonological awareness (Chaney, 1992). Although emerging early, these skills develop gradually over time (Chaney, 1992; Smith & Tager-Flusberg, 1982) in conjunction with the development of spoken language skills. Phonological awareness skills develop from implicit understanding to a more explicit level (Ball, 1993; Stackhouse, 1997). For children to understand the alphabetic principle in reading instruction, an explicit understanding of phonemes is needed in order for them to link speech to print.
Early delays in phonology may impede the development of underlying phonological representations, thereby setting the stage for later phonological awareness deficiencies (Fowler, 1991; Studdert-Kennedy, 1987). Significant differences in phonological awareness skills have been found when comparing children with phonological impairments with children who are considered to be phonologically normal (Bird, Bishop, & Freeman, 1995; Webster & Plante, 1992). In addition, the severity of phonological impairment was found to be a predictor of phonological awareness and literacy skills in both preschoolers and school-aged children with phonological impairments (Bird et al., 1995; Catts, 1993; Larrivee & Catts, 1999; Webster & Plante, 1992). Children with phonological impairments may have difficulty reaching the phonemic level of awareness. Bird and Bishop (1992) suggested that children with expressive phonological impairments may operate at the level of the syllable and are not able to identify smaller segments (sounds) at all, thus preventing them from identifying the phonological units represented by graphemes.
Although Bird and colleagues (1995) found that children with persisting phonological impairments (not resolved by 5½ years) appear to be more at-risk for literacy problems, two retrospective studies raise questions as to how well conventional phonological intervention (i.e., without explicit work on phonological awareness) develops phonological awareness skills in children with expressive phonological impairments. Clarke-Klein and Hodson (1995) found that children who had histories of severe speech production disorders, but who had attained an adult sound system through intervention, evidenced significantly poorer phonological awareness skills and displayed poorer spelling strategies in third grade when compared to children with no histories of speech production disorders. Lewis and Freebairn (1992) also found that remnants of a preschool phonology disorder were detectable past grade school age and into adulthood, affecting the areas of phonological awareness and production, as well as spelling and reading.
How well phonological intervention assists in the development of phonological awareness skills in children with expressive phonological impairment has been directly examined in only three published studies. Howell and Dean (1991) examined patterns of phonological change and their association with metalinguistic change in 13 preschool children (mean age 4;1; range 3;7-4;7). Using their Metaphon approach to intervention, which specifically addresses phonological awareness when addressing intelligibility, each child attended one individual 30-minute intervention session per week (mean length of intervention was 22.5 weeks; range 11-34 weeks). Phonological production and metalinguistic skills (sentence and word segmentation, repair strategy use, and syntactic awareness) were measured before and after intervention. Comparison of pre- and post-intervention scores indicated a significant difference in phonological production scores and in the metalinguistic tasks that required phonological awareness skills (i.e., repair strategy use, and phoneme/sentence segmentation). Harbers, Paden, and Halle (1999) examined changes in awareness of sound class features and syllable shapes along with production in four preschool children who participated in an intervention program that combined components of the Metaphon (Howell & Dean, 1991) and the cycles approach (Hodson & Paden, 1991). For all participants, production of most of the five phonological patterns improved in conversation by an average of 50% (range 15% to 96%) after only 6 to 9 months of intervention. Although improvement was noted in feature awareness for all phonological patterns, the effects of intervention on awareness were not as immediate as production gains or similar across the four children. Using older children (5;6-7;6; mean age 6;1), Gillon (2000) compared the effects of phonological awareness instruction specifically on reading outcomes in children with spoken language impairment. One group of children received phonological awareness intervention (emphasizing development of skills at the phonemic level and letter-sound knowledge training), while another group of children received traditional intervention that focused on placement and sound production. Two additional groups served as experimental controls. After receiving 20 hours of intervention (in 1-hour individual sessions), the children receiving phonological awareness instruction made significantly more improvement in their phonological awareness, word decoding, and reading comprehension skills than did the children who received traditional intervention. Additionally, children who received phonological intervention showed more improvement in speech production when compared to the improvements made by the children in the other groups. Although traditional intervention was effective in improving the children’s speech production, it had little effect on developing phonemic awareness or reading skills.
Children with underlying phonological disorders are more vulnerable to reading and spelling difficulties than children with
articulation problems because of the linguistic involvement
associated with a phonological disorder (Stackhouse, 1985).
Without measurement to determine otherwise, phonemic
awareness skills may be insufficiently developed in the intervention process. For children with phonological impairments,
the use of traditional methods focusing on the articulatory
aspect of sound production may ignore the underlying representation and organization of a child’s sound system. Without
explicit focus on improving awareness of phonemes, children
may not be able to access the needed sound information
for literacy tasks (Gillon, 2000). Unless clinicians thoroughly
understand the concept of phonological awareness and its
role in phonological intervention and literacy, they may easily
teach it in ways that produce little benefit to children.
Implications for Clinical Applications
Because spoken language provides the foundation for the
development of reading and writing skills, current roles and
responsibilities with respect to reading and writing for speech-language pathologists as outlined by the Ad Hoc Committee
on Reading and Written Language Disorders (ASHA, 2001)
include prevention, early identification, assessment, intervention, collaboration and advocacy. Clinicians need to be both
informed and proactive if they are to identify and promote the
underlying skills that contribute to literacy.
Assessment
Assessing both awareness and production skills at the time of diagnosis is important to learn more about the extent of the sound system disorder. Harbers and colleagues (1999) suggested that assessing both awareness and production may indicate when each component should be emphasized during intervention. Gathering information about literacy socialization from caregivers would be especially valuable and may help to direct efforts in involving the family in the intervention process. (See Jenkins & Bowen, 1994, for an example of a checklist.) If the child is 4 years of age or older, it is reasonable to directly assess phonological awareness skills, along with production, at the initial evaluation when determining the existence of a phonological disorder to gain baseline data.
By monitoring phonological awareness development throughout the course of intervention, clinicians will be able to identify the presence or absence of early literacy achievement. Conducting informal probes during the intervention process is also helpful in determining whether or not an increase in intelligibility is accompanied by gains in awareness. The information gained from these probes may assist in directing the focus of intervention. Phonological awareness skills of blending and segmenting sentences, syllables, and onsets/rimes are developmentally appropriate for preschool children and are necessary steps in reaching the phonemic level of awareness. For older preschool children and those in kindergarten or first grade, examining spelling skills would be extremely valuable in determining whether a conscious awareness of sounds is present and how that awareness is represented by print. The Test of Invented Spelling (Mann, Tobin, & Wilson, 1987) and the phonemic awareness subtest of the Early Reading Screening Instrument (Morris, 1992) are useful invented spelling screening tools.
Clinicians need to verify that phonemic awareness skills have also developed in the course of intervention and are sufficient to support the child when learning how to read and spell. For a child who is being considered for discharge, it is imperative to consider his/her phonological processing skills (i.e., phonological awareness, memory, multisyllable word production, and rapid automatic naming) with post-intervention production skills. The Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999) is a useful tool for this purpose and can be administered to children as young as 5 years of age. The Phonological Awareness Test (Robertson & Salter, 1997) is a comprehensive assessment tool that focuses on phonological awareness (syllable, rhyme, and phoneme levels) and includes sections that examine print knowledge. Several nonstandardized measures are available that assess varying levels of awareness. (See Justice, Invernizzi, & Meier, 2002; and Torgesen & Mathes, 2000 for a review.)
A decision tree outlined by Justice and colleagues (2002) provides questions that are aimed at guiding practitioners to select those candidates who would benefit from early literacy screening. Interestingly, the suggested outcomes are either to conduct a screening or informally monitor early literacy knowledge, both roles that can be implemented by speech-language pathologists.
Intervention
When discussing the preparation of preschoolers with specific language impairment for school, Fey, Catts, and Larrivee (1995) stated that “...it is reasonable, if not advisable, to include structured phonological awareness activities in a comprehensive intervention package for preschoolers with language impairments” (p. 17). Programs promoting phonological awareness in preschool children at risk for literacy problems may help to prevent the full impact of a phonological impairment on later literacy development (Stackhouse, 1997).
Hodson (1994) contends that, for children with unintelligible
speech, intervention should aim at increasing intelligibility and
improving emergent literacy skills. To increase intelligibility,
cognitive reorganization should be emphasized rather than
simply articulatory retraining (Grunwell, 1982). For reorganization to take place, children require information that will
encourage them to make their own changes (Howell & Dean,
1991). Gains in intelligibility, however, do not automatically
affect phonological awareness skills. Richgels (2001) asserts
that phonemic awareness is not needed in spoken language,
but is required in order to link speech and print. He stated
that the ability to combine and contrast phonemes in speech
is unconscious perception of phonemes rather than awareness. Because of this, our interventions need to assist children in reaching the phonemic level of awareness.
Gillon (2000) emphasized that the ability to consciously access information about sounds needs to be made explicit for
children with phonological impairment, so that they will be
able to link spoken to written language.
Activities to promote children’s explicit awareness will be
successful only when adults are explicit with their own awareness. Making knowledge explicit tends to be easier for large
segments (syllables, onsets/rimes) than for small segments,
such as sounds (Goswami, 2001). Jenkins and Bowen (1994)
provide suggestions for using the phonological awareness
hierarchy (words, syllables, onsets/rimes, and sounds) as a
framework for phonological intervention. Syllable awareness
can be promoted by clapping out the number of syllables in
multisyllable words. When targeting final consonant singletons or sequences, rhyme families can be utilized. Younger
children can be encouraged to produce real and nonsense
word rhymes to promote sound play. Alliteration (i.e., maintaining the onset of a word) can become a common sound play
clinicians utilize in intervention when prevocalic consonants
are targeted. Listening to stories that contain rhymes or alliterations is also a worthwhile activity. Not only do children
need to recognize these phonological units, they must also
be able to make comparisons with them.
In the Metaphon approach, Howell and Dean (1991) present two ways to increase children’s awareness about the
information they need to effect change in their phonological
systems: conceptualizing the features of the target pattern
and providing feature awareness feedback. Conceptualization
of the features of the target pattern increases the saliency
of information needed for a revised pattern to be formed.
Examples of this technique include using a giraffe to emphasize velar sounds, long items to focus on continuancy, and a
caboose to focus on a closed syllable shape. (See Harbers
et al., 1999, or Howell & Dean, 1991, for more examples.)
For patterns that appear to be more difficult for a preschool
child to produce (e.g., velars and liquids), an emphasis on
awareness of the features of patterns through conceptual
activities may be needed in the early stages of intervention
(rather than a focus on production) to support the child’s
reorganization of feature knowledge (Harbers et al.). Providing feature awareness feedback is a means to reinforce the
salient information, as well as to promote self-monitoring.
This type of feedback focuses the child’s attention on the key
information needed to assist them in improving production
and awareness. Awareness feedback draws attention to the
information in an explicit way. Because it also promotes self-monitoring, awareness feedback integrated into production
activities is an easy technique to scaffold awareness while
supporting production.
In addition to the Metaphon techniques, other techniques
can promote awareness of target patterns. Using clarification
requests facilitates reflection on the form of the word. These
requests can begin by being direct (e.g., Did you say that
word with two sounds?) and move to become more general
(e.g., What did you say?). To promote awareness, these requests need to be made after correct, as well as incorrect,
attempts at production. Young children learn to respond to the
adult’s intonation, rather than to think about how a word was
produced. Print should also be used to support awareness and production, and it exposes the preschooler to future links between speech and print. Examples include changing the letter “l” into an arrow pointing upward to remind the child of tongue placement, and tracing the letters representing fricatives in a continuous flow to emphasize continuancy.
Speech-language pathologists must also support and extend
the information teachers and parents have about the link between speech and print. Catts (1991) discussed the unique
position speech-language pathologists have in literacy instruction because of their training. Information about sound production (i.e., place, manner, and voicing) can greatly enhance
teacher instruction in sound-letter association. If teachers
have a better understanding of the relation between spoken
and written language impairments, they become better able to
assist children with phonological impairments during reading
instruction. By supporting and extending the information base
of parents, speech-language pathologists can assist them in
advocating for their child. When well informed, families can be
of great assistance in prevention activities by participating with
their child in sound play, joint book reading, and phonological
awareness activities.
Conclusion
This article has focused attention on the important role of
phonological awareness for intelligibility and literacy. Facilitating the emergence of phonemic awareness, along with
speech intelligibility, in children with phonological impairments
is important to support their success in literacy skills. The
information gained from assessing children’s phonological
awareness skills before, during, and after intervention will
assist speech-language pathologists in identifying their role
in supporting the early literacy skills of the children with unintelligible speech. Intervention studies are needed to explore
this important connection for young children with phonological
impairments.
References
ASHA. (2001). Roles and responsibilities of speech-language
pathologists with respect to reading and writing in children
and adolescents (position statement, executive summary of
guidelines, technical report) ASHA Supplement, 21, 17-27.
Rockville, MD: Author.
Ball, E. (1993). Assessing phoneme awareness. Language, Speech,
and Hearing Services in Schools, 24, 130-139.
Bird, J. & Bishop, D. V. M. (1992). Perception and awareness of phonemes in phonologically impaired children. European Journal
of Disorders of Communication, 27, 289-311.
Bird, J., Bishop, D. V. M., & Freeman N.H. (1995). Phonological
awareness and literacy development in children with expressive phonological impairments. Journal of Speech and Hearing
Research, 38, 446-462.
Catts, H. (1991). Facilitating phonological awareness: Role of
speech-language pathologists. Language, Speech, and Hearing Services in Schools, 22, 196-203.
Catts, H. (1993). The relationship between speech-language impairments and reading disabilities. Journal of Speech and Hearing
Research, 36, 948-958.
Chaney, C. (1992). Language development, metalinguistic awareness, and emergent literacy skills of 3-year-old children. Applied
Psycholinguistics, 13, 485-514.
Clarke-Klein, S., & Hodson, B. (1995). A phonologically based
analysis of misspellings by third graders with disordered-phonology histories. Journal of Speech and Hearing Research,
38, 839-849.
Fey, M., Catts, H., & Larrivee, L. (1995). Preparing preschoolers for
the academic and social challenges of school. In M. Fey, J.
Windsor, & S. Warren (Eds.), Language intervention: Preschool
through the elementary years (pp. 3-28). Baltimore, MD: Paul
H. Brookes.
Fowler, A. (1991). How early phonological development might set
the stage for phoneme awareness. In S. Brady & D. Shankweiler (Eds.), Phonological processes in literacy (pp. 99-117).
Hillsdale, NJ: Lawrence Erlbaum Associates.
Robertson, C., & Salter, W. (1997). The phonological awareness test.
East Moline, IL: Linguisystems.
Smith, C., & Tager-Flusberg, H. (1982). Metalinguistic awareness
and language development. Journal of Experimental Child
Psychology, 24, 449-468.
Stackhouse, J. (1985). Segmentation, speech, and spelling difficulties.
In M. Snowling (Ed.), Children’s written language difficulties
(pp. 96-115). Windsor, England: Nfer-Nelson.
Stackhouse, J. (1997). Phonological awareness: Connecting speech
and literacy problems. In B. Hodson & M.L. Edwards (Eds.),
Perspectives in applied phonology (pp. 157-196). Gaithersburg, MD: Aspen.
Gillon, G. (2000). The efficacy of phonological awareness intervention for children with spoken language impairment. Language,
Speech, and Hearing Services in Schools, 31, 126-141.
Studdert-Kennedy, M. (1987). The phoneme as a perceptuomotor
structure. In A. Allport, D. Mackay, W Prinz, & E. Scheerer
(Eds.), Language perception and production (pp. 67-84). New
York: Academic Press.
Goswami, U. (2001). Early phonological development and the
acquisition of literacy. In S. Neumann & D. Dickinson (Eds.),
Handbook of early literacy research (pp. 111-125). New York:
Guilford Press.
Torgesen, J., & Mathes, P. (2000). A basic guide to understanding,
assessing, and teaching phonological awareness. Austin,
TX: Pro-Ed.
Grunwell, P. (1982). Clinical phonology. Rockville, MD: Aspen.
Wagner, R., Torgesen, J., & Rashotte, C. (1999). The comprehensive
test of phonological processing. Austin, TX: Pro-Ed.
Harbers, H., Paden, E., & Halle, J. (1999). Phonological awareness
and production: Changes during intervention. Language,
Speech, and Hearing Services in Schools, 30, 50-60.
Webster, P., & Plante, A. (1992). Effects of phonological impairment
on word, syllable, and phoneme segmentation. Language,
Speech, and Hearing Services in Schools, 23, 176-182.
Hodson, B. (1994). Helping individuals become intelligible, literate,
and articulate: The role of phonology. Topics in Language
Disorders, 14(2), 1-16.
Hodson, B., & Paden, E. (1991). Targeting intelligible speech (2nd
ed.). Austin, TX: Pro-Ed.
Howell, J., & Dean, E. (1991). Treating phonological disorders in children: Metaphon-theory to practice. San Diego, CA: Singular.
Jenkins, R., & Bowen, L. (1994). Facilitating development of preliterate children’s phonological abilities. Topics in Language
Disorders, 14(2), 26-39.
Justice, L. M., Invernizzi, M. A., & Meier, J. D. (2002). Designing and
implementing an early literacy screening protocol: Suggestions
for the speech-language pathologist. Language, Speech, and
Hearing Services in Schools, 33, 84-101.
Larrivee, L., & Catts, H. (1999). Early reading achievement in children
with expressive phonological disorders. American Journal of
Speech Language Pathology, 8, 118-128.
Lewis, B., & Freebairn, L. (1992). Residual effects of preschool phonology disorder in grade school, adolescence, and adulthood.
Journal of Speech and Hearing Research, 35, 819-831.
Mann, V., Tobin, D., & Wilson, R. (1987). Measuring the causes
and consequences of phonological awareness through the
invented spelling of kindergarten children. Merrill-Palmer
Quarterly, 33, 365-393.
Morris, D. (1992). What constitutes at-risk: Screening children for
first-grade reading intervention. In W. Secord & J. Damico
(Eds.), Best practices in school speech-language pathology
(pp. 43-51). Orlando, FL: Harcourt, Brace, Jovanovich.
Richgels, D. (2001). Invented spelling, phonemic awareness, and
reading and writing instruction. In S. Neumann & D. Dickinson
(Eds.), Handbook of early literacy research (pp. 142-155). New
York: Guilford Press.
Speech Sound Disorders and Treatment Outcomes
Nonlinear Phonology: Application and Outcomes Evaluation
Barbara Handford Bernhardt
School of Audiology and Speech Sciences, University of British Columbia
Vancouver, BC, Canada
Phonological theories are constructed to account for speech
sound inventories and alternation patterns within and across
languages. In the clinical realm, such theories have provided
useful frameworks for analysis of children’s speech patterns
and selection of goals for treatment (e.g., Barlow & Gierut,
1999; Bernhardt & Stemberger, 1998, 2000; Edwards & Shriberg, 1983; Gierut, 1998; Grunwell, 1985; Hodson & Paden,
1991; Ingram, 1981). Phonological theories have sometimes
suggested specific therapeutic approaches, such as the
cycles approach (Hodson & Paden, 1991), paired contrasts
(Elbert & Gierut, 1986), metaphonological awareness (Dean
& Howell, 1986), and syllable structure interventions (Bernhardt,
1994). Advances in phonological theories provide new clinical opportunities for phonological intervention. In the 1990s,
studies were conducted that applied nonlinear phonological
theories to intervention (Bernhardt, 1990, 1992; Bernhardt &
Stemberger, 1998, 2000; Edwards, 1995; Major & Bernhardt,
1998; Von Bremen, 1990). Detailed descriptions of clinical
applications of nonlinear phonology are available elsewhere
(e.g., Bernhardt, 1992; Bernhardt & Gilbert, 1992; Bernhardt
& Holdgrafer, 2001a, 2001b; Bernhardt & Stemberger, 1998,
2000; Bernhardt & Stoel-Gammon, 1994). The first section
of this article includes an overview of major concepts and applications of those concepts in the intervention studies. The second section describes the studies briefly and presents highlights of the results.
Figure 1. The phonological hierarchy: From the phrase to the feature (phrase > word > foot > syllable > onset/rime > mora > segments (C, V) > root node with manner features > laryngeal and place features, e.g., [Labial] > [+labiodental], as in /f/ of cuff).
Application 1: Phonological Hierarchy
Nonlinear phonological theory acknowledges sequential (or
linear) phonological patterns (e.g., that nasal consonants in
English have the same place of articulation as the following
stops: ruMBa, baND, taNGo), but emphasizes the hierarchical (nonlinear) nature of phonological form and the effects of
that hierarchy on phonological patterns. The major division
in the hierarchy is between phrase and word structure on the one hand and segments (speech sounds) and features on the other (see Figure 1).
In the studies applying nonlinear theories to intervention,
the concept of hierarchy was paramount. Both structural
targets and segmental and feature targets were included for
each child, in accordance with the major division within the
hierarchy.
Application 2: Syllable and Word Structure
The highest level of the structural division of the hierarchy is
the phrase, a grouping of words with different levels of phrasal
stress. Words are composed of feet, which are composed
of syllable groupings reflecting relative stress. Syllables are
subdivided into onsets (consonants occurring before the
vowel) and rhymes (the stress-bearing units, or moras, and
any additional consonants).
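To make the structural division concrete, the sketch below (purely illustrative, and not the analysis software used in the studies) represents a word as feet, syllables, and onset/rime constituents; the class and field names are assumptions for exposition only.

```python
# A minimal illustrative sketch of the structural half of the hierarchy in Figure 1.
# Class and field names are assumptions; this is not the software used in the studies.
from dataclasses import dataclass
from typing import List

@dataclass
class Syllable:
    onset: List[str]        # consonants occurring before the vowel
    rime: List[str]         # the stress-bearing unit (mora/vowel) plus any final consonants
    stressed: bool = False

@dataclass
class Foot:
    syllables: List[Syllable]   # syllable grouping reflecting relative stress

@dataclass
class Word:
    feet: List[Foot]

# "cup" as a stressed CVC monosyllable: onset /k/, rime /ʌ/ + /p/
cup = Word(feet=[Foot(syllables=[Syllable(onset=["k"], rime=["ʌ", "p"], stressed=True)])])
print(cup)
```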
In the intervention studies, each child had two types of structural goals. One type was new structural form: new word
shapes (e.g., CVC, CCV, CVCV), stress patterns, or word
lengths. The other type involved strengthening of existing word
positions that were minimally developed in terms of features
and segments present in the inventory. For example, a child
may have used fricatives only in word-final position, even though other consonants appeared word initially. The goal in that case was to strengthen word-initial position by introducing fricatives there.
For structural targets in the intervention studies, treatment
strategies were developed that focused on either onsets and
rhymes, or moras (Bernhardt, 1994; Bernhardt & Stemberger,
2000).
Application 3: Feature Hierarchy
The segmental portion of the hierarchy includes segments
and features. The features are also considered to have hierarchical organization (see Bernhardt & Stemberger, 1998,
2000). Manner features are typically viewed as “higher” than
place features. Within each category, there is also hierarchical organization. For example, /p/, /b/, /m/, /w/, /f/, /v/ are all
[labial], but /f/ and /v/ have an additional dental component.
This [labiodental] feature is a sub-category of [labial]. Such
details of pronunciation are at the lowest level of the phonological hierarchy. In the segmental/feature condition of the
earlier nonlinear intervention studies, rate of acquisition for
higher and lower features was compared (Bernhardt, 1990;
Von Bremen, 1990).
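As a rough illustration of what "higher" and "lower" mean here, the sketch below encodes a simplified feature tree consistent with the description above and reports how deep a given feature sits. The tree is a simplified assumption, not a complete feature geometry.

```python
# Simplified, assumed feature tree: manner (root) above laryngeal and place features,
# with [labiodental] dependent on [Labial]. Illustrative only.
FEATURE_TREE = {
    "root (manner)": {
        "laryngeal": {"[voiced]": {}},
        "place": {"[Labial]": {"[labiodental]": {}}, "[Coronal]": {}, "[Dorsal]": {}},
    }
}

def depth(feature, tree=FEATURE_TREE, level=0):
    """Return how far down the hierarchy a feature sits (0 = root level), or None."""
    for name, children in tree.items():
        if name == feature:
            return level
        found = depth(feature, children, level + 1)
        if found is not None:
            return found
    return None

# [labiodental] sits one level below [Labial], i.e., lower in the hierarchy.
print(depth("[Labial]"), depth("[labiodental]"))   # 2 3
```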
Features and segments show other relationships of dominance. A particular feature or feature value may be considered
the default because it is more frequent, less complex, or less marked than the contrasting nondefault. In
English, there are more [-labiodental] labials (/p/, /b/, /m/, /w/)
than [+labiodental] labials (/f/ and /v/). Thus, [-labiodental] is
the default, and [+labiodental] the nondefault. Segments have
different combinations of features. The fewer the number of
nondefault features, the less complex the segment. Across
languages, /t/ is considered to be an unmarked, or default,
consonant (Bernhardt & Stemberger, 1998, 2000). Stops are
the most frequent manner category for consonants across
languages; default manner features, therefore, include [-nasal], [-continuant], [-sonorant], and [-lateral]. The alveolar features ([coronal] and [+anterior]) are highly frequent consonant features across languages, and, thus, those features are place
defaults. In terms of laryngeal features, /t/ is [-voiced]. Among
non-sonorant consonants, [-voiced] consonants are more
common than [+voiced] consonants across languages; therefore, [-voiced] is the default. The consonant /v/ has several
nondefault features: [+continuant] (fricative), [labial], [+labiodental], [+voiced], and thus /v/ is more complex than /t/.
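A back-of-the-envelope way to compare segment complexity, as described above, is simply to count nondefault feature values. The sketch below does this for /t/ and /v/ using the defaults named in the text; the feature sets themselves are simplified assumptions.

```python
# Count nondefault feature values per segment (simplified feature sets; illustrative only).
DEFAULTS = {
    "continuant": "-", "nasal": "-", "lateral": "-",  # stop manner defaults
    "voiced": "-",                                    # laryngeal default for obstruents
    "place": "coronal", "labiodental": "-",           # place defaults
}

SEGMENTS = {
    "t": {"continuant": "-", "nasal": "-", "lateral": "-", "voiced": "-",
          "place": "coronal", "labiodental": "-"},
    "v": {"continuant": "+", "nasal": "-", "lateral": "-", "voiced": "+",
          "place": "labial", "labiodental": "+"},
}

def nondefault_count(segment):
    """Number of feature values that differ from the language defaults."""
    return sum(1 for feat, val in SEGMENTS[segment].items() if val != DEFAULTS[feat])

for seg in ("t", "v"):
    print(seg, nondefault_count(seg))   # t -> 0 nondefaults, v -> 4 (more complex)
```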
In early development, phonological systems have more
default features than nondefault features. Thus, throughout
the intervention studies, nondefaults were the usual feature
targets of intervention. In the later studies, the rate of acquisition for new individual (nondefault) features and new feature
combinations with nondefaults was compared. (Note that a
child’s default values may be different from those of the adult
language. For example, the child may use velar rather than
alveolar as a default. In this case, the adult default feature
would be the treatment target.)
Application 4: Interaction of Structural and
Feature Constraints
Phonological patterns occur because of constraints on phonological form at the various levels. Assume a child’s consonant
inventory shows absence of word-final [f] and [v]. Whether /f/
and /v/ are deleted or replaced by other consonants depends
on other aspects of the phonological system. For example,
if syllable-final consonants (codas) are generally impossible,
then word-final /f/ and /v/ will not be produced, even if they
are present word initially. Negative constraints on syllable
structure override feature requirements. If positive structural
constraints require production of codas, but a child cannot
articulate fricatives in that position, then word-final fricatives
will be replaced by other consonants. Positive constraints
on structure override segmental accuracy (top-down effects
in the phonological hierarchy). If accurate pronunciation of
target segments is more important than preservation of syllable and word structure, /f/ and /v/ may be deleted in coda,
even if the child produces other word-final consonants. In this
case, faithfulness to segmental content overrides structural
integrity of the syllable (a kind of bottom-up effect in the
phonological hierarchy).
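The decision logic in the example above can be summarized in a small sketch. The flag names and the particular substitute consonants are assumptions; the point is only the ordering of effects (structure over features, or segmental faithfulness over structure).

```python
# Simplified decision sketch for a word-final /f/ or /v/ target; not a full constraint grammar.
def realize_final_fricative(target, codas_allowed, final_fricatives_possible,
                            segments_outrank_structure):
    if not codas_allowed:
        return ""          # negative structural constraint wins: the coda is deleted
    if final_fricatives_possible:
        return target      # no conflict: produced accurately
    if segments_outrank_structure:
        return ""          # faithfulness to segmental content: delete rather than substitute
    # positive structural constraint wins: keep the coda but replace the fricative
    return {"f": "p", "v": "b"}[target]   # substitute consonants are illustrative guesses

print(realize_final_fricative("f", codas_allowed=True,
                              final_fricatives_possible=False,
                              segments_outrank_structure=False))   # -> "p"
```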
In summary, there are constraints requiring production of
segmental content (positive, or faithfulness constraints) and
some preventing production of phonetic content (negative or
markedness constraints), both in terms of higher structural
levels of phonological form and in terms of the lower feature
levels. Describing a child’s phonological system entails determination of both the positive constraints (the strengths)
and the negative or inhibitory constraints (the developmental
needs). In setting up treatment goals and strategies in the
intervention studies, strengths of the system were used to
address the needs. For example, new features were generally
targeted in existing word structures; new word structures were
generally targeted using existing features and segments.
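In code-sketch form (with invented inventories), the goal-setting principle just described amounts to crossing new targets with existing strengths:

```python
# Illustrative only: pair new (nondefault) targets with established strengths.
existing_word_shapes = ["CV", "CVC"]                 # structural strengths (invented)
new_word_shapes = ["CVCV", "CCVC"]                   # structural needs
existing_segments = ["p", "b", "m", "t", "d", "n"]   # segmental strengths
new_segments = ["f", "s"]                            # segments carrying targeted nondefault features

# New features/segments practiced inside familiar word shapes...
feature_goals = [(seg, shape) for seg in new_segments for shape in existing_word_shapes]
# ...and new word shapes built from familiar segments.
structure_goals = [(shape, existing_segments) for shape in new_word_shapes]

print(feature_goals)
print(structure_goals)
```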
General Methodology of the Nonlinear
Intervention Studies
The intervention studies were conducted with preschool
children with moderate to severe phonological disorders
who spoke Canadian English. Taking production of the highly
frequent English monosyllable CVC as a marker for severity,
about half of a pool of 60 subjects showed less than 50% accuracy in producing CVC forms (deleting one or both consonants), indicating a severe impairment. Some of the children also had
delays in language production, but language comprehension
and hearing were within normal limits. No major physical
impairments were observable.
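As a rough sketch of how such a CVC severity marker might be computed from probe data (the transcriptions below are invented and the vowel set is simplified):

```python
# Percent of CVC probe targets produced with both consonants (no deletion). Illustrative only.
VOWELS = set("aeiouɑɛɪʊʌæ")   # simplified vowel inventory

def cv_shape(form):
    return "".join("V" if ch in VOWELS else "C" for ch in form)

def cvc_accuracy(productions):
    """productions maps each CVC target to the child's production (IPA-ish strings)."""
    correct = sum(1 for prod in productions.values() if cv_shape(prod) == "CVC")
    return 100 * correct / len(productions)

sample = {"kʌp": "kʌ", "dɑg": "dɑ", "bɛd": "bɛd", "sʌn": "tʌn"}   # invented data
print(f"{cvc_accuracy(sample):.0f}% CVC accuracy")   # 50% here; <50% counted as severe
```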
Studies differed slightly in number of weeks (16-18) and sessions per week (two versus three), but each child had at least
two blocks of treatment lasting 6-8 weeks each. Children
received individual treatment except in one group study. For
most studies, each child had both word structure and feature
target conditions within each 6-8 week treatment block, with
two sets of each target type within each condition, and three
to four sessions per individual target. If a child mastered any
targets of the first treatment block, new targets for that condition were introduced in block 2. Where possible, the order of goals and conditions was counterbalanced across subjects.
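For readers who find the design easier to follow as a schedule, the sketch below lays out one block under assumed parameters (two conditions, two target sets each, three sessions per target) and alternates condition order across children; the names and numbers are illustrative, not the studies' actual assignments.

```python
# Hypothetical layout of one treatment block with counterbalanced condition order.
from itertools import cycle

def block_schedule(condition_order, targets_per_condition=2, sessions_per_target=3):
    """Ordered (condition, target, session) tuples for one 6-8 week block."""
    return [(condition, target, session)
            for condition in condition_order
            for target in range(1, targets_per_condition + 1)
            for session in range(1, sessions_per_target + 1)]

children = ["child_1", "child_2", "child_3", "child_4"]
orders = cycle([("structure", "segments/features"), ("segments/features", "structure")])
for child, order in zip(children, orders):
    print(child, order, f"{len(block_schedule(order))} sessions in block 1")   # 12 each here
```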
There were both small-n studies conducted by university
researchers (e.g., Bernhardt, 1990; Edwards, 1995; Von
Bremen, 1990) and larger group studies conducted by
speech-language pathologists in community health settings
and overseen by university researchers (Major & Bernhardt,
1998; Bernhardt, Major, & Brooke, 2003). For all studies, the
author was ultimately responsible for clinician training, goal
selection, and program planning, although in the later community-based studies, Major and S. Edwards contributed to
clinician training, and clinicians did the analyses and worked
out a tentative intervention plan.
Clinicians, in consultation with university researchers, developed intervention activities. Awareness activities were
included for all targets in all treatment sessions. For syllable
and word structure, activities also focused on either onsets
and rhymes or moras within syllables, using imitation and
awareness. Segmental and feature targets were addressed
with either awareness only or awareness plus oral-motor facilitation and imitation. Activities were play-based, but included
opportunities for drill. Children were given tangible rewards
for participation and the use of their “new way of talking” in
everyday conversation. Parents or caregivers received a
training session, participated in all sessions with the clinician
and child, and conducted home activities.
Speech samples were audio-taped before each treatment
condition and at the end of the project using a Marantz PMD-430 audiotape recorder and a PZM microphone. The
main probe samples were based on spontaneous single
word elicitations from a 164-word list (Bernhardt, 1990), or,
in the later studies, the Photo Articulation Test (Pendergast,
Dickey, Selmar, & Soder, 1984). In addition, connected speech
samples were used to confirm or amplify patterns observed
in the single word samples. Samples were transcribed by
master’s level students in speech-language pathology in the
university laboratory. The author or project staff performed
reliability checks on the transcriptions. Where differences
in transcriptions were observed, consensus transcriptions
were developed. Phonological analyses were done with the
support of computer software developed during the project
(Bernhardt, Cam, & Major, 1994).
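A simple point-by-point agreement check of the kind that typically precedes consensus transcription might look like the sketch below; the data are hypothetical, and the studies' actual reliability procedure is not specified in this much detail here.

```python
# Point-by-point transcription agreement between two transcribers. Illustrative only.
def percent_agreement(transcriber_a, transcriber_b):
    matches = sum(a == b for a, b in zip(transcriber_a, transcriber_b))
    return 100 * matches / max(len(transcriber_a), len(transcriber_b))

a = ["k", "ʌ", "p"]   # transcriber A, one word
b = ["k", "ʌ", "b"]   # transcriber B, same word
print(f"{percent_agreement(a, b):.0f}% agreement")   # 67%; disagreements resolved by consensus
```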
Highlights of the Results
Applications 1 and 4: Phonological Hierarchy, and
Interactive Constraints
Results showed different rates of learning for the various hierarchy-based conditions across participants. This differential
rate supports the concept of levels and suggests that the
various levels of the hierarchy have individual developmental
paths. One of the unexpected outcomes of variable learning
rates for the levels was the positive impact on the mood of the
participants. By examining all levels of the hierarchy, it was
possible to demonstrate to families a child's strengths in addition to his or her needs. This launched the programs on
a positive note. The fact that children tended to improve on
at least one target in the first treatment block helped maintain
momentum and enthusiasm.
Applications 2 and 4: Syllable and Word Structure, and
Interactive Constraints
Across the studies, most children showed significantly faster
rates of change for word and syllable structure targets than
for segmental and feature targets. In Bernhardt’s (1990) study
with six participants, a greater proportional gain in accuracy
was noted for word structure targets compared with segmental
targets, especially in the first block of treatment (Wilcoxon-Mann-Whitney test: W = 108, p = .015). In a group study
with 19 children (see Major & Bernhardt, 1998), there was an
average gain of 24% in word shape accuracy, compared with
a 13% gain in consonant accuracy (excluding deletions), after
12 weeks of intervention (t = -2.11, p = .042). In Von
Bremen’s (1990) study with identical twins, the twin assigned
to the word structure condition mastered both his targets and
those of his brother (assigned to the segmental condition),
who took longer to achieve both types of targets.
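To clarify the two probe measures being compared (word shape accuracy versus consonant accuracy excluding deletions), here is a rough sketch with invented transcriptions and a deliberately naive left-to-right consonant alignment; it is not the scoring software used in the studies.

```python
# Word shape accuracy (CV skeleton match) vs. consonant accuracy over attempted consonants.
VOWELS = set("aeiouɑɛɪʊʌæ")   # simplified vowel inventory

def cv_shape(form):
    return "".join("V" if ch in VOWELS else "C" for ch in form)

def word_shape_accuracy(pairs):
    return 100 * sum(cv_shape(t) == cv_shape(p) for t, p in pairs) / len(pairs)

def consonant_accuracy(pairs):
    correct = attempted = 0
    for target, prod in pairs:
        t_cons = [ch for ch in target if ch not in VOWELS]
        p_cons = [ch for ch in prod if ch not in VOWELS]
        for t_ch, p_ch in zip(t_cons, p_cons):   # naive alignment; deleted consonants skipped
            attempted += 1
            correct += (t_ch == p_ch)
    return 100 * correct / attempted if attempted else 0.0

probe = [("sʌn", "tʌn"), ("kʌp", "kʌp"), ("fɪʃ", "pɪ")]   # invented (target, production) pairs
print(word_shape_accuracy(probe), consonant_accuracy(probe))   # about 66.7 and 60.0 here
```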
The difference between outcomes for conditions may reflect the dominant status of the word structure levels in the hierarchy. Structure involves larger units and is more general than precise articulation of segments. The fact that new
word structures were targeted using existing segments (Application 4) may have facilitated the accelerated change. The
word structure changes were beneficial in several ways: (a)
additional word shapes and frameworks became available
for the new feature and segment targets, (b) intelligibility was
enhanced at the end of the first treatment block because of
reduction of deletions in particular, and (c) motivation was
enhanced early in the project.
Application 3: Feature Hierarchy
Establishment of new features and segments generally required two blocks of treatment. Detailed discussion of feature
outcomes is beyond the scope of this paper. However, in
Bernhardt (1990) and Von Bremen (1990) higher level features
showed faster gains than lower level features for seven out of
eight subjects. Manner and laryngeal features showed faster
gains than place features, as might be predicted from their
relative positions in the hierarchy. Because it was difficult to
determine relative height within the hierarchy for some targets, in subsequent studies, individual (nondefault) feature
development was contrasted with development of new feature
combinations with existing nondefaults. Preliminary analysis
does not appear to confirm a strong difference between the
two types of targets, although there were individual differences; further analysis is necessary.
The concept of default was useful in describing the phonological patterns observed and setting up goals for intervention.
The concept of “favorite sounds” or “sound preferences” has
been around since the 1970s (Edwards & Shriberg, 1983).
The notion of default is a theoretically based construct from
adult phonology that accounts for the appearance of favorite
sounds. Introducing various types of features at different
levels in the feature hierarchy also was useful in addressing
the breadth of the target phonological system and enhancing opportunities for generalization (evident in the children’s
productions). New pronunciations are challenging, and giving
the child several different targets to attempt within a given
time period (as with a cycles approach) provides more than
one opportunity for change.
Conclusions
Application of linguistic theories is challenging, because it
takes time to learn the theories and to design and test potential
applications. The application of nonlinear phonological theory
to phonological intervention benefited individuals in the studies and other children indirectly. Clinicians who participated
in the projects have reported continued application of the
principles and practices within their clinics. As theories advance, clinical practice can keep pace, in order to maximize
opportunities for success in intervention.
References
Barlow, J., & Gierut, J. (1999). Optimality theory in phonological acquisition. Journal of Speech, Language, and Hearing Research, 43, 1482-1498.
Bernhardt, B. (1990). Application of nonlinear phonological theory to intervention with six phonologically disordered children. Unpublished doctoral dissertation, University of British Columbia.
Bernhardt, B. (1992). The application of nonlinear phonological theory to intervention with one phonologically disordered child. Clinical Linguistics and Phonetics, 6, 283-316.
Bernhardt, B. (1994). Phonological intervention techniques for syllable and word structure development. Clinics in Communication Disorders, 4, 54-65.
Bernhardt, B., Cam, M., & Major, E. (1994). Speech.app. Unpublished computer software.
Bernhardt, B., & Gilbert, J. (1992). Applying linguistic theory to speech-language pathology: The case for nonlinear phonology. Clinical Linguistics and Phonetics, 6, 123-145.
Bernhardt, B., & Holdgrafer, G. (2001a). Beyond the basics I: The need for strategic sampling for in-depth phonological analysis. Language, Speech, and Hearing Services in Schools, 32, 18-27.
Bernhardt, B., & Holdgrafer, G. (2001b). Beyond the basics II: Supplemental sampling for in-depth phonological analysis. Language, Speech, and Hearing Services in Schools, 32, 28-37.
Bernhardt, B., Major, E., & Brooke, M. (2003). Comparative outcomes of two nonlinear intervention studies conducted by community health speech-language pathologists. Manuscript in preparation.
Bernhardt, B., & Stemberger, J. P. (1998). Handbook of phonological development: From the perspective of constraint-based nonlinear phonology. San Diego, CA: Academic.
Bernhardt, B., & Stemberger, J. P. (2000). Workbook in nonlinear phonology for clinical application. Austin, TX: Pro-Ed.
Bernhardt, B., & Stoel-Gammon, C. (1994). Nonlinear phonology: Introduction and clinical application. Journal of Speech and Hearing Research, 37, 811-820.
Dean, E., & Howell, J. (1986). Developing linguistic awareness: A theoretically based approach to phonological disorders. British Journal of Communication Disorders, 21, 223-238.
Edwards, M. L., & Shriberg, L. (1983). Phonology: Applications in communicative disorders. San Diego, CA: College-Hill.
Edwards, S. (1995). Optimal outcomes of nonlinear phonological intervention. Unpublished master's thesis, University of British Columbia.
Elbert, M., & Gierut, J. (1986). Handbook of clinical phonology. San Diego, CA: College-Hill.
Gierut, J. (1998). Treatment efficacy: Phonological disorders in children. Journal of Speech, Language, and Hearing Research, 41, 85-100.
Grunwell, P. (1985). Phonological assessment of child speech. San Diego, CA: College-Hill.
Hodson, B., & Paden, E. (1991). Targeting intelligible speech: A phonological approach to remediation (2nd ed.). Austin, TX: Pro-Ed.
Ingram, D. (1981). Procedures for the phonological analysis of children's language. Baltimore, MD: University Park.
Major, E., & Bernhardt, B. (1998). Metaphonological skills of children with phonological disorders before and after phonological and metaphonological intervention. International Journal of Language and Communication Disorders, 33, 413-444.
Pendergast, K., Dickey, S., Selmar, J., & Soder, A. (1994). Photo Articulation Test. Danville, IL: Interstate.
Von Bremen, V. (1990). The application of nonlinear phonological theory to intervention with phonologically delayed twins. Unpublished master's thesis, University of British Columbia.