COMD 3700 for Distance Education. This is lesson 9 on speech
audiometry. This is what we will be discussing for the next 2 lessons.
This lesson will cover pages 126-135 in Chapter 5 of your textbook.
The greatest complaint of most hearing impaired patients is that they
are having difficulty hearing and communicating with other people. I
have yet to hear of a person coming to the hearing clinic and saying
I'm having trouble hearing pure tones. As we have discussed, one of
the very first tests that is administered to a patient when they come to
an audiologist complaining of hearing loss is a pure tone audiogram, using pure tones as the stimulus. Once we have completed a pure
tone audiogram, we can make some judgments as to how well a
person is hearing and some of the difficulties they may be having.
However, the hearing loss depicted by the pure tone audiogram cannot fully reflect the degree of handicap in speech communication. So it is logical that tests of hearing function should also be performed with speech as the stimulus. Even though the pure tone audiogram gives us a great deal of information, it seems logical that we should also investigate a person's ability to deal with speech by testing with speech as the input to the individual. The information derived from
pure tone air conduction and bone conduction audiometric tests is
helpful yet speech audiometry is necessary to completely assess
basic auditory function. It is especially important for all of you
planning on becoming speech-language pathologists because you’ll
use the speech results in therapy planning and counseling with the
client and their family.
Just as we studied in pure tone audiometry, there are five factors that
you need to be aware of when performing speech audiometric
testing. They are: the test equipment, environment, the patient, the
clinician and the test procedure. We'll discuss more specific elements
to each of the categories.
So, first let’s discuss the equipment needed to perform
speech testing. Here we have a diagram of a speech
audiometer. This diagram is somewhat similar to a
previous diagram we looked at but the previous
diagram was of an audiometer that delivered pure tones
or was a pure tone audiometer. Here we have a speech
audiometer. We'll be dealing with speech as our input. If
you look to the left side on this diagram, you'll see three
different ways of introducing speech to the audiometer.
One is a microphone. Two would be a compact disk. On
this audiometer, they also have an accommodation for
tape recordings using cassette tapes. The compact disk
and tape recorder imply the material has been
professionally recorded. Most modern diagnostic audiometers use either CDs or built-in recorded tests for the speech stimulus. Another option is to use the microphone with your live voice as the input. There are some
parameters we must consider before we use a microphone for live speech input, which we will discuss later in the
lesson. The input is then fed through an amplifier. From the amplifier it goes to the attenuator, which is the hearing
level dial. On the pure tone audiometer, we had an attenuator, the decibel level dial; that is how we controlled the intensity of the pure tone as it left the audiometer and was delivered to the earphones or to a loudspeaker. With speech as the input, we have some considerations that are different from those with our pure tone
audiometer. The main consideration is the intensity or nature of the input through the amplifier of the audiometer,
live voice vs. recorded speech. Again, we will discuss these differences later. Once we are satisfied that we are
inputting properly to the speech audiometer, we can deliver that stimulus from the attenuator to a set of earphones
on the patient's head or go through an auxiliary amplifier to a loud speaker which is in the room with the patient and
deliver the stimulus to the patient by way of what we call sound field or SF. Sound field means they are sitting in a field of sound. That way, the patient hears with both ears. With earphones, you can test one ear (monaural) or both ears simultaneously (binaural). Now, although this diagram is just showing the speech components of the audiometer, all of this is actually combined with the pure tone components in the diagnostic audiometer. So there aren't separate speech and pure tone audiometers; they are combined into one unit. I just used this diagram for teaching purposes.
So, as I mentioned speech testing can be conducted
using monitored live voice (MLV) or recorded materials. If
you are testing using your own voice, then you have to
have a way of making sure that the patient is hearing the
speech at the level you want them to. Monitored live voice
refers to the fact that the audiologist is speaking into the
microphone, monitoring the live voice through the VU
meter and delivering the stimulus to the client. Let me give
you an example of what I'm talking about. If we go to our
attenuator, our hearing level dial, and set that dial at 35
decibels, and we put the earphones on an individual, the
expectation is the client will receive 35 dB of output.
But if you go to a microphone with a live person speaking,
they may deliver one word relatively loudly and a second
relatively softly. The output at the earphone will reflect that
difference in the input intensity levels. The testing needs
to be consistent, so we are hoping we can control the inputs so they don't have the variability of input and affect the
output with great variability. To help us with this dilemma, we are going to use a volume unit meter or VU meter,
which as we saw on the block diagram, is positioned between the amplifier and the attenuator. The VU meter will
indicate to us the volume or intensity of the input prior to the time it reaches the attenuator. A simple VU meter is just
a meter with a dial or stylus, like the one pictured on the left. As input comes through the meter, the stylus will rise and fall depending on the intensity of the input. So if a rather intense sound comes through the meter, the needle will swing to your right. If the input to the VU meter is soft, the needle will rise only slightly, staying on the left side of the meter. So let's go back over to our attenuator for a moment. Let's manually set the attenuator at 30
dB. Then we speak into the microphone. The assumption might be since our attenuator is set at 30, the output to
the patient or earphone will be 30 dB. But that's not a correct assumption. If we speak loudly into that microphone
and spike our VU meter over to plus 5, we have to add that plus 5 from the VU meter to the attenuator setting. So, we would be delivering 35 dB to the earphone. By the same token, if we go back to the microphone and speak too softly, and our VU meter only rises to minus 5, we have to subtract 5 from the attenuator setting, and the patient would not be receiving 30 dB but 25 dB. When using the microphone, you have to monitor yourself and make sure your speech
does not get too intense or too soft. The task of the clinician with live voice is to visually monitor the VU meter and
speak so the complex input of their voice peaks on an average close to zero on a VU meter. But the only time the
attenuator is giving a true indicator of the output is when the VU meter is standing at or pointing to zero. The other
option for delivering the speech stimuli is to use recorded materials like the CD pictured on the right. Using
prerecorded compact disks or speech material built into the audiometer can be handy in that the recorded material
can be calibrated in a sense. The recordings come with a band of a pure tone, usually a 1000 Hz tone. So you'd put the CD in and play the calibration tone. You listen to the tone and adjust the input level until the VU meter reads zero. Then you know whatever comes from the CD and is delivered through the attenuator to the earphone is in calibration. In other words, when the attenuator dial is set at 35, the assumption is the patient will receive a 35 dB stimulus. That can be done with
recorded material but not with the microphone. One other note in regards to the test equipment: Do you recall our
discussion of dB SPL and dB HL? Recall that the difference between dB SPL and dB HL at 1000 Hz was 7.5 dB; that 7.5 dB represents the normal hearing threshold. Therefore, we said that 7.5 dB SPL was equal to zero dB HL. In speech audiometry, for a person with normal hearing to repeat back correctly fifty percent of the speech material, it's
necessary to have the stimulus delivered to the patient at 20 dB SPL. Another way of saying this is people with
normal hearing need 20 dB of sound pressure level to be able to repeat back 50% of the speech material delivered
to them. So consequently, we will call 20 dB SPL 0 dB hearing level for speech. What this means is if you take your
speech audiometer and you set the hearing level dial at zero, you are actually going to deliver a 20 dB SPL stimulus
through the audiometer with that setting. So 0 dB on the audiometer dial will actually be a 20 dB sound pressure
level.
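The two pieces of arithmetic in this section, the VU-meter correction and the 20 dB SPL reference for 0 dB HL speech, can be sketched in a few lines of Python. The function names here are purely illustrative, not part of any audiometry software:

```python
# 0 dB HL for speech corresponds to 20 dB SPL, as described above.
SPEECH_0_HL_IN_SPL = 20

def delivered_level_hl(attenuator_db, vu_reading):
    """Level actually delivered to the earphone, in dB HL.

    vu_reading is the VU meter's deviation from zero: +5 means the
    talker peaked 5 dB too loud, -5 means 5 dB too soft.
    """
    return attenuator_db + vu_reading

def hl_to_spl_speech(level_hl):
    """Convert a speech dB HL dial setting to the dB SPL it produces."""
    return level_hl + SPEECH_0_HL_IN_SPL

# The examples from the lesson:
print(delivered_level_hl(30, +5))  # 35 dB HL: talker peaked at +5 VU
print(delivered_level_hl(30, -5))  # 25 dB HL: talker peaked at -5 VU
print(hl_to_spl_speech(0))         # 20 dB SPL from a 0 dB HL dial setting
```

This is why the attenuator dial is only a true indicator of the output when the VU meter is peaking at zero.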
Let's talk for just a moment about the test environment. We can use
monitored live voice in our testing or prerecorded materials. With
monitored live voice, you really need a two-room suite in order to be
acoustically separated from the patient. Otherwise the patient hears your voice 'live', even when the voice delivered through the headphones is too quiet to hear, making the results invalid. Doesn't that make sense? How can you sit with a client in the same room and say words into a microphone? They might hear your speech through the microphone, but there is also the potential that they hear you directly
without going through the speech audiometer. A disadvantage of live
voice testing is that you do not always speak the words the same way
or with the same intensity. Different dialects or accents also affect the
results. Prerecorded materials can be delivered in a one or two room
suite. This is one advantage of using prerecorded materials. Another
advantage of using prerecorded materials is the consistency of the examiner's voice and mannerisms. You can take a CD and use it week after week, and every person experiences the same examiner's voice and mannerisms. That would
seem to be the correct way to manage the hearing testing
environment. However most clinicians use monitored live voice
because of the flexibility and the time it saves. It’s a flexible way of
testing because the client may have special needs where you have to
stop, start and take pauses between the different speech stimuli.
However, you can also use the pause button on the CD if using
recorded speech. Using live voice testing means you might be using
different voices, different people, different mannerisms. But in spite
of this in most clinics, speech testing is done through monitored live
voice even though you might think the recorded material would be
the way to go.
The next factor is the patient. There are different ways that the
patient can respond to the examiner. Most of the time the patient will
respond with a spoken response. For example, the clinician will
instruct them to say a word when they hear it and the patient will
repeat it back to the clinician. The advantages of a spoken response
between the clinician and the patient are that it's faster than other
response modes and there's a rapport maintained between clinician
and client. The disadvantage of the spoken response is the client may have poor or unintelligible speech; they may be hearing the stimulus correctly but repeating a distorted version of what they hear back to the clinician. So another option is to use written responses from the patient.
Advantages of written responses would be patients' poor speech would
be eliminated and it provides a permanent record. However, there are a
good number of patients who can't write or respond as well to written
material as they can to speech. So we are handicapped again by that
factor. As far as providing a permanent record, that's a bit of a stretch of
the imagination I think. The disadvantage is it slows down the testing
procedure and takes time to score the results. I've seen very few
situations where written responses were used in speech testing.
The clinician’s role is to make sure that the patient understands
exactly what is going to happen and how they should respond. Most
of the instructions for adults are given through the microphone of the
audiometer and heard in the headphones of the patient. If proper
instructions are not given the patient may not respond correctly and
the testing results will be inaccurate. The seating of the patient is
more important in speech testing than pure tone testing, especially if
live voice is being used. If the patient can see the examiner saying the
words they may be getting speech cues from the face and lips of the
clinician. In order to make sure that you are only testing the hearing
ability of the patient, they need to be turned away from the clinician.
That is why it is recommended that the patient sits at a right angle to
the audiometer.
In this course we will be discussing five different speech measures. The first
will be speech detection threshold (SDT). Some people refer to it as speech
awareness threshold (SAT). So we have two abbreviations for this particular
speech test. Another test we'll consider is a speech recognition threshold. The
abbreviation for speech recognition threshold is SRT. Another test is most
comfortable loudness level or MCL. A fourth test of speech we'll use is
uncomfortable loudness level abbreviated UCL. Finally, we will discuss word
recognition score, WRS.
Let's start with speech detection threshold, SDT. Speech awareness threshold (SAT) is a commonly used synonym; we interchange the two terms quite a bit. Speech detection
threshold is the lowest level in dB that a person can just detect the presence
of speech and identify it as speech 50% of the time. We need to look at this
and be careful. We did not say it's the lowest level in dB that a person can
understand the speech being delivered. It's the lowest level where they can
just detect it as speech and identify it as such. The stimulus for this particular
test is usually sentences or connected speech, which is referred to as cold
running speech. Let me talk about cold running speech. What are the cold
and the running part of this? Cold means it's a statement that has no
particular meaning or interest to you. Therefore a good stimulus might be to
read the Declaration of Independence or something that is of no interest to
the person. They're just listening to you talk. The running part of this
statement is the fact that we want the stimulus to be continuous. We don't
want a lot of breaks. We want someone to say, "Mary had a little lamb. Its fleece was white as snow. Everywhere that Mary went the lamb was sure to
go." See how that flows, and there aren't a lot of breaks in there? It's
monotonous, and of no consequence to them. People don't need to pay a lot
of attention to it. That's the nature of cold running speech. SDT is helpful
when working with uncooperative children. The child can be in the booth and
the stimulus can be raised from inaudibility to a level of audibility to the child.
The clinician is watching to see if the child will stop what he's doing and pay
attention to the speech that he may hear. This is not a popular measure for standard testing because it yields little relevant information. It's used mostly when you can't get any other results. It doesn't give us a lot of information
about the patient or how they are functioning. So therefore, it's not used
commonly.
The next test, the speech recognition threshold or SRT is used in
almost every audiometric examination. So we will go into a lot more
detail with the SRT. The SRT is defined as the lowest level in dB at which a person can correctly identify the speech material 50% of the time. So this is a little different than awareness because now the person has to understand the material that's coming in and repeat it back correctly 50% of the time. The purpose is to establish the lowest
level that the patient can hear and understand speech. This test is
also known as the speech reception threshold test. Hudgins and
others investigated in 1947 what the best words would be to use for
SRT testing. They determined that you can't just go out and grab any
type of word or speech material. If you're going to test a person's
speech criteria, you should have a certain type of word. They found
that the words should satisfy four criteria. They should have
familiarity. This is not a test of a person's intelligence or anything like
that. If needed, you can go into the booth, have the person look at
these words, and make sure they're familiar with each word. Also, the
words should be phonetically dissimilar. In other words, we don't want the person left wondering whether we said one word and they heard another; the words should be distinctly different from one another phonetically. They should be a normal sampling of English speech. And
they should have homogeneity with respect to audibility. In other
words, we expect each word to have the same audibility quotient or
factor as the next word that comes along so you don't have a lot of
words on the list that have different audibility factors. This is to ensure
that they are all understood and heard equally. That's the
homogeneity part of it. So Hudgins et al determined 84 words that fit these criteria.
In 1952, Hirsh et al reduced the original 84 words from Hudgins et al to 36 words. Standardized word lists now include 36 spondees grouped into two lists of 18 words. They are used today in most
clinics. These speech materials are called spondaic words or
spondees. Spondaic or spondee words are the speech stimuli used to
obtain the speech reception threshold (SRT). A spondee is defined as
a two-syllable word spoken with equal stress on both syllables, and it is an excellent choice for determining threshold for speech because it is easy to understand at faint hearing levels. I want to clarify something: if you look in your text on page 130, it says spondees don't occur
naturally in spoken English. This can be confusing because the words
are definitely part of spoken English. But they're not usually spoken with equal stress on each syllable; speaking them that way is what turns them into spondees. That's why we say spondees don't occur naturally in spoken English.
Here are some examples of spondee words. There are others, but
here is a list to give you an idea of some spondaic words: Airplane,
Toothbrush, Hotdog, Sidewalk, Baseball, Pancake, Cowboy,
Armchair and Eardrum. You can see they're two syllable words.
They're spoken in the test environment with equal stress on each
syllable.
As I mentioned, the criterion for the SRT is the lowest hearing level at which 50% of the words are identified correctly.
There are several different methods of obtaining an
accurate SRT. The textbook reviews the history of the
different methods as well as the method recommended
by ASHA. In a study by Martin et al (1998) it was
reported that approximately 90% of audiologists are
using 5 dB increments and 60% do not use the ASHA
recommended criterion of missing five or six words.
Instead they follow an abbreviated procedure shown by
Martin and Dowdy (1986) to yield results similar to the
ASHA procedures. This method is very similar to the
ASHA (1978) method for determining pure tone
thresholds. They recommend presenting one spondee starting at 30 dB HL and descending in 10 dB decrements until an incorrect response is obtained. If the response is incorrect or absent at 30 dB HL, the stimulus should be increased to 50 dB HL, then raised in 10 dB increments until a correct response is obtained. The bracketing technique requires presenting one word at each level, increasing the stimulus in 5 dB steps after an incorrect response and decreasing it in 10 dB steps after correct responses, until 3 correct responses have been obtained at a given level. Martin suggests audiologists use his procedure because it requires no knowledge of other test results; it can be given and stand by itself. You don't need a pure tone audiogram or other test results to compare to your SRT. Also, he said it involves the use of 5 dB steps, which is the same as pure tone audiometry, and therefore the procedures for finding the SRT and pure tone thresholds can be the same, making it more familiar and easier to use. I would say that most audiologists measure speech thresholds using monitored live voice and 5 dB increments.
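The abbreviated search just described can be sketched in Python. This is only an illustration of the logic: `patient_responds` is a hypothetical callback standing in for the real patient, and here it is deterministic, whereas a real patient's responses near threshold are probabilistic.

```python
def find_srt(patient_responds, start=30, ceiling=110):
    """Abbreviated SRT search, one spondee per presentation.

    patient_responds(level_hl) -> True if the spondee presented at
    that dB HL level was repeated correctly.
    """
    level = start
    if patient_responds(level):
        # Initial descent: drop 10 dB per correct response until a miss.
        while patient_responds(level - 10):
            level -= 10
        level -= 10  # the level of the first miss
    else:
        # No response at 30 dB HL: jump to 50 and climb in 10 dB steps.
        level = 50
        while not patient_responds(level) and level < ceiling:
            level += 10
    # Bracketing: up 5 dB after a miss, down 10 dB after a correct
    # response, until 3 correct responses occur at one level.
    correct_at = {}
    while True:
        if patient_responds(level):
            correct_at[level] = correct_at.get(level, 0) + 1
            if correct_at[level] == 3:
                return level  # SRT: lowest level with 3 correct responses
            level -= 10
        else:
            level += 5
            if level > ceiling:
                return None  # no measurable SRT within equipment limits

# Deterministic stand-in patient whose true threshold is 25 dB HL:
print(find_srt(lambda level: level >= 25))  # 25
```

Notice that the 5 dB steps mirror pure tone audiometry, which is part of why Martin argued the procedure is familiar and easy to use.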
Another factor in speech testing is the use of a carrier phrase. Although some clinicians prefer the use of a carrier
phrase, many do not. No real advantage of using a carrier phrase with spondaic words has been found. But it is
used most of the time with speech recognition or word recognition testing, so I want to review how it is used. The
carrier phrase in most audiometric settings is, "say the word." "Say the word" precedes much of the speech stimuli
that are delivered to a patient. So if we were delivering a list of words to a patient, we would say, "Say the word,
'Baseball.'" "Say the word, 'airplane'." "Say the word, 'toothbrush'." "Say the word, 'sidewalk'." One of the reasons to use the carrier phrase is that when you are using monitored live voice, you can monitor yourself through the VU meter so the output is correct. Remember that the assumption is that when the speaker presents the words, they're all presented with the correct inflection and the correct emphasis on each syllable. The carrier phrase is used while visually monitoring the VU meter; in that very short time, "Say the word," a person can actually adjust their voice so the VU meter is peaking very near zero. When you deliver the word, if the word is baseball, you say it
naturally. The word, the actual stimulus word may be slightly more intense or less intense than the carrier phrase.
The carrier phrase is designed to monitor and adjust your voice to a zero VU setting. Then you just allow yourself
naturally to proceed and present the word. So it's just, "Say the word, 'baseball'." The carrier phrase is used in
delivering single words to the patient. There are other types of speech materials where carrier phrases will not be
used. When using prerecorded material, a calibration tone is used. You set the VU meter at zero when the
calibration tone is on. The need for a carrier phrase is not as important.
This is an example of the instructions you would give to the patient prior to SRT testing:
“You are going to hear a person ask you to say a series of words, like ‘baseball’ and ‘schoolboy’. I’d like you to
repeat each word you are asked to say. I’m going to turn the words quieter and quieter, until you can’t hear them
anymore. I want to find out how quietly you can hear speech. Don’t be afraid to guess. Do you understand?”
So, once the SRT has been found, how do we use this information? The SRT is used as a basis for setting a level for
word recognition testing. We will discuss WR testing in the next
lesson. Basically word recognition scores are scores indicating a
person's ability to discriminate one word from another. Word
recognition testing is not a threshold test. These scores are
found at levels above a person's speech recognition threshold.
We have to know a person's speech recognition threshold or SRT before we can set a level for word recognition testing. Everyone's word recognition testing level is different based on their SRT score. Also, the SRT can be used as a cross-check to
make sure that your pure tone results are accurate. Speech
reception threshold can be predicted from the pure tone
average. Let me talk to you about this for a minute. If you look at
an audiogram, you'll note along the top the frequencies that
were tested. The pure tone frequencies, 125, 250, 500 Hz, etc., across the top of the audiogram are the individual frequencies we used to test the pure tones. 500, 1000, and 2000 Hz in particular are known as the speech frequencies, because most speech energy falls within the borders of 500-2000 Hz; a few sounds stray outside of that range. So let's call 500-2000 Hz the speech frequencies. Remember the pure tone average was calculated from the thresholds at those frequencies. If we have an average calculated from those frequencies, doesn't it make sense that if we obtain a speech threshold, those two should be related in some way? They in fact have a relationship: the SRT and pure tone average should be within plus or minus
5 dB of each other. In other words, when you finish testing a person's pure tones and finish with the PTA testing, what you
should be able to do is look at the pure tone average from the audiogram and it should be within five dB, plus or minus, of
speech reception threshold. If not, something must be wrong. Also SRT is used to categorize hearing losses. Remember
when we tested pure tone audiometry, we categorized those into levels of mild, moderate, severe, profound. We can also
use SRTs to make those categorizations. An SRT of 45 dB would be a moderate hearing loss. So these can be put side by
side with a pure tone average. These can be used as a judgment to determine the need for amplification. You look at the
SRT and can say you're probably not a candidate for a hearing aid or you might want to consider using one from the SRT.
Lastly, hearing evaluations are used to determine the correct gain for a hearing instrument. So you use the SRT to determine
how much gain or power you want the hearing aid to provide to the client.
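The cross-check and categorization just described can be sketched in Python. The function names are illustrative, and the severity cutoffs below are one common scheme; published scales vary slightly, so treat them as an assumption for this sketch:

```python
def pure_tone_average(t500, t1000, t2000):
    """Three-frequency pure tone average over the speech frequencies."""
    return (t500 + t1000 + t2000) / 3

def srt_agrees_with_pta(srt, pta, tolerance=5):
    """Cross-check: the SRT and PTA should fall within +/- 5 dB."""
    return abs(srt - pta) <= tolerance

def categorize_loss(srt):
    """Rough severity category from the SRT (cutoffs are illustrative)."""
    if srt <= 25:
        return "normal"
    if srt <= 40:
        return "mild"
    if srt <= 55:
        return "moderate"
    if srt <= 70:
        return "moderately severe"
    if srt <= 90:
        return "severe"
    return "profound"

# An SRT of 45 dB against thresholds of 40, 45, and 50 dB HL:
print(srt_agrees_with_pta(45, pure_tone_average(40, 45, 50)))  # True
print(categorize_loss(45))  # moderate
```

If `srt_agrees_with_pta` came back False on a real evaluation, that would be the signal that something is wrong with one of the two measures.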
On another note, I also wanted to point out that masking may be required in speech testing as well when there is a possibility
that the speech stimulus has crossed over and is actually being heard by the non-test ear. We are not going to discuss the
methods for speech masking in this course. But I wanted you to be aware that the need for masking during speech testing
does exist. We will continue to study the additional speech tests in the next lesson.