Verifying the Independent Variable in Single-Subject Research
Judith C. Nelsen
Judith C. Nelsen, DSW, is Professor, Jane Addams College of Social Work, University of Illinois at Chicago.
Group-design research on the effectiveness of treatment has paid increasingly
careful attention to defining and measuring treatment given as the independent
variable. Early studies tended to define
casework or other treatment as whatever
was done by someone with the appropriate education and job title.1 Later research involved practitioners' identifying
what treatment models they were using
or, at times, specifying what interventions they had carried out in particular
interviews. Now group-design research
often includes actual measurement of
the independent variable through the
videotaping or audiotaping of interviews,
with independent judges verifying which
interventions were used and how often.2
Those who write about practice research
insist that such rigor is necessary if the
helping professions are ever truly to
learn which treatment methods and
processes help which clients to change.3
Single-subject researchers traditionally specify the independent variable or
treatment intervention used in a project
but do not further verify its use.4 The author suggested elsewhere that single-subject researchers, like those doing research based on group designs, should submit the use of their experimental interventions to verification.5 During the
research period, they should audiotape
or videotape interviews or have the
interviews observed. Then they can count
how often the experimental interventions
were used and have the counts verified
by an independent rater, rather than
simply specify what the interventions
were. This safeguard is especially necessary when the practice studied is not
behavioral or task centered because
there is no widespread agreement about
the meaning of many terms used to refer
to psychosocial or eclectic interventions, terms such as "support," "interpretation," or "confrontation." Further, the
exact content offered the client in any
intervention is important and is not
conveyed by naming the intervention.
Suggesting to a man that he is really
angry or that he is really yearning for
greater closeness can both be considered
interpretations, but obviously the ideas
offered, and, most likely, their impact,
are different. The frequency with which
an intervention is used often influences
its effectiveness, as do contextual factors, including voice tone, timing in the
interview, and so on. Practitioners are
unlikely to remember accurately all these
aspects of the interventions they used
even in an AB design project. The problem is compounded in reversal or multiple-baseline designs, in which it is important to determine whether the same interventions were used in the same way several times.
Although researchers using group designs routinely verify their experimental interventions, single-subject researchers typically specify the experimental intervention without verifying it. This article explains how to operationally define and measure interventions in single-subject research projects and suggests procedures to establish the reliability and validity of the measurements that are taken.
EXISTING CODING SCHEMES
To complete an AB design single-subject
research project with a client or client
system, a practitioner follows a series
of steps. He or she chooses an aspect
of the client's functioning to be targeted
for change, arranges for its baseline rate
of occurrence to be measured, plans interventions designed to affect this rate,
carries out these interventions, and analyzes what subsequently happens to the
change target. For reversal and multiple-baseline designs, the practitioner uses the same interventions several times with the same or different targets of change.7
Much has been written about procedures
for the reliable and valid measurement
of the rate at which targeted events
occur. If a practitioner audiotapes or
videotapes treatment sessions or has
them observed during baseline and intervention periods, an independent rater
can verify that the experimental intervention or intervention package was
used when it was supposed to be and
was not used at other times. However,
for such verification to be carried out,
the practitioner must first operationally
define the experimental interventions.
One potential set of operational definitions may be found in the various
schemes for coding interventions developed and tested in group-design research.
For nonbehavioral social work practice, two popular schemes are the Hollis-Mullen typology for psychosocial treatment and the Reid scheme for task-centered work.8 Both these coding schemes
have been tested with various popula-
tions and modified or developed further
over time.9 In addition, some other social workers and many psychologists, psychiatrists, and psychoanalysts have devised ways to code their use of interventions in group-design research.10
Such coding schemes can be of some
utility to single-subject researchers seeking to verify their use of interventions,
but they seldom are fully satisfying. For
one thing, no scheme can capture what
single-subject researchers consider the essential features of the interventions that
are used. For example, practitioners may
deem it crucial that they gave a client
approval rather than understanding or acceptance, but the Hollis-Mullen scheme
groups all these interventions together
as "sustaining" procedures. II A related
problem is that most coding schemes
utilize broad content categories. It can
be important whether one says to a client, "I think your son is really trying to
show how grown up he is" or, "I think
your son is staying away because he is
tired of your husband's criticism." Yet
both these comments qualify as "person-situation-reflection: as perception or understanding of others"12 in the Hollis-Mullen scheme or as Reid's "explanations concerning significant others."13
A final difficulty in using any of
these coding schemes is that the coding
manuals, that is, the detailed coding instructions by which interventions can be
categorized according to a scheme, are
often not readily available to practitioners doing single-subject research. One
cannot reliably code interventions based
on the overall descriptions of the coding schemes published in the professional
literature. Some detailed coding manuals or instructions can be obtained by
corresponding with their original developers or by searching out unpublished
doctoral dissertations; others are simply
not available.
An alternative is for single-subject researchers to devise operational definitions of the interventions they will use
or have used to affect targeted events,
to develop coding instructions by which
their use of the interventions can be
verified, to establish units of analysis,
and to set up procedures to verify the
reliability and validity of any measurements taken. If an independent rater is
to observe interviews firsthand, all these
activities must be carried out before the
research is begun. The following discus-
sion assumes that treatment interviews
have been audiotaped or videotaped, usually a more practical approach.
OPERATIONAL DEFINITIONS
Four dimensions are useful to consider
in developing operational definitions of
interventions: form, content, dosage,
and context.
Form
The form of an intervention is the
type of communication used. It may be
identifiable grammatically or taken from
the names for interventions cited in the
professional literature. Two examples are
"asking questions" and "exploration."
However, some of these names, such as
"confrontation," imply something about
the content of an intervention as well
as describe its form; in other cases, the
names do not have exact definitions.
What follows is a list of some common
forms of interventions used in interviews
with individual clients. Developed for
use in the author's courses on single-subject research and practice, the items
on the list are relatively clear and simple and are mainly content free: eliciting
information, regardless of whether the
information concerns facts, feelings, perceptions, opinions, or anything else; giving information, again with "information" broadly defined; giving positive,
or negative feedback; self-conscious modeling by the practitioner; role playing;
and giving a directive or task. Nonverbal forms of intervention such as touching, moving play materials, and smiling
cannot be captured on audiotapes but
may be identified if videotapes are used.
Content
A second crucial dimension of an intervention is the content that is conveyed. That is, what exactly can the
client learn if the intervention is used?
An intervention such as eliciting how a
man is feeling tells him that he may be
having feelings, that feelings are to be
talked about in treatment, and that the
practitioner is interested in his feelings.
A role play may convey content about
how a certain activity can be carried
out if the practitioner models or directly
gives this information. If the client is
directed to try the activity and is given
positive feedback on the results, the role
play may also convey that the client is
This article proceeds from the assumption that the independent variable in single-subject research, the intervention or intervention package used to try to affect the change target, should be verified through independent observation.6 It explains how interventions may
be operationally defined and their use
measured, suggests ways to establish the
reliability and validity of measurements
taken, and addresses special issues in
verifying the use of interventions in reversal and multiple-baseline designs.
The discussion emphasizes single-subject
research on eclectic or psychosocial treatment because such studies create the
greatest challenge for those seeking to
verify their use of interventions.
Dosage
Sometimes an important aspect of
experimental interventions is their dosage, or how extensively they are used
during one or a series of interviews. In
one case reported in the professional
literature, the practitioner confronted a
problem drinker about her alcohol use
occasionally during the baseline period.
The experimental intervention consisted
of confronting the client more times per
interview." In an AB design study, a
student working with a child in play
therapy decided not to change the interventions she was using but to use them
less, that is, to say less to the child and
to let him play more. In both these
projects, the optimal dosage of interventions was the most important dimension
tested. Whenever the frequency with
which an intervention is used may have
an impact on its effectiveness, dosage
should be specified in the operational
definition.
Context
Some aspects of the context in which
experimental interventions have been
carried out, such as the goals already
achieved in a case, the interventions
already used, and the state of the practitioner-client relationship at the time,
are highly important to understand but
too complicated to measure in most
single-subject projects. These factors
should certainly be reported as clinical
observations even if they cannot be
verified. Other contextual factors, such
as the voice tone with which interventions are made, are difficult to measure
exactly, but an acceptable range can be
given. For example, it can be specified
that the voice tone was accepting or
neutral, not hostile. Important contextual factors that often can be included
in the operational definition are the
times in the interview at which the
interventions were used, the immediate
subject matter the client was talking
about beforehand, and the order in
which several different interventions constituting a package were presented.
DEVELOPING CODING INSTRUCTIONS
By tentatively identifying the form, content, dosage, and contextual factors
crucial to the interventions used, the
practitioner-researcher takes the first
step in operationally defining them. For
example, a practitioner whose target of
change was that a woman keep appointments more regularly might begin by
operationally defining the following intervention package:
Form
1. Eliciting information.
2. Giving information.
Content
1. There are reasons for her absences.
2. Figuring out what these are may
help her come.
3. I want to help her figure it out.
4. She may be mad because I am
leaving in June.
Dosage
The content items were given between
2 and 15 times (total) in each of two
interviews.
Context
Interventions were used in each interview after a broken appointment, as
soon as we sat down and finished
greeting each other. The client was
first given Contents 1-3 through information giving; next, her reasons
were elicited (Content 1); then Content 4 was given through information
giving and eliciting. Voice tone was
accepting throughout.
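For practitioner-researchers who keep their coding materials electronically, an operational definition of this kind can also be recorded as a simple structured file that the second coder works from. The sketch below is purely illustrative; the field names and values are hypothetical and would be adapted to each project's own definitions.

```python
# Illustrative sketch only: one way to record the operational definition of an
# intervention package so that a second coder can apply it consistently.
operational_definition = {
    "form": ["eliciting information", "giving information"],
    "content": [
        "There are reasons for her absences.",
        "Figuring out what these are may help her come.",
        "I want to help her figure it out.",
        "She may be mad because I am leaving in June.",
    ],
    # Dosage: total content items given per interview, across two interviews.
    "dosage": {"min_per_interview": 2, "max_per_interview": 15, "interviews": 2},
    "context": {
        "timing": "immediately after greetings, following a broken appointment",
        "order": ["give contents 1-3", "elicit content 1", "give and elicit content 4"],
        "voice_tone": "accepting or neutral",
    },
}

if __name__ == "__main__":
    for dimension, spec in operational_definition.items():
        print(dimension, ":", spec)
```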
The next step is for a researcher to
listen to tapes of interviews to hone the
operational definitions devised and develop accurate coding instructions. This
is essentially a three-part process: delineating the exact forms and contents of
the experimental intervention or inter-
vention package; identifying crucial contextual factors; and deciding on units
of analysis, which is necessary to establish dosages.
To hone an operational definition of
the form and content of interventions,
the researcher first listens to tapes of
the baseline and intervention periods to
determine when interventions meeting
the tentative specifications were used.
From this process, clear instances of
usage and ambiguities in the original
operational definition usually become
apparent. By using them as a basis for
thinking more about essential features
of the experimental interventions, the
researcher should be able to develop
and write down coding instructions that
amplify the original definition, give examples of it, and clarify what is to be
done in the case of various ambiguities.
For example, in intervention-phase interviews with the client who was breaking
appointments, the practitioner might
have said that the client could be upset
or angry instead of using the word
"mad" (Content 4 in the operational
definition given earlier). This practitioner's coding instructions might state
that any words which signify that the
client is angry or upset would qualify
as appropriate content under the operational definition. Whenever possible, a
second person who will not later be the
independent coder should be used to
help the practitioner see ambiguities and
make the coding instructions (especially
in regard to the form and content of
interventions) as clear as possible. Using
all the relevant tapes to develop coding
instructions also makes a practitioner-researcher's task simpler. What was done is what is to be coded; the researcher-practitioner does not have to anticipate
how to define and count something that
might have occurred but did not.
If part of the context to be coded is
what the client said immediately before
an intervention package was used, then
the definition of what constitutes this
content must also be clarified for the
coding directions. Similarly if the order
in which the interventions in a package
were used is considered important, the
form and content of the interventions
must be clearly distinguished from each
other and the required order specified.
An aspect of the context that may be
important to define in a particular project is the time in an interview when an
capable of carrying out the activity. The
same or highly similar content may be
conveyed in interventions that take different forms. A woman may be told
that it is important for her to talk
about her family in treatment (information giving), or she may be asked about
her family (eliciting). In the second instance, she probably receives the same
or a similar message as in the first. The
content conveyed by an intervention is
probably more important than the form
of intervention used, except that some
clients may receive certain information
more readily when it is given in one
form than in another. For example, a
mother who grows confused when told
how to be more giving with her child
may receive the ideas more readily if the
practitioner models the appropriate behavior in a role play.
intervention package was used. The tone
of voice may at least be noted.
regarding the boundaries of the unit. To
make sure that a second coder will
agree on what constitutes a complete
thought, sentence, or utterance, the practitioner-researcher must carefully formulate an operational definition of the
unit used and develop coding instructions about how to delimit it. In using
time intervals, however, the practitioner-researcher can simply write down the
number that appears on the tape-recorder counter when each interval begins
and what is being said at that time, and
this information can be passed on to
the second coder.
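In a present-day project, those counter readings might simply be kept in a small shared log, as in the purely illustrative sketch below (all entries are hypothetical), so that the second coder listens to exactly the same segments.

```python
# Hypothetical log of sampled intervals for one interview: the tape-counter
# reading at which each interval begins, plus a cue phrase for orientation.
interval_log = [
    {"interval": 1, "counter": 0, "cue": "greetings end"},
    {"interval": 5, "counter": 212, "cue": "client begins describing the week"},
    {"interval": 7, "counter": 305, "cue": "practitioner raises the missed session"},
]

for entry in interval_log:
    print(entry["interval"], entry["counter"], entry["cue"])
```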
RELIABILITY AND VALIDITY
The key difference between specifying
and verifying a single-subject researcher's use of interventions is not so much
that the experimental interventions are
counted as that a second coder takes
some of the same counts independently.
That is, there is an effort to determine
the reliability of the measurement process. However, one must first be concerned with the validity of measurements.
The time to establish the validity of
measurements, or whether one is really
measuring what one intends to measure,
is before the coding begins. If any part
of the intervention package consists of
an intervention identified in the professional literature, such as an interpretation or a role play, a practitioner can
examine his or her operational definition of this intervention in light of the
way the literature defines or describes
it. Another means of determining validity is to consult professional colleagues
who are knowledgeable about the use
of the intervention in question, to see
whether the operational definition and
unit of analysis to be used in the project capture the essence of what the intervention is supposed to be. More ambitious tests of validity are probably unnecessary unless the intervention is especially complex. An example of an intervention that might be complex is information giving in which the content is
intended to represent a paradox. In
such a case, it might be desirable to ask
an expert on the intervention to verify
that what the practitioner said could be
expected to be paradoxical for the client.
Another validity issue that is difficult
to test empirically but that should be
kept in mind is the practitioner-researcher's reactivity to the measurement pack-
SELECTING UNITS OF ANALYSIS
Something a practitioner says may clearly qualify in form, content, and context as part of the intervention package as it has been operationally defined, but the frequency of use or dosage still may be ambiguous. To determine the dosage, the practitioner must decide what parameters constitute each use of an intervention or part of an intervention package.
A practitioner's complete thought, sentence, or utterance (consisting of everything said before a silence occurs or the client speaks again) are some of the units used in prior group-design research.15 If the practitioner had said to the client who broke appointments, "There are reasons for your not coming, and I want to talk about them today; what do you think they might be?" this speech would constitute a clear use of the interventions defined earlier in terms of form and content. But it would consist of three complete thoughts, two sentences, or one utterance. A frequency count of the use of the interventions would depend on how the operational definition specified the unit of analysis.
Decisions about units in any research study should be made on both conceptual and pragmatic grounds.16 It can be tedious to count every complete thought, sentence, or utterance that qualifies as the use of an experimental intervention. Practitioners may thus decide to use such small units of analysis only if experimental interventions were presented infrequently or if, conceptually, a fairly accurate count of their dosage in baseline and intervention phases is important. An accurate count was necessary for the project in which the student-practitioner decided to decrease her total number of interventions to help the child in play therapy to change his style of play. She used her utterances as the unit of attention, counting that she made an average of 90 per interview in the baseline phase and an average of 31 per interview in the intervention phase. Another option when the number of experimental interventions is large and yet variations in frequency between the baseline and intervention phases are important is to use sample periods. The researcher can divide all the interviews in the baseline and intervention phases into intervals of perhaps 2, 5, or 10 minutes in duration and count the interventions (complete thoughts, sentences, or utterances) used during a random sample of the intervals in each interview. To maintain comparability, it is best to use intervals from the same time periods, such as the first, fifth, and seventh intervals, for all the interviews.
In many studies, intervals of 5, 10, or 15 minutes in which the simple presence or absence of an experimental intervention is noted can be practical and conceptually reasonable units of analysis. It is not unusual for parts of the experimental intervention package to be used a few times during the baseline phase of a project. In the study described earlier in which the client was breaking appointments, the practitioner probably asked the reason at least once or twice during the baseline period, a question that would qualify in content and form as one of the experimental interventions. In the formal intervention phase, the practitioner might have spent half or more of several interviews using the experimental interventions. Rather than have to count all these occurrences, the practitioner might divide all the interviews into intervals of 5 or 10 minutes. A count of 10-minute intervals in which the experimental interventions were present or absent might then show that the interventions were used in one interval in each of two baseline tapes, not at all in another baseline interview, and in four or five intervals per interview during the intervention period.
If an experimental intervention is not used at all during the baseline period of a project and if its frequency of use per interview during the intervention phase probably is not important, the total interview can be defined as the unit of analysis for coding. For example, if a role play was used as an experimental intervention in each of three interviews and was not used during the baseline period, the practitioner-researcher might simply want to ascertain the presence or absence of the role play in each interview. This would require simply checking in each interview whether a role play was used at all.
Whenever a unit other than the total interview is used, the practitioner-researcher must be concerned with the reliability of the coders' judgments
REVERSAL AND
MULTIPLE-BASELINE DESIGNS
In reversal designs, the same intervention or intervention package used originally with a client is used again when
the client's targeted functioning returns
to baseline rates. In multiple-baseline
designs, the intervention or intervention
package is used again with different
behaviors of the same client, the same
behavior in different settings, or the
same behaviors of different clients. In
both designs, the researcher measures
from the outset the aspects of the client's functioning that are targeted for
change to monitor changes that occur
after the experimental interventions are
introduced.
Although the basic process of devising operational definitions, developing
coding instructions, and so on in reversal and multiple-baseline designs is similar to that already described, the practicality of measuring practitioners' use
of interventions for such projects can
be a concern. AB design projects typically require listening to and coding
tapes of six to eight interviews. If a
reversal design is used, the practitioner-researcher might have to code up to 12
or 15 tapes. Even more might be required with a multiple-baseline design.
In such cases, a second coder might
have to go over a reliability sample of
at least 4 or 5 tapes. If the project is
funded, paid assistants may help the
practitioner-researcher work out operational definitions of interventions and
do all or most of the coding. Otherwise,
a time-saving option that is available if
the experimental interventions are used
often in intervention-phase interviews
and infrequently or not at all in baseline interviews is for the practitioner-researcher to take random time samples
from all the interviews, such as two
10-minute or three 5-minute intervals in
each interview, and code the use of the
experimental interventions only in these
intervals.
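A brief sketch of such a sampling plan follows; it assumes each tape has already been divided into fixed-length intervals and draws the same number of randomly chosen intervals from every interview. The function name and figures are illustrative, not taken from the projects described here.

```python
import random

def sample_intervals(interview_length_min, interval_min, n_samples, seed=None):
    """Randomly choose which fixed-length intervals of one interview to code.

    Returns 0-based interval indices; e.g., a 50-minute interview divided into
    10-minute intervals yields indices 0 through 4.
    """
    n_intervals = interview_length_min // interval_min
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_intervals), k=min(n_samples, n_intervals)))

# Example: two 10-minute intervals per 50-minute interview, for eight interviews.
for interview_id in range(1, 9):
    chosen = sample_intervals(50, 10, 2, seed=interview_id)
    print(f"interview {interview_id}: code intervals {chosen}")
```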
The major issue in verifying experimental interventions used in projects
with reversal or multiple-baseline designs
is to determine how similar each of several uses of an intervention must be at
different times to qualify as a repetition.
Clearly, some flexibility is necessary. For
one thing, a client may remember the
first use of an experimental intervention
well enough so that on a second or third
occurrence, only part of the package
needs to be repeated to cause the change
in functioning desired. For example, to
help a client learn to communicate better
with her child, a practitioner might first
have intervened by giving information
about why such communication would
be desirable and how it could be carried out. That first time the practitioner
might also have modeled the new skills,
suggested that the woman try them, and
given positive feedback in response to
her efforts. If, as the second phase of
a multiple-baseline project, the woman
was to be helped to communicate better at work, simply suggesting the desir-
ability of this change and some ways
it could be carried out might be enough
for the woman to begin communicating
better in this setting as well. In a
multiple-baseline project with a number
of clients, some clients may need fewer
components or a lower dosage of the
same intervention package than other
clients to obtain the targeted changes
in functioning.
In addition, if an intervention package is used to affect different behaviors of one client, the same behavior in
different settings, or the same behaviors
of different clients, the content given
in each use of the intervention may parallel the content in other uses, but it
cannot be the same. For example, if paradox is used to affect different behaviors of the same client, the content of the paradox, that is, the actual words spoken, will be different in each instance. In
other projects, the behavior targeted for
change may be the same, but if the settings or clients are different, practitioners usually tailor the content to the
unique circumstances or personalities
each time.
Often the dosage or some aspect of
the intervention's context cannot be
replicated exactly. The practitioner may
always use an intervention near the start
of an interview or try to repeat the
original voice tone, but what the client
says just before the intervention is used,
for example, is bound to vary somewhat.
The answer to the question about
how similar is similar enough is probably to establish parameters within which
each use of an experimental intervention or intervention package qualifies as
a repetition. The operational definition
could state, for example, that certain
forms and contents are necessary for
the intervention package to count as
having been used and that other forms
and contents do not count. Similarly,
the operational definition could establish dosage limits and define contextual
factors that are essential to maintain.
For example, an operational definition
then might read:
A role play will be considered to have
occurred if the practitioner models the
desired behavior for the client, suggests
that the client practice acting in the new
way in the interview, and gives feedback
when the client does so. Additional practitioner activities that, if they occur along
with the activities just described, will be
considered part of the intervention pack-
age. Having interviews taped with the
knowledge that some will be heard by
a second coder who may be another
professional can make practitioners self-conscious about their use of the intervention.17 Although any effects on what
is said should be dealt with in developing operational definitions of the form
and content of interventions, tension conveyed to the client by the practitioner's
voice tone, posture, or the like might
create an unknown effect and detract
from the validity of the measurement of
the intervention package.
Establishing the reliability of the measurement procedures is done after all the
project tapes are coded. The practitioner-researcher should select a random sample of one-third of the tapes, including
at least one from the baseline phase and
one from the intervention phase. These
tapes should be set aside, and the researcher should use other project tapes
to train a second coder to use the coding
instructions. Ideally, after being trained,
the second coder independently scores at
least one of the tapes not included in the
sample to make sure he or she has understood the operational definitions of the
interventions and units in the same way
as did the original coder.
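One way to carry out the selection step, assuming the tapes are simply labeled by phase, is sketched below; the helper ensures that the one-third sample contains at least one baseline and one intervention tape and leaves the remaining tapes free for training the second coder. The code and tape labels are illustrative only.

```python
import math
import random

def select_reliability_sample(tapes, seed=0):
    """Pick roughly one-third of the tapes, with at least one from each phase.

    `tapes` is a list of (tape_id, phase) pairs, phase being "baseline" or
    "intervention". Returns (reliability_sample, training_tapes).
    """
    rng = random.Random(seed)
    n_sample = max(2, math.ceil(len(tapes) / 3))
    baseline = [t for t in tapes if t[1] == "baseline"]
    intervention = [t for t in tapes if t[1] == "intervention"]
    sample = [rng.choice(baseline), rng.choice(intervention)]
    remaining = [t for t in tapes if t not in sample]
    rng.shuffle(remaining)
    sample += remaining[: n_sample - 2]
    training = [t for t in tapes if t not in sample]
    return sample, training

tapes = [("A1", "baseline"), ("A2", "baseline"), ("A3", "intervention"),
         ("A4", "intervention"), ("A5", "intervention"), ("A6", "intervention")]
reliability, training = select_reliability_sample(tapes)
print("reliability sample:", reliability)
print("training tapes:", training)
```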
The second coder then scores the
tapes selected for the reliability sample.
Dates and any other evidence of sequencing should be removed from these
tapes. This person's counts and those
of the original coder are compared to
obtain a percentage of agreement. For
interval measures, the researcher can
compute the percentage of agreement on
occurrence or nonoccurrence, whichever
is the more conservative.18 Generally,
agreement rates of 80 percent or more
are considered adequate.
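As a rough illustration, and not necessarily the exact computation given in the sources cited, agreement for interval data can be calculated separately over the intervals that either coder scored as occurrences and over those that either scored as nonoccurrences, with the lower figure reported. The variable names below are hypothetical.

```python
def percent_agreement(coder_a, coder_b, on_value):
    """Agreement restricted to intervals that either coder scored as `on_value`."""
    relevant = [i for i, (a, b) in enumerate(zip(coder_a, coder_b))
                if a == on_value or b == on_value]
    if not relevant:
        return None
    agreed = sum(1 for i in relevant if coder_a[i] == coder_b[i])
    return 100.0 * agreed / len(relevant)

# 1 = experimental intervention present in the interval, 0 = absent.
coder_a = [0, 1, 1, 0, 1, 0, 0, 1]
coder_b = [0, 1, 0, 0, 1, 0, 1, 1]

occurrence = percent_agreement(coder_a, coder_b, 1)
nonoccurrence = percent_agreement(coder_a, coder_b, 0)
conservative = min(v for v in (occurrence, nonoccurrence) if v is not None)
print(f"occurrence agreement: {occurrence:.0f}%")
print(f"nonoccurrence agreement: {nonoccurrence:.0f}%")
print(f"report the more conservative figure: {conservative:.0f}%")
```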
age are eliciting information about the
client's reaction to the role-play situation
and giving information about what the
client could do. At least some of these
activities must be present in at least two
10-minute intervals of a 50-minute interview.
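Restated as a checking rule, the definition above might be applied to a coder's interval codes roughly as in the sketch below; the activity labels, data layout, and this particular reading of "at least some of these activities" are assumptions that a real project would adjust to its own coding instructions.

```python
# Hypothetical interval codes: for each 10-minute interval of a 50-minute
# interview, the practitioner activities the coder judged to be present.
REQUIRED = {"models behavior", "suggests client practice", "gives feedback"}
OPTIONAL = {"elicits reaction", "gives information on what client could do"}

def role_play_occurred(intervals):
    """Apply the sketched operational definition.

    All required activities must appear somewhere in the interview, and package
    activities (required or optional) must be present in at least two intervals.
    """
    seen = set().union(*intervals) if intervals else set()
    intervals_with_package = sum(1 for acts in intervals if acts & (REQUIRED | OPTIONAL))
    return REQUIRED <= seen and intervals_with_package >= 2

interview = [
    {"models behavior", "suggests client practice"},
    {"gives feedback", "elicits reaction"},
    set(), set(), set(),
]
print(role_play_occurred(interview))  # True under this reading of the definition
```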
CONCLUSION
NOTES AND REFERENCES
1. K. M. Wood, "Casework Effectiveness: A
New Look at the Research Evidence," Social
Work, 23 (November 1978), pp. 437-458; and J.
S. Wodarski, The Role of Research in Clinical
Practice: A Practical Approach for the Human
Services (Baltimore: Md.: University Park Press,
1981), p. 9.
2. See the studies cited in D. E. Orlinsky and
K. I. Howard, "The Relation of Process to Out-
E. J. Mullen, "Casework Treatment Procedures
as a Function of Client Diagnostic Variables," unpublished doctoral thesis, Columbia University
School of Social Work, 1968. For a description
of Reid's approach, see W. J. Reid, The Task-Centered System (New York: Columbia University Press, 1978), pp. 322-329.
9. See, for example, Hollis and Woods, Casework, p. 356; Reid, The Task-Centered System,
pp. 225-271; and Tolson, "Alleviating Marital
Communication Problems."
10. For coding systems devised by psychologists, psychiatrists, and psychoanalysts, see studies
reported in Orlinsky and Howard, "The Relation
of Process to Outcome in Psychotherapy." For
systems devised by social workers, see, for example,
G. Cunningham, "Workers' Support of Clients'
Problem-Solving," Social Work Research and Abstracts, 14 (Spring 1978), pp. 3-9; and R. A.
Brown, "Feedback in Family Interviewing," Social
Work, 18 (September 1973), pp. 52-59.
11. Hollis and Woods, Casework.
12. Ibid., p. 100.
13. Reid, The Task-Centered System, p. 327.
14. J. C. Nelsen and E. S. Hammerman, "Use
of Confrontation with a Problem Drinker: A
Single-Subject Study," in J. Conte and S. Briar,
eds., Casebook for Empirically Based Practice
(New York: Columbia University Press, in press).
15. D. Kiesler, The Process of Psychotherapy:
Empirical Foundations and Systems of Analysis
(Chicago: Aldine Publishing Co., 1973), pp. 35-40.
16. Ibid., pp. 37-38.
17. Kiesler, The Process of Psychotherapy, pp.
29-30; and R. N. Kent and S. L. Foster, "Direct
Observational Procedures: Methodological Issues
in Naturalistic Settings," in A. R. Ciminero, K.
S. Calhoun, and H. E. Adams, eds., Handbook
of Behavioral Assessment (New York: John Wiley
& Sons, 1977), p. 286.
18. For computations to establish the reliability
of observations of targeted events that can also
be used to determine the reliability of observations of interventions, see Bloom and Fischer,
Evaluating Practice, pp. 120-123 and 127-129.
By verifying their use of interventions
in studies involving AB and more complex single-subject designs, practitioner-researchers strengthen the credibility of
their published reports of such projects.
They also help readers of the reports
to understand much more fully the nuances of the interventions used. This
enhances what readers can learn from
a project and makes it possible to carry
out accurate replications of single-subject studies. Careful verification offers
practitioner-researchers who are conducting projects an additional benefit. By
devising operational definitions and coding their tapes, they become much more
aware than they would otherwise have
been of what they actually did in interviews that turned out to help or not
help their clients. This strengthens the
practitioners' clinical knowledge and
skills.
come in Psychotherapy," in S. L. Garfield and
A. E. Bergin, eds., Handbook of Psychotherapy
and Behavior Change: An Empirical Analysis (2d
ed.; New York: John Wiley & Sons, 1978), pp.
283-329.
3. Wodarski, The Role of Research in Clinical
Practice, pp. 8-12; and A. Rosen and E. K. Proctor,
"Specifying the Treatment Process: The Basis for
Effectiveness Research," Journal of Social Service Research, 2 (Fall 1978), pp. 25-43.
4. Three single-subject research texts widely
used in social work devote an average of fewer
than two pages to the subject of specifying interventions; none suggests verifying through independent raters whether the intervention specified
was indeed used and, if so, how often and in what
context. M. Bloom and J. Fischer, Evaluating
Practice: Guidelines for the Accountable Professional (Englewood Cliffs, N.J.: Prentice-Hall,
1982), have three pages: pp. 9 and 241-242. S.
Jayaratne and R. L. Levy, Empirical Clinical Practice (New York: Columbia University Press, 1979),
have one page: p. 115. M. Hersen and D. H. Barlow, Single-Case Experimental Designs: Strategies
for Studying Behavior Change (New York: Pergamon Press, 1976), have none.
5. J. C. Nelsen, "Issues in Single-Subject Research for Nonbehaviorists," Social Work Research
and Abstracts, 17 (Summer 1981), pp. 31-37.
6. In a single-subject study of task-centered
practice, Tolson verified her use of interventions
but did not emphasize this in the published report.
See E. R. Tolson, "Alleviating Marital Communication Problems," in W. J. Reid and L. Epstein,
eds., Task-Centered Practice (New York: Columbia University Press, 1977), pp. 100-112.
7. For further understanding of single-subject
research methodology, see Bloom and Fischer, Evaluating Practice; Jayaratne and Levy, Empirical Clinical Practice; or Hersen and Barlow, Single-Case Experimental Designs.
8. For a report of the Hollis-Mullen typology,
see F. Hollis and M. E. Woods, Casework: A
Psychosocial Therapy (3d ed.; New York: Random House, 1981), pp. 95-105 and 337-359; and