TOWARD OPTIMIZING LEARNER FEEDBACK
DURING INSTRUCTIONAL MATERIALS DEVELOPMENT:
EXPLORING A METHODOLOGY FOR THE ANALYSIS
OF VERBAL DATA

by

M. JANE CARROLL

A thesis submitted to the Faculty
of Graduate Studies and Research
in partial fulfilment of the requirements
for the degree of Master of Arts
in Educational Psychology

Department of Educational Psychology and Counselling
McGill University, Montreal
July, 1988

© M. Jane Carroll
1988
Exploring a methodology for the analysis of verbal data.
ABSTRACT

Formative evaluation researchers have not yet determined the ideal conditions for gathering superior quantitative and qualitative learner data. Part of the difficulty may be due to the fact that previous research has not examined learner feedback prior to its conversion into revisions in the instructional materials. The present study addresses this issue by developing a systematic method for comparing the verbal feedback from learners who are trying out instructional materials under different conditions. A conversational approach was used wherein each unit of discourse was coded according to its function. Patterns of discourse units were identified which captured student opinion in the context of the topics discussed and the specific questions asked. Student opinions were then qualitatively sorted into three categories central to designing effective materials. The results of a pilot application of the methodology are explained in terms of inconsistencies in procedures and design. Specifically, differences in learner feedback are attributed to influences such as: lack of pre-testing for prior knowledge; the learners' understanding of their role during materials try-out; the evaluator's style; and the dynamics of groups.
•
(
L.s chercheurs en .v~lùation formative n'on\ p•• encor.
déterminés les conditions idéales pour la collecte de
,.~
...
~,
meilleures données 'Qûant1tatives et Qualitat1ves
chez~
l'apprenant.
les 'Ç'echerches précedentes i'\' ont pas examlne •• -" le fe.b.ack"
de
l'apprena~t
avant d,
.
matériel éducatif.
~s
convertir en
r.vi.io~
d.n. la
,
La présente étude addrt!'sse c.e prob l.m."
...
'
,
,en développant une méthode peur compar~r le f.edback verbal
'J.
de5 apprenants qUl
d~fférente~
ess~yent
conditions.
la matériel
Une approche à
éducatif sou.
l'étude
conversatlons a été utillsée, et conslstalt
a
d.~
coder chaqu.
1
unité de dlscours selon 5a fonctlon.
de
'.
discou~s
~té
ont
l'a~prenant'dans
J
questions
Oes profiles d'unlté.
...
ldentifiés pour représerrter l'oplnlon d.
le contexte des sujets dlscutés et des
Les opinions des apprenants ont enSUit.
pos~es.
été assorties qualltatlvement dans les trolS catégorles
,
,
centrales au développement de matérud éducatlf ef'flcace.
Les résultats d'une étude pilote de cette méthodologle .ont
1#<
expliquéS en termes des inconsistences dans les procédur ••
des sessions d'essaies.
Les différences entre le f •• dback
des apprenants sont attrlbués
a
certaines influences tel •
.-
que l'absence d'examens préliminaires des connalssances
préalables; la compréhension qu'ont les apprenants de leur
•
rôle dans l'utilisation du matérlel éducatif en
déveloPP,ment;
groupes ....
le style de l'évaluateur; . t la dynamlqu. d ••
--
IJ
"
~,
,
.
l,
c·
iii
ACKNOWLEDGEMENTS

I would like to express my sincere thanks to my advisor Dr. Cynthia Weston, who gave generously of her time, expertise, and support throughout this entire undertaking. I am also grateful to Dr. Bob Bracewell for his invaluable advice and editorial comments, and his insistence on clarity.

I am indebted to my friends and colleagues for their assistance: particularly Maria Silva and André Renaud for translation of the abstract, Marie-Christine Busque and Klein for their statistical expertise, Denis Bédard for Mac-Wizardry, and Lynn Hannah and Alenoush Saroyan for helping me keep things in perspective and my goals in mind. My sister Suzanne Carroll deserves special thanks for her assistance with the many little things.

Finally, I would like to express my gratitude to Lorne Freedman for his encouragement, support, and just for being there when I needed him.

Preparation of this thesis was supported in part by the Fonds pour la formation de chercheurs et l'aide à la recherche (FCAR), Grant #290-10.
TABLE OF CONTENTS

                                                            PAGE
ABSTRACT ................................................... i
RÉSUMÉ .................................................... ii
ACKNOWLEDGEMENTS ......................................... iii
LIST OF TABLES ............................................ vi
LIST OF FIGURES .......................................... vii

CHAPTER ONE: Background
  Introduction ............................................. 1
  Organization of the Thesis ............................... 3
  Components of Developmental Testing ...................... 4
  Determining Effective Testing Conditions ................. 6
  Purpose of this Study ................................... 10
  Research Questions ...................................... 11

CHAPTER TWO: Review of the Literature
  Introduction ............................................ 11
  Number of Learners in a Testing Session ................. 11
  The Roles of The Learner and Evaluator
  Learner Characteristics ................................. 32
  The Testing Environment ................................. 34
  Method of Data Collection ............................... 37
  Chapter Summary

CHAPTER THREE: Method
  Introduction ............................................ 40
  Subjects ................................................ 41
  Materials ............................................... 43
  Procedure for Collecting Original Database .............. 47
  The Analysis System
  Discourse Graphing ...................................... 61
  Categorization Scheme ................................... 62
  Chapter Summary ......................................... 65

CHAPTER FOUR: Results
  Introduction ............................................ 66
  Differences Among Conditions ............................ 67
  Qualitative Categorization .............................. 86

CHAPTER FIVE: Discussion and Conclusions
  Overview of the Methodology ............................. 91
  Application of the Methodology .......................... 94
  Concluding Remarks ..................................... 103
  Recommendations for Future Research .................... 104

REFERENCES ............................................... 106

APPENDIX A  Stimulus Materials
APPENDIX B  Compilation of Guidelines for Expert Reviewers
LIST OF TABLES

TABLE                                                       PAGE
  1  Demographic Information and Post-test Results for
     the Sample ........................................... 42
  2  Dore's Codes and Definitions with Examples of
     Conversational Acts From This Study .................. 52
  3  Protocol Length by Condition ......................... 69
  4  Sources for THEMES from Each Condition ............... 70
  5  Frequency of ASPECT Types by Condition ............... 76
  6  Means and Standard Deviations for STRINGS
     Responding to ASPECT Types ........................... 78
  7  Problem Representations from Each Condition
     by Category .......................................... 87
  8  Revision Suggestions from Each Condition by
     Category ............................................. 89
"
-----
vil
c
,
LIST OF FIGURES
PAGE
FIGURE
1
2
3
Role of test .ubjects ••..•.•••...
'Role of developer/lPvaluator ••.•••••••••••••••.••.•.•• 21
A network r.pre.ent.tion of the primary
c~nver •• tional functions, ganeral classes and
particular -conversational acts in Dore's
coding sch.ma ••••••.••••••••••.•••••..•••
.
4
.. . ..... ...... · .... .21,
'-
Frequency of ASPECTS per THE ME each
c en dit i on s. • • . • • • • • • • • • • • . • • • • . . . • • . • • . • .
~
....... • •••• • 51
...... · .... . 74
SampI. dlscourse graph from the 1-1 Active'
c:ond i t ion • . . . . . . • . . . . . . . . . . . • . . . . • . • • . . . . . . . . . .
· .... .80
SampI. discoursQ graph from thé 1-1 Passlve
c: on dit ion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . &.. . . . . . . . . . . 81
7
SampI. dl.course graph from the Group Active
con d l t i on • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 83
SampI. dlscourse Qraph from the Group Passive
condition •••.••••
8
.............................. · .... . 85
CHAPTER ONE
Background

Introduction

Despite much investigation into the learning process, there exist no recipes for perfect instruction (e.g., Baker, 1973; Dick, 1980; Markle, 1967). One way to determine if instructional ingredients are appropriately combined is for instructional developers to try out their materials with members of the population that will use them. The efficacy of obtaining feedback from learners has been widely recognized by educational researchers (e.g., Gropper, 1975; Komoski & Woodward, 1985; Nathenson & Henderson, 1977; Stolovitch, 1982; Thiagarajan, 1978) and is considered an integral part of most instructional design models (Andrews & Goodson, 1980). While the goal of a tryout session remains constant--to identify the flaws in a product in order to revise the instruction and make it more efficient and effective (Dick & Carey, 1985; Hartley & Burnhill, 1977)--researchers vary markedly in the tryout session strategies which they stress (Geis, Burt & Weston, 1984).
Materials tryout is only one part of the empirical process of determining the value of educational programs or materials. Such research has been practiced since the early 1920s (Cambre, 1981). Scriven (1967) introduced the term formative evaluation to distinguish improvement activities from evaluation which occurs after completion or dissemination for general use. He describes all attempts at improving educational programs and materials during their development as formative evaluation.

Learner tryout (e.g., Henderson & Nathenson, 1977), review by subject matter or instructional design experts (e.g., Montague, Ellis, & Wulfeck, 1983), and field testing (e.g., Friesen, 1973), are all variants of the iterative test-modify-retest-modify formative evaluation cycle.

Weston (1986) reviewed the common approaches to formative evaluation of instructional materials and noted three which deal specifically with learner tryout of materials prior to general use: developmental testing, the three-stage model, and learner verification and revision.

The term developmental testing is used to describe the procedure of having learners try out materials in their developmental state and then revising instruction on the basis of learner feedback (Markle, 1967). One learner and one evaluator (e.g., Bjerstedt, 1972; Horn, 1966; Johnson & Johnson, 1975), small groups of learners with an evaluator (e.g., Abedor, 1971; Bjerstedt, 1972; Davis, Alexander & Yelon, 1974), or both conditions (e.g., Geis, Burt & Weston, 1984) are used to closely examine instructional materials.

The three-stage model proceeds from one-to-one testing, next to small group testing, and finally to a field testing condition which attempts to parallel the conditions of eventual use (e.g., Dick & Carey, 1985).
Komoski (1974) extended formative evaluation beyond product development when he introduced the notion of continuing revision, and the term Learner Verification and Revision (LVR) was coined to refer to this ongoing process. LVR is a responsible publisher's approach to ensuring that materials continue to serve the changing needs of the population that use them (EPIE Institute, 1975). Theoretically, second and third editions of instructional materials are the fruit of continuous verification with learners. LVR can be conducted with one learner or with small groups of learners (e.g., Kandaswamy, Stolovitch, & Thiagarajan, 1976).

The focus of this research is on learner tryout of materials in developmental draft form. Since the procedures for developmental testing are subsumed in the three-stage model and in LVR, inferences can be made from the exploration of variables which affect learner feedback in developmental testing to the other two approaches.

Organization of the Thesis

In the balance of this chapter the fundamental issues in developmental testing, some important terms and concepts, the purpose of the study, and the research questions are presented.

In Chapter II, a review of the recent literature relevant to the developmental testing of instructional materials is presented.

In Chapter III, the methodology which evolved to compare verbal data from different developmental testing conditions is described.

In Chapter IV, the results of the application of the conversational analysis methodology and the categorization scheme are presented. Finally, in Chapter V the implications of this exploratory research are discussed in light of the preliminary findings. Some recommendations for further research are provided.
Components of Developmental Testing

Each approach to developmental testing has unique advantages and disadvantages and each requires a considerable investment in time, money, and planning (e.g., Macdonald-Ross, 1978). Weston (1988) synthesized the many strategies for data gathering presented in the formative evaluation literature and grouped them into four components: sources, settings, techniques, and data.

Sources. In developmental testing, learners are the prime source of information concerning the effectiveness of instructional materials. Variously labelled students, subjects, or test subjects, learners are members of the target population used to try out and/or criticize draft instructional materials.

A secondary source of data during developmental testing is the evaluator. Also called the tester, experimenter, or interviewer, this individual works with the learner(s) during materials tryout. At times the evaluator may be prompted through interaction with the learner to recognize a problem or suggest an improvement.

Settings. The conformation of participants in formative evaluation is referred to as settings. The most common settings are one evaluator and one learner, or a small group of learners with an evaluator (Burt & Geis, 1986).

Techniques. The methods used to gather data from the source are the techniques of formative evaluation. Data can be gathered on-line, that is, while learners are interacting with the materials, or retrospectively, usually immediately following the tryout session. On-line data includes pretests, embedded tests and posttests, and any comments or suggestions, written or oral. Retrospective data includes learners' attitudes towards the materials (often obtained from an attitudinal questionnaire), and any information discussed during the postinstructional or debriefing interview. The intent of the debriefing is to clarify any problems flagged, that is, located, while on-line.

Data. Four varieties of data can arise from developmental testing: observations, scores, written information, and oral information. Any behavior indicative of confusion, such as a puzzled look or page flipping, noted by the evaluator, is an observation which can be discussed immediately or during debriefing. Scores are the results of pre, post, and embedded tests and are used to ascertain if the learners have met the desired criterion. Error patterns can be examined to determine if there is a problem in the instructional content. Attitudinal rating scale scores are not used to measure instructional effectiveness but can be helpful in determining the causes of problems in the instruction. Written information is comprised of any comments and suggestions noted by the learner during the tryout, and any open-ended responses to the attitudinal questionnaire. The last variety of data is oral information. Comments, problem identifications, problem diagnoses, and revision suggestions made on-line or during debriefing constitute oral information.

The combination of these components--sources, settings, techniques, and data--comprises the conditions for developmental testing.
Determining Effective Testing Conditions

The equivocal recommendations for the various strategies, and the demands on available resources, have led to a line of research which is attempting to determine whether a particular condition for data collection yields quantitatively and qualitatively superior data for revision.

Most often, the effectiveness of a particular developmental testing condition is judged by comparing subject posttest scores from the original version and various revised versions of the instructional materials (e.g., Baghdadi, 1980; Kandaswamy, et al., 1976). Studies of this nature attribute instructional improvement to a particular data collection condition, and yet the judgment is made long after the data are collected. By the time the data collected during the tryout of the original version have been converted into revisions, several subjective processes have intervened.

Learner comments and problems are subject to interpretation or evaluation by a revisor. The nature of this evaluation depends on the perspective and skill of the revisor. For example, an instructional design expert acting as a revisor might interpret a student comment about not understanding a particular section of the text as a problem in the sequencing of the instruction. The same comment in the hands of a content expert could be seen as reflecting a lack of prior knowledge on the student's part. In either case, the degree of expertise and extent of experience of each expert, and their facility at playing the revisor's role, will influence the way student comments are interpreted.

Once a problem has been identified and evaluated, the instructional materials must be altered to correct the perceived flaw. The actual change which is made depends upon the revisor's subjective ability to translate a problem identification into an effective revision. Most often the prescriptions are intuitive, as little exists in the literature to guide the revisor in converting problem diagnoses into revision prescriptions. The lack of agreed upon heuristics means not only that one revisor's revision decision might be quite different from another's, but a given revision may or may not adequately address the identified problem.

The fact that student posttest scores can be shown to increase when materials are revised is an important contribution toward the production of effective instructional materials. The question that remains unanswered is whether it is the revisor's expertise at interpreting student comments, their intuition and skill in creating revisions, or the conditions under which the student data was collected, that deserves the credit for an improved set of instructional materials.

Most frequently, the data collection condition is given this distinction, suggesting that different types of data can be collected by varying the sample quality and size. The inadequacy of this judgment lies in the fact that the data have never been analyzed in their raw form, that is, prior to their conversion into revisions. Before a decision about the worth of a particular condition can be made, the kinds of data which result from developmental testing must be described and compared.

The data collected during developmental testing can be divided into two types: quantitative and qualitative. Quantitative data includes student scores and learning times, and their responses to the rating scale on an attitudinal questionnaire. Qualitative data is defined as the oral and written information from a tryout session.

Quantitative data is readily comparable. Student test scores and learning times can be collected, and higher or lower scores or times from a particular condition signal its relative effectiveness. Attitudinal questionnaires are also used for gathering this kind of data; responses can be summarized according to the percent of learners who chose each alternative to the various questions.

The qualitative data from developmental testing sessions presents a challenge to the researcher. Written information can be summarized and content analyzed, but how can the vast amount of verbal information from a tryout session be compared?

A set of empirical rules for instructional development devised by Locatis and Smith (1972) advises evaluators to collect both kinds of data for two reasons. First, it is extremely difficult to predict in advance what kind of information will be most useful to the revisor, and second, although comparison of pre and posttest performance can lead to identifying instructional weaknesses, it is inherently incapable of delineating difficulties in a way that leads to the formulation of solutions.

The issue relevant to this investigation is the contradiction wherein many authors in formative evaluation advocate the vital importance of gathering learner comments and suggestions in instructional materials improvement (e.g., Nathenson & Henderson, 1977), and yet no systematic approach to dealing with verbal comments and suggestions that emerge from tryout sessions appears in the developmental testing literature.
Purpose of this Study

The purpose of this study was to develop a methodology that would allow the systematic comparison of verbal feedback obtained from students who are trying out instructional materials under different developmental testing conditions. Such a methodology represents a vital link in the determination of the crucial components in a developmental testing session which lead to quantitatively and qualitatively superior data for revision.
Research Questions

Considering this purpose, the specific questions which guided this research are:

1. How can the verbal data from developmental testing sessions be systematically compared?

2. What are the differences in the verbal data which emerge from different conditions for the developmental testing of instructional materials?
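Although this thesis pursues the first question qualitatively, the bookkeeping it implies, coding each unit of discourse by its conversational function and tallying those codes separately for each testing condition so frequencies can be compared, can be sketched in a few lines. This is purely illustrative: the condition names follow the figures in this thesis, but the act codes (RQ, ST, EV) and the counts are invented placeholders, not Dore's actual categories or data from the study.

```python
from collections import Counter

# Hypothetical coded transcript fragments: each tuple pairs a testing
# condition with a conversational-act code assigned to one discourse
# unit. Codes RQ (request), ST (statement), EV (evaluation) are
# invented for illustration only.
coded_units = [
    ("1-1 Active", "RQ"), ("1-1 Active", "ST"), ("1-1 Active", "EV"),
    ("1-1 Passive", "ST"), ("1-1 Passive", "ST"),
    ("Group Active", "RQ"), ("Group Active", "EV"), ("Group Active", "EV"),
]

def tally_by_condition(units):
    """Count conversational-act codes separately for each condition,
    so that code frequencies can be compared across conditions."""
    tallies = {}
    for condition, code in units:
        tallies.setdefault(condition, Counter())[code] += 1
    return tallies

if __name__ == "__main__":
    for condition, counts in sorted(tally_by_condition(coded_units).items()):
        print(condition, dict(counts))
```

Comparing the resulting frequency tables across conditions is one simple quantitative complement to the qualitative pattern analysis described in Chapter III.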
CHAPTER TWO
Review of the Literature

Introduction

The use of developmental testing techniques for the purpose of materials improvement is a central premise of systematic instructional design models (e.g., Andrews & Goodson, 1980). It is thus surprising that few heuristics exist to guide instructional developers in carrying out developmental testing. Geis, Weston, and Burt (1984) conducted an extensive literature search and found a limited amount of material on developmental testing. Authors such as Horn (1966), Abedor (1971), and Gropper (1975) contributed extensively, but there was little consensus among them or other researchers reviewed. Geis et al. (1984) compiled the assorted recommendations and sorted them into a number of categories. Some of these categories are pertinent to this review, and form a convenient framework for this discussion. The categories are: the number of learners in a session; the roles of the learner and evaluator; the characteristics of the learner population; the testing environment; and the method of collecting data.

Number of Learners in a Developmental Testing Session

Developmental testing uses individual learners or groups of subjects as sources of information. Advocates suggest that the need for rich data and the fact that the instruction is untried imply a relatively small number of students will best supply the desired information (Baker & Alkin, 1973). Two settings are commonly used in developmental testing. The first setting is comprised of one learner and one evaluator and is variously labelled one-to-one testing (Dick & Carey, 1985), the clinical method (Lowe, Thurston, & Brown, 1983), or the tutorial method (Markle, 1967). The second approach uses multiple students with an evaluator and is referred to as either small group testing (Dick & Carey, 1985), or group-based testing (Kandaswamy, et al., 1976).
The One-to-One Approach. The purpose of one-to-one testing is to identify the major problems in a set of draft instructional materials (Dick, 1977; Kandaswamy, 1980). Essentially, a single learner's responses, reactions, comments, and suggestions are recorded. It is frequently suggested that at least three students from the target population try out the materials, one at a time, to ensure that identified flaws are not the idiosyncratic preferences of a single individual (Weston, 1986). The validity of revising materials based on the feedback of a single member of the target population has been defended by Markle (1967) and Komoski (1974).

Markle refers to one-to-one testing as a "debugging operation", important because it allows the close observation of one learner responding candidly to a set of learning materials. She further states that a friendly evaluator sets the tone that permits the learner to ask questions that may appear stupid. In a group setting important instructional concerns may not be raised because of the presence of peers.

Komoski (1974) maintains that one learner's feedback can speak for other members of the same population. He suggests the "law of parsimony" guide educational researchers, as large sample learner tryouts may not lead to a superior product and may waste a great deal of time and money along the way.
Empirical support for the one-to-one strategy comes from a frequently-cited early study by Robeck (1965). He selected a single "bright" grade six boy and observed him using a short prototype instructional sequence. It is not made explicit how the student was chosen, but on the basis of this one learner's test item errors and verbal responses, a revised second version of the instructional unit was produced. The second version was tried with another student, and a third version produced. All three versions--the original and two revisions--were given to three matched groups of students. Posttest score comparisons indicated that students who received the revised versions significantly outperformed (p<.05 for the first revision, p<.01 for the second revision) students who used the original version. The difference between students learning from the first and second drafts was not significant.

Robeck did not explain how verbal responses and test item errors became problem identifications, or how identified problems became revisions. While he demonstrated that student performance can be increased significantly when instruction is altered based only on the observations of a single learner, it is unclear what contribution to the improvement was made by the characteristics of the student, the skill of the evaluator in interpretation of the student's comments, the revision decision, or a combination of all three.
The Group-Based Approach. Group testing involves a group of learners working through a set of draft instructional materials with an evaluator. Usually, written responses and test scores are collected during learning. A debriefing interview is frequently conducted to document additional explanation, opinions and suggestions.

Preference for group size ranges widely. Bell and Abedor (1977) suggest the use of 4-6 learners, Dick and Carey (1985) propose that 8-20 subjects are necessary to ensure the data is representative of the target population, and Nathenson and Henderson (1977) favor a group of 20-30 because their research indicates that groups any larger do not identify any more or different problems or generate novel solutions. They also point out that the group size they use has the advantage that a large amount of open-ended data can be collected. The fact that sophisticated measurement and analysis techniques can be applied to a larger data base appeals to many researchers (Baker & Alkin, 1973).
Some researchers prefer group testing to one-to-one testing (e.g., Baker & Schutz, 1977), and others insist that both be conducted and that one-to-one occur first. Some believe the effectiveness of the process diminishes in larger groups because the evaluator is unable to record the comments and reactions of more than one subject simultaneously. Other researchers seem to hope for a cumulative effect: if one subject yields interesting information, will two yield twice as much and three triple the amount? The combined pressures of executing a cost-effective developmental testing session and determining which conditions yield the most and the best data have led to a few studies which compare small group and one-to-one testing.
Comparison of the Two Approaches. Wager designed a developmental testing experiment addressing, in part, the issue of the optimum number of subjects for producing the most useful data for revision. Five revised versions of a set of instructional materials were produced. Three of the revised versions came from one-to-one sessions with individual students. The fourth version was produced with input from a heterogeneous small group of students. The last set of material was the product of one-to-one developmental testing with high, average, and low aptitude students followed by a small group formative evaluation and revision.

The researcher reported that students who used materials revised according to feedback from the one-to-one sessions with the mixed aptitude group scored as well on a posttest as students who used materials revised according to feedback from a combination of one-to-one and small group sessions. The materials revised according to feedback from one-to-one sessions with the mixed aptitude group produced significantly higher posttest results than materials revised according to feedback from a small group session alone.

Wager incorporated several precautions aimed at ensuring revisions were appropriate and not arbitrary.
Research-based guidelines and theory-based principles for revision were employed. Graduate students in an advanced course on formative evaluation were selected and randomly assigned, in teams of four, to review the learner feedback and the revision decisions made by the author. However, the researcher did not describe precisely what constituted the formative evaluation data. She refers to learner feedback but does not make clear if this feedback consisted of test scores, written information, oral comments, or a combination of the above. Furthermore, how the data was judged "unique or common" across conditions is unknown since no system of comparison was described.

Kandaswamy, et al. (1976) conducted an experiment to compare the effectiveness of one-to-one and small group conditions.
c
These res.archers varied the aptitude
level
of
the subjects and ravi.ad the materials according te
po.ttest data and comments made during
that
l.ndividuAl
learning.
The notion
evtlluators or reVl.sors may interpret the
•
17
sam. data differently And ther.by Alt.r th. instruction
according to their own insights WAS r.coQnized.
~ntroduced
stepi were
Sev.ral
d~ff.r.nti~l
to control for the
eff~ctl.Veness of revl.s,~ons made b:, different r~v~sors.
served as' their own control by applYlng
evaluator/rev~sors
both one-to-one and small
..
mater-ials.
group~LVR
to the
orlg~nal
Revisors were instructed to treat .ach
independently and to
ba~e
aIl
modif~catl.ons
immedl.ately ~vallatile learner data.
in which the
Th.
rev~sors
applied LVR t
rev~.ion
on the
In addl.tion, the order
that is one-to-one first,
group second, or the reverse, was treated as a factor ln th.
design.
Revised versions of the draft materials were randomly assigned to matched groups of target learners. Posttest scores were used as the index of effectiveness. The results demonstrated that the one-to-one approach resulted in revisions to a set of instructional materials which were at least as effective as revisions from group-based methods. The order in which the evaluator/revisor applied the two LVR methods to produce two revised versions did not influence the effectiveness of the revisions.
In a similar study Baghdadi (1980) investigated, in part, the differences between one-to-one and small group testing. A draft instructional unit was revised according to feedback from four one-to-one sessions and four small group sessions. The revised versions were randomly administered to members of the target population. Differences in performance were documented between the one-to-one and small group approaches.
In both these studies the researchers attempted to compensate for differential effects due to various evaluators by having the evaluators serve as their own control. Evaluators revised the original materials twice: once according to feedback from a one-to-one session and once according to feedback from a group-based session. Thus, the focus was on the effectiveness of revisions rather than the revisors themselves. These studies are important in the ongoing effort to determine the critical features in cost-effective learner tryout. Unfortunately, the differences in the learner data prior to interpretation and revision were not documented or reported.
Controlling the confounding influence exerted by the revisor is an important contribution to isolating the crucial features in developmental testing. In studies like these it appears that feedback from a particular setting is responsible for the outperformance of one version over another. However, we still do not know how the data differs across conditions. Is it merely that one condition delivers more information, or is there a qualitative difference in learner feedback obtained under different conditions?
Selecting An Approach. To date, the formative evaluation literature has not established the superiority of either approach to developmental testing. Both methods apparently contribute to the production of more effective learning materials. A survey of instructional development practitioners conducted by Burt and Geis (1986) reflected this uncertainty. No significant difference in the use of one-to-one followed by small group testing or just small group procedures was found in practitioner reports.
The rationale for the use of the one-to-one approach appears to be that more than one student at a time will overload the evaluator and impede the detection of instructional deficiencies. On the other hand, the use of the group approach is based on the assumption that it will generate a broader data base and that it avoids revision on the basis of an atypical student.

In the interest of improving the effectiveness of a set of learning materials, instructional developers want to select the testing strategy that will produce the most useful learner feedback. The inconclusive recommendations for the various strategies and the confounding influence of the revisor's input combine to make this selection difficult. In order to know more about the data produced under different conditions, researchers must strip away the confounding influences of interpretation and revision, and investigate the influences of the many variables on a developmental testing session.
The Roles of The Learner and Evaluator
The literature on developmental testing yields a wide array of roles for both the learners and evaluator during tryout sessions. Geis et al. (1984) have conceived the roles of each of the participants as corresponding continua. Both learners and evaluators make a transition from passive interactors with the materials and one another, to active and reactive partners in the instruction. Figure 1 represents a continuum of proposed roles for learners during developmental testing and Figure 2 conceptualizes the roles of the evaluator.
At one extreme the test subject is treated as a traditional student while on-line. This type of learner receives the instructional material and works through it independently (e.g., Markle, 1978; Wydra, 1980). Although not indicated in the model, a pretest is probably written in order to determine the entry skills of the learner. A posttest designed to determine how effective the instruction has been in terms of knowledge gain is administered after the session. Retrospective discussion or debriefing is indicated as optional. Corresponding with this test-subject-as-learner role is the administrative role for the evaluator: giving instructions, time keeping and invigilating the posttest. If a debriefing discussion is included it would presumably be guided by posttest errors and learner generated comments.

A less extreme passive subject is characterized as learning the material, making comments and asking questions if necessary during the on-line portion of the session. A posttest is indicated for this type of learner.
Merrill and Tennyson (1977), for example, suggest that students write down the problems they have and rate their confidence in the answers they give on the posttest. The evaluator acts as an observer, recording observations and answering questions.
[Figure 1 depicts the continuum of test subject roles, from passive to active: the test subject as learner, who learns the material as best he can, then writes a posttest, with an optional debrief; the subject who learns the material, making comments and asking questions if necessary, followed by a posttest and debrief; the test subject as critic, who learns the material while actively commenting and critiquing, followed by a posttest and debrief; and the subject who tries to learn the material, thinking aloud and suggesting improvements, followed by an optional posttest and a debrief.]

Figure 1. Role of test subjects. From "Procedures for developmental testing: Prescriptions and practices" by G. Geis, C. Burt, and C. Weston, 1984. Manuscript submitted for publication.

[Figure 2 depicts the corresponding continuum of evaluator roles, from passive to active: administers the test session; passive observer, who records observations and may answer questions; active listener, who probes for difficulties and suggests revisions; tutor, who revises and remediates, teaching where necessary; and conducts the debriefing.]

Figure 2. Role of developer/evaluator. From "Procedures for developmental testing: Prescriptions and practices" by G. Geis, C. Burt, and C. Weston, 1984. Manuscript submitted for publication.
The continua are not explicit about the intent of the debriefing but it would appear that posttest errors and any observational data the evaluator recorded during the test session could be discussed.

A slightly more active condition would see the subject learning the content while simultaneously identifying problems and critiquing. The evaluator is expected to probe for difficulties and make revision suggestions. For instance, Friesen (1973) recommends that students be encouraged to pinpoint the cause of difficulties whenever the evaluator notices a problem. A posttest is included in this condition but error analysis would not be a reflection of the instructional materials alone. Presumably, researchers would exercise caution in data interpretation. Debriefing is also a component of this condition but discussion during the testing session may make a post instructional interview redundant.
At the active end of the continuum the test subject is seen as a critic of the instructional materials (e.g., Bjerstedt, 1972; Thiagarajan et al., 1974). Subjects are encouraged to make comments and suggest revisions as they attempt to learn the content. The continuum is not explicit about the use of a posttest in this condition, but it would appear that it is not a crucial component as the interaction, discussion and clarification provided by the evaluator would invalidate the posttest results as a reflection of learning from the materials alone. Debriefing in this condition would presumably take place during the session rather than afterwards. The evaluator acts as a tutor (e.g., Dick & Carey, 1985), interacting with the test-subject-cum-critic and remediating as necessary. The tutor/evaluator contrasts with the administrative supervisor at the passive end of the continuum.
Empirically based mandates for the roles of the test subject and evaluator have not been agreed upon as yet. This may be due to the fact that it is not yet known how the roles of the evaluator and subject affect the kinds of data which result from developmental testing.
We can infer that if the investigators simply require that a percentage of subjects successfully complete a posttest, then the supervisory evaluator role paired with the traditional student role is sufficient to deliver that data (e.g., Baker & Schutz, 1977; Bell & Abedor, 1977; Gropper, 1975). If, on the other hand, the design team is interested in knowing precisely where in the instruction students have difficulty, clinical probing by the evaluator in conjunction with critical appraisal by the test subject is more appropriate (e.g., Deterline & Lenn, 1972; Horn, 1966; Johnson & Johnson, 1975; Merrill & Tennyson, 1977).

The researcher who chooses a more passive condition to conduct testing will easily be able to accommodate small groups of learners. Indeed, testing with only one subject in this condition is a waste of precious resources. Similarly, grouping learners at the active end of the continuum is contraindicated by the demands on both the evaluator and the learners.
The function of the postinstructional or debriefing interview varies considerably across the continua. Toward the passive end, subjects who act as typical learners while on-line are called upon to reflect on their experience, retrospectively critique the materials and explain their opinions. Toward the active end of the continuum, subjects who comment during learning have the opportunity to change and augment their opinions during debriefing. The evaluator actively solicits suggestions during debriefing regardless of the student's role while on-line.
Learner comments and suggestions are the data of interest in this investigation. In passive conditions any verbal comments and suggestions the learners might have will be collected retrospectively during debriefing. In active conditions learners are free to comment on-line and during debriefing. Some cognitive science research suggests that retrospective data can be assumed to increase the potential for distortion and loss of information (Ericsson & Simon, 1984). Conversely, data collected during learning can be assumed to be a more accurate reflection of a test subject's actual difficulties and impressions. Medley-Mark and Weston (in press) addressed this issue in a pilot study which compared quantitative and qualitative data from both one-to-one and small group developmental testing sessions.
These researchers reasoned that requesting subjects to think aloud while trying out instructional materials might minimize the potential distortion and loss of information resulting from the retrospective report of problems. They reported that the one-to-one think aloud condition identified the highest frequency of problems of both a general and detailed nature. A small group in which both active on-line (but not think aloud) and retrospective data were collected identified the second highest frequency of problems. A second small group, in which problem identification occurred retrospectively only, uncovered the fewest number and the least detailed problems. The authors suggest that these results support the notion that on-line data minimizes distortion and loss of information. The assumptions from cognitive science and the findings from Medley-Mark and Weston are preliminary. However, this investigation makes an important contribution by describing the nature of student data collected under a variety of conditions.
Relationship Between The Evaluator and The Learner. The roles of the participants and the relationship between them during a tryout session have a potential impact on the data gathered from learners.
Interview research in social psychology has explored experimenter effects, that is, attributes of the experimenter that influence a subject's response. Adair (1973) reviewed the experimenter effects literature in a variety of research contexts and found that bias has been demonstrated as a result of influences from four areas: attributes of the experimenter such as religion and physical appearance; personal traits of the experimenter such as anxiety, hostility, authoritarianism, and need for social approval; situational effects such as the warmth of the experimenter/subject relationship and the appearance of the laboratory; and finally, the experimenter's expectations for certain results, which may lead to certain experimenter behaviors which unintentionally influence the subject's behavior.

Few authors or studies have investigated experimenter effects on the quality of subject data generated from developmental testing sessions.
Dick and Carey (1985) suggest that the learner is rarely placed in as vulnerable a position as when asked to play critic to instructional materials. Typically, learners will assume that failure with the materials reflects their own ignorance. These researchers call on the evaluator to convince the test subject that it is a legitimate endeavor to criticize instructional materials. The facility of the evaluator in doing so potentially influences data quality.
Horn (1966) offered common sense suggestions to evaluators to help them put subjects at ease. Advice such as telling the subjects that the materials are being tested, not them, may be helpful in getting test subjects to think of themselves as critics, but empirical evidence of the effectiveness of these admonitions has not been investigated. Further, if such suggestions do affect the mind set, and thereby the quantity and quality of feedback from the subject, how explicit should the directives be?
The Learner's Role. Research from cognitive science suggests that in order to perform a task, a person must have a definition of the task to be performed (Hayes, Flower, Schriver, Stratman, & Carey, 1987). Two additional points about task definition should be noted. First, individuals may modify their task definitions over the course of a task. Second, the definition of a particular task varies from person to person.
The issue of task definition has an impact on this investigation because learners in a developmental testing session are asked to undertake several tasks simultaneously. They are expected to detect and diagnose problems and, if possible, suggest revisions. At the same time, they are required to learn content, and are cognizant of the fact that there is a test at the end of the session. How they define their role and prioritize their tasks during developmental testing has a potentially profound effect on data. For example, subjects who concentrate on studying in order to pass a test may comment very little on inadequacies in the materials. Conversely, subjects who focus on critiquing may not score well on a posttest. Research addressing this issue is scarce. Baker (1972) and Geis (1988) have advocated the notion that learners need preparation for their role as subjects in developmental testing.
Baker investigated the effect of giving test subjects a particular set of instructions. In one study, half of the subjects were told before they began to try to identify specific problems in the materials. The remaining test subjects were told only after completing the materials. A second experiment duplicated this procedure but had a professor deliver the instructions to the test subjects. Investigators found no significant differences between conditions in either study. They concluded that having subjects keep revision in mind, without having specific categories of information to look for, is not a powerful enough treatment to produce more frequent and useful revision suggestions.
Geis argues that it is presumptuous to assume that learners are skilled at detecting when they are having trouble learning. Regardless of admonitions and the structuring of an accepting environment, he suggests many students may not possess this diagnostic ability. Most students' school experience has not prepared them to shift the blame for lack of learning from themselves to the instructional materials. In support of this notion he states that students frequently make no comments while learning and few comments during debriefing. Examining learner comments and suggestions before interpretation by an evaluator or revisor may shed light on how learners define their role as test subjects.
The Evaluator's Role. Baker and Alkin (1973) briefly addressed questions concerning the impact of the evaluator's role on learner data. Specifically, they investigated whether developmental testing conducted by a noninvolved individual rather than an instructional design team member affected learner feedback. These authors found that some researchers favoured the independence and objectivity of an agent external to the design team. Scriven (1972), for example, advocated "goal-free evaluation" wherein the evaluator operates without knowledge of the instructional developer's claims for their product. Other authors, such as Stufflebeam (1971), viewed the choice of evaluator as secondary to ensuring that evaluation served the purpose of providing the kind of information necessary for decision making.
Dick (1980) argues that choosing whether an instructional developer or an evaluator carries out formative evaluation is an organizational or administrative question, not one of research interest. He prefers to focus on what happens during an evaluation. However, more recently, Dick and Carey (1985) suggest that designers pretend the materials were developed by other instructors and think of themselves as uninvolved evaluators merely carrying out an evaluation. While stressing that learners should not be misled as to their true role, they suggest that evaluators adopt a noninvolved psychological set. If developers value the objectivity that an external agent brings to developmental testing, then using an uninvolved evaluator would better suit their purposes. Subterfuge is difficult to control across evaluators and conditions and can confound results.
Geis (1988) presents four candidates as possible developmental testing evaluators: the author, a subject matter expert, an instructional design expert, or an individual skilled in conducting interviews. The author is often discounted as an evaluator because of the likelihood of bias and defensiveness. Geis proposes that the author's unique knowledge of the instructional intent might allow for probing and interpretation that noninvolved evaluators could miss. Subject matter experts as evaluators could explore the knowledge structure of the learner, but Geis questions their ability to recognize instructional difficulties. He cautions that explanations and suggestions be recorded or important revision information may be lost.
The instructional design expert is qualified to pursue pedagogical and instructional variables during developmental testing, but would probably not be skilled in spotting difficulties with the structure of the content. The final candidate for the evaluator's job is the skilled interviewer. This person could provide a goal-free look at the testing session and would not be tempted to teach the content. However, Geis points out that in depth probing of problems may be lost because the individual lacks the insight necessary to identify when such a route is appropriate. While many of Geis's arguments appear reasonable, they have yet to be backed by a sound empirical base.
The issue of who should serve as evaluator remains open, as does the question of the package of skills necessary for an evaluator to conduct a testing session. While the former issue is beyond the scope of this study, the latter is addressed in part by research into interviewer behavior.
Schuman and Kalton (1985) reviewed a number of social psychological studies that have shown that interviewers make more errors in reading questions than is commonly realized. Moreover, the errors have a tendency to increase with more experienced interviewers. Interviewers were also found to frequently give indiscriminate feedback to respondents in the form of positive reinforcement (e.g., "Mm-hm, I see," "That's interesting," "All right."). Thus, respondents were reinforced as much for undesirable behavior (such as refusal to answer a question) as they were for desirable behavior. In their attempt to maintain a good relationship with the subject, interviewers gave inappropriate feedback at the wrong time.
These findings indicate the complexity of the evaluator role during developmental testing. On the one hand, the evaluator is expected to be skilled in establishing rapport and to be aware of, and avoid, interviewer bias. Yet evaluators take liberties in rewording and rearranging questions and probing as they see fit. Developmental testing research has attempted to control for the influence that revisors exert on comparative studies of testing conditions (e.g., Kandaswamy et al., 1976), but evaluator variance has not been empirically addressed.
Moreover, attempts at comparing results from different researchers are thwarted when the influence of the evaluator is unreported or unknown.

Researchers in instructional development have long been attempting to determine if there are particular learner characteristics and aptitudes which correspond to optimal feedback from developmental testing.
Aptitude. Englemann (1983) argued that strategies for gathering feedback from students are based on two assumptions. First, the primary focus of all information gathering must be on failures, not successes, and the data gathered must permit the investigation of the cause of failure. To investigate failure, he argued, evaluation must yield qualitative data. Knowing that students achieved a particular score on a posttest serves only as a signal to secure process information. The second assumption is that in addition to making all the mistakes that higher performers make, lower performers make additional mistakes that higher performers tend not to make. If this is so, lower performers should provide the best information about instructional problems. Englemann does not support his theory with empirical data, but some backing comes from a study by Wager, who found that varying student aptitude in one-to-one testing sessions produced different types of feedback.
While high aptitude students pinpointed inaccuracies and detailed problems, low aptitude subjects identified more basic problems. Low aptitude subjects also gave few revision suggestions. When data from high, medium and low aptitude one-to-one sessions were examined together, a greater variety of types of feedback was produced than from the single aptitude conditions. Wager did not report how the student comments and suggestions were compared. Debriefing took place, but specific procedures were not delineated.
Feedback from learners is necessary in order to make effective revisions. Revisions are only effective if the changes better serve the needs of the population which will be using the materials. For this reason representative sampling is recommended by several researchers (e.g., Davis, Alexander & Yelon, 1974; Dick & Carey, 1985). Henderson and Nathenson (1977) added that where the most important learner characteristic is heterogeneous, pretesting on the material's objectives is necessary to ensure that the sample is stratified with respect to this important characteristic.
Some researchers ask simply that the test subject be a member of the target population (Johnson & Johnson, 1975; Merrill & Tennyson, 1977) and assume that common sense will be exercised in subject selection. Traits such as high motivation, strong verbal ability, and enthusiasm are often cited as desirable qualities for test subjects (Bjerstedt, 1972; Markle, 1978; Thiagarajan, Semmel, & Semmel, 1974), yet systematic pretesting for these attributes is not carried out. The completion schedules demanded of instructional design efforts suggest that determining the most expedient way of identifying appropriate test subjects be empirically investigated. Appropriateness must be judged in terms of the kind of feedback desired from the learner.
Test Subject Motivation. Motivating subjects to assume a particular role that will yield usable data is sometimes difficult. Financial incentives are the most popular subject motivator. Nathenson and Henderson (1977) assert that subjects for developmental testing must be motivated in the same way as the target population. Their experience has shown that subjects devote 30-50% less time to learning materials than do real students and that subjects drop out more frequently than real students. Brown (1978) reported developmental testing subjects didn't behave like real students and were generally unmotivated towards the task. His solution was to give subjects academic credit for the materials they tested. Henderson and Nathenson (1977) have also had success with this remedy but it is not appropriate in all cases. The impact of test subject motivation on the quantity and quality of data which emerge from developmental testing warrants further investigation.

The Testing Environment
Another potential influence on student data is the developmental testing environment. Some authors suggest the testing setting should correspond as closely as possible to the natural conditions in which the materials will be used (Baker & Schutz, 1977; Henderson & Nathenson, 1977). Others recommend an informal atmosphere and a quiet room (Bjerstedt, 1972; Horn, 1966). While reproducing the natural setting sounds logical, the learner may produce more useful data when relaxed in an undisturbed setting (presuming the two are different). Documenting differences between quality of data across settings would add important information to developmental testing research.
Method of Data Collection
The method of collecting data has serious implications for both quality and quantity of feedback. In particular, the relationship between the data collection procedure and roles assigned to the learner and evaluator (Figures 1 and 2) will be examined.

Toward the active end of the continua the evaluator is expected to probe for difficulties and suggest revisions to a learner who is asked to question and critique the materials. As previously pointed out, a posttest in this kind of condition is not a reflection of learning from the materials alone and cannot be used as a basis for revision decisions. Information that is potentially of use to the revisor can be collected in a variety of ways. The most common of these are notes on behavior, attitudinal surveys, comments or revision suggestions written onto the text or on a separate comment sheet (by either the learner or the evaluator), and audio tape recording of both on-line and debriefing comments.

As the number of learners in a testing session increases, the emphasis on interaction between the evaluator and learner decreases and posttest scores become more useful. Learners are expected to note their problems and comments for later discussion. Thus, the debriefing session increases in importance as a source of useful revision information. This condition is represented towards the passive end of the learner/evaluator continua (Figure 1).
Intrinsically tied to the data collection procedures is the facility with which the participants play the roles scripted for them. As previously discussed, motivation, test subject characteristics, and of course passive or active interaction with the materials and one another, all have an effect. If a test subject is insufficiently motivated to write down problems or alternate sequences for instruction, potentially useful information may be lost. Evaluators may fail to put the test subjects at ease, or despite a presession disclaimer that the materials were being tested, not the student, subjects may still view the developmental testing session as somewhat threatening and thus not freely comment upon or criticize the materials.
Researchers acknowledge the relationship between participant roles and the issue of quantitative versus qualitative data collection. When posttest performance is used to measure the effectiveness of the materials, the more appropriate developmental testing conditions are toward the passive side of the continua (e.g., Abedor, 1971). When verbal data is of interest, data collection is facilitated by structuring the developmental testing session conditions to match those typical at the active end of the continua (e.g., Johnson & Johnson, 1975).
In this investigation, verbal feedback is the data of interest. While scores or rating scale responses lend themselves to statistical comparison, verbal information resists quantification. Researchers attempting to gather a data base rich in student problem explication and opinion face the arduous task of condensing the information into a meaningful summary. To date, the developmental testing literature has not addressed this issue.
Chapter Summary

Based on the argument presented in the previous chapter that determining the value of a particular condition for collecting learner data is contingent on examining the data prior to its conversion to revisions, this chapter reviewed the components of a developmental testing session and how each variable potentially affects learner feedback. The differential types of data which might be collected from either the one-to-one or group-based approach were explored. A pair of corresponding continua were used as a framework to discuss the roles of the learner and the evaluator during materials tryout. It was suggested that the lack of research-based guidelines for assigning particular roles is related to the fact that it has yet to be determined how varying the roles affects the data which emerge from developmental testing. Inconclusive research was also presented regarding the impact of learner characteristics and the setting for conducting materials tryout on the quantity and quality of learner feedback. Finally, the relationship between the different methods of collecting learner data and the kind of information gathered was discussed.
Introduction
The present study used an exploratory approach to develop a method of analysis for examining developmental testing interviews. The objectives were to develop a system that would allow quantitative and qualitative comparisons across different developmental testing conditions, and to conduct an exploratory application of the system to a sample of developmental testing interviews.

The developmental testing of a microbiology unit provided the data base for this research. The evaluator in the original project varied the size of the tryout group and his own role during the on-line component of the developmental testing sessions. He chose four of the most recommended conditions for trying out instructional materials with learners: small group with an actively involved evaluator; small group with a passively involved evaluator; one subject with an actively involved evaluator; and one subject with a passively involved evaluator.

The data analyzed in this study were taken from this earlier formative evaluation project. The original data consisted of the audiotaped recordings of the developmental testing sessions, the scores from the posttests, and the responses to an attitudinal survey. An example from each of the four conditions was selected at random and analyzed in this study. As background, the data collection procedure for the entire corpus of data is described below.
Each subject in the original study was presented with a 12 page stimulus booklet. The cover sheet of the booklet briefly described the purpose and procedures of the developmental testing session. A background information page polling mother tongue, age, years of schooling and previous related course work followed the cover sheet. The instructional materials were presented next. The materials were in rough draft form and consisted of a 5 page self-instructional unit and a 5 item posttest from a print module called "Microbiology", a book in the series Instructional Materials for the Dental Health Professions (Dental Auxiliary Education Project, 1982). Finally, a 27 item attitudinal questionnaire concluded the booklet. The stimulus booklet can be found in Appendix A.
Instructional Materials. The instructional unit is titled "Microbiology Related to Sterilization and Disinfection." This title appears on each page followed by a subtitle pertaining to the specific contents of each page. The unit begins with an introduction and then states five instructional objectives. A definition of microbiology is provided and four general classifications of microorganisms are presented. Each of the four types of microorganism is described, and one, bacteria, is accompanied by line drawings illustrating its three basic shapes. The unit ends with a brief discussion of five general laboratory techniques used to study microorganisms.
Posttest. The posttest is comprised of two multiple choice items, two fill-in-the-blank items, and one f...
Attitudinal Questionnaire. The attitudinal questionnaire is based on one developed by Abedor (1972) for use with multimedia instructional materials. The survey was adapted to relate specifically to print materials. Of the 27 items, 23 require the subjects to respond to statements on a 5-point Likert-type continuum ranging from strongly agree to strongly disagree. The remaining four items are open-ended.
Subjects
First year nursing students were selected on the advice of a curriculum development specialist at McGill University. The teaching staff and chairpersons of two nursing programmes reviewed the materials and judged them suitable for first year students.
Subjects in the evaluation were nursing students at two Colleges d'Enseignement General et Professionnel (CEGEPs) in Montreal. From a pool of volunteers, the evaluator randomly chose 36 subjects from each CEGEP (N=72). They ranged in age from 17 to 40 years with a mean age of 21.6 years. Eleven mother tongues were represented, and previous years of schooling ranged from 11 to 18 years. Prior knowledge about microbiology was widely varied. Each participant received a $50.00 stipend. Table 1 provides demographic information and posttest results for the subsample of subjects whose verbal feedback was analyzed in this study.
Table 1

Demographic Information and Posttest Results for the Sample

Condition      Mother    Age  Years of   Previous Courses          Post-
               Tongue         Schooling                            test

1-1 Active     Indian    17   11         Chem 462, 562             57.5%

Group Active   English   28   13         Bio 911; Intro Bio        100%
               English   22   13         Human Bio I, II;          100%
                                         High Sch. Chem
               English   23   13         Human Bio I, II;          57.5%
                                         Chem 512
               English   17   11         High Sch. Bio; Chem 512;  100%
                                         Human Bio I
               English   19   13         Bio 501; Chem 101;        100%
                                         Bio I, II; Human Bio I, II
               English   18   13         Bio I, II; Human Bio I;   100%
                                         Human Physiology;
                                         Organic Chem

1-1 Passive    English   23   13         Human Bio I, II           87.5%

Group Passive  English   22   14         High Sch. Bio;            62.5%
                                         High Sch. Chem;
                                         Human Bio I; Microbiology
               English   40   11         Human Bio I;              75.0%
                                         Microbiology
               Greek     24   17         Human Bio I;              87.5%
                                         High Sch. Chem;
                                         High Sch. Bio
               English   2?   13         Intro Chem; Bio 101       87.5%
               French    21   15         Bio 301; Chem 101, 111    87.5%
               English   31   16         Intro Bio; Organic Chem;  100%
                                         Inorganic Chem
Procedure for Collecting Original Database.
Subjects from each group of 36 were randomly assigned to a condition. Active and passive refer to the roles played by the evaluator and subjects in the developmental testing sessions. In the active conditions the subjects worked through the instructional materials, and the evaluator probed and helped identify problematic portions of the text. In the passive conditions the subjects worked through the materials independently. The same evaluator conducted eighteen one-to-one active sessions (N = 18), eighteen one-to-one passive sessions (N = 18), three small group active sessions (N = 18), and three small group passive sessions (N = 18).
All participants were instructed to leaf through the package and take note of the posttest and attitudinal questionnaire. The instructions on the cover sheet of the stimulus booklet were read aloud by the evaluator while the participants followed along. This paragraph stated: (a) that the purpose of the session was not to test how well they learn, but how well the materials teach; (b) that learner feedback on the effectiveness of the materials is important; and (c) the procedures for the condition in which the subject was participating. The evaluator explained that on completing the booklet there would be a short break, after which they would discuss their responses to the materials in a debriefing interview.
All the debriefing interviews were audiotape recorded, and the on-line component of the active conditions was also taped. Because there was no verbal interaction during the passive on-line sessions, these conditions were not recorded.
The specific instructions and procedures in the different conditions of the developmental testing sessions were as follows:
One-to-one active. In the one-to-one active condition the subjects were told to go through the materials as best they could, and if either the subject or the evaluator perceived that there was a problem with the materials, it would be discussed during the session. Participants were instructed to highlight any problematic portions in the text. The evaluator provided no instructional assistance, but actively probed for difficulties at the end of each page, and watched for nonverbal behavior such as rapid page turning or fidgeting, which might indicate the subject was having difficulty. When a problem area was flagged, the evaluator asked the subject to explain the nature of the difficulty, and to suggest, if possible, a revision correcting the problem so that another student would not encounter the same difficulty. The evaluator or subject would then write the explication and suggestion next to the flagged section.
One-to-one passive. Subjects in the one-to-one passive condition were instructed to go through the material as best they could, to highlight any problematic portions in the materials, and to write down the problem and suggest a revision next to the controversial section. The evaluator left the subject alone in the room to complete the task.
Small group active. The instructions given to the subjects in the small group active condition were similar to those in the one-to-one active condition. The only differences were that the six participants were asked not to talk to each other during the session, and to raise a hand if they encountered a problem in the materials. The evaluator did not probe page-by-page, but watched for nonverbal cues which might indicate a subject was experiencing problems with the text, and inquired as to the difficulty when this occurred.
Small group passive. In the small group passive condition the subjects received the same instructions as subjects in the one-to-one passive condition. Like participants in the small group active condition, these subjects were asked not to talk with one another. The evaluator remained in the room but did not interact with the subjects while they worked through the materials.
Debriefing agenda. Once the subjects had completed the on-line component, they were given a short break to allow the evaluator to construct an agenda for discussion during the debriefing interview. The agenda was drawn from three sources: (a) comments made on the materials while on-line, (b) errors on the posttest, and (c) negative responses on the attitudinal questionnaire.
In the one-to-one conditions, all comments on the materials and errors on the posttest were put onto the agenda for discussion. Any response to the rating scale items that was two points away from the ideal response was also placed on the debriefing agenda.
For the small groups, the evaluator set a criterion of agreement between two or more subjects before a problematic item would be put on the debriefing agenda. That is, to be discussed during debriefing, at least two subjects had to have made a comment about the same section in the materials, have made the same error on the posttest, or have responded unfavorably to the same open-ended attitudinal questions. The groups' mean response to a Likert-type scaled item had to be two points away from the ideal response to be discussed in the debriefing interview.
The format for debriefing was the same for all conditions.
The evaluator explained the purpose of debriefing as an opportunity for the subjects to retrospectively reconsider their comments and answers. The criteria for selection of agenda topics were briefly outlined. Problems relating to the instructional text were discussed first. Subjects were given the opportunity to elaborate on the comments they had made while on-line.
For posttest items the evaluator asked the subjects to consider if their error was related to the wording of the item or to the section in the unit providing instruction for that item. If the difficulty did not lie in the material, the evaluator proceeded to the next agenda item. Otherwise, the subject(s) were asked to specify the difficulty and suggest a resolution.
The final component of the debriefing interview was based on the attitudinal questionnaire. Subjects were given the opportunity to elaborate on their unfavorable responses, change their answers, or suggest remediation.
The Analysis System
Overview. Due to its methodological purpose and exploratory orientation, this study had two phases: (a) the development of the system for analyzing verbal data, and (b) the quantitative and qualitative comparison of verbal data resulting from different developmental testing conditions.
The methods of analysis were a multistep, detailed series of analyses which served as a heuristic device. In order to compare developmental testing conditions, and make inferences from these comparisons, a precise analysis and structural representation of what occurs during a tryout session is necessary.
Since an interview is essentially a conversation with a purpose (Dexter, 1970), a conversational analysis was carried out on the interview data. The audiotaped developmental testing interviews were transcribed and segmented into conversational units. Each unit was coded according to function and content, and specific patterns of spoken discourse units were identified. Since discourse is concerned with the functional use of language, the patterns of discourse units were specified according to context and their relationship to previous conversational units. The product of this phase of the analysis was a series of discourse graphs which symbolize the patterns of interaction within the developmental testing conversations. Each step in this analysis system is described in detail below.
Transcription. The debriefing interviews were transcribed verbatim from the audiotaped recordings using a Sony transcriber and the Volkswriter 3 word processing package on an IBM personal computer. All transcriptions were checked for accuracy by a second transcriber. The transcriptions are called protocols, the term given to verbal reports of performance.
Segmentation. The protocols were segmented into conversational units. These conversational units or acts correspond most nearly to the grammatical unit of clause. While grammar is concerned with the formal properties of an item, discourse focuses on the functional properties, that is, the pragmatics, or the use an individual is making of an item. As such, a conversational act (C-act) is a unit which conveys both a proposition and a speaker's attitude toward that proposition (Dore, 1979).
In grammatical terms, a clause typically contains a tensed or conjugated (finite) verb. In terms of conversational function, a single clause usually corresponds to a single C-act. For example:
What is the problem in that section?
I have your suggestion on how to improve it.
And that's how you should study.
Sometimes, however, several clauses accomplish one C-act. For example, the function of responding to a request was fulfilled by three clauses:
1. I would look at this overview as an introduction,
2. ...
3. and I didn't find that.
Clause fragments also function as C-acts. For example, the flow of conversation is regulated by openings, closings, and shifts, marked by single words such as Hi!, Bye!, or Alright.
Recognizing C-acts of varying lengths was accomplished by analyzing the conversation according to shifts. A shift was defined as a change or transfer from one function to another. A normative feature of American English conversation is that at least one and not more than one party speaks at a time. When speaker change recurs, each change is a turn. At times, speakers take extended turns, and several C-acts can be embedded in a single turn. In the following example each new line during the subject's turn indicates a new C-act:
Evaluator: How do you study?
Subject: I go back when I read.
Like I'll read a certain section and I'll go back
but it varies.
In general, the C-act serves as an identifiable unit that can be categorized according to the function it performs in conversation.
Conversational coding. A system of discourse analysis developed by John Dore (1979) was applied to each protocol in order to delineate the pattern of interaction within the developmental testing conditions, and to provide a database for comparison in the analysis across conditions. Dore's approach is useful for the analysis of developmental testing discourse because of its comprehensive conversational categories and systematic, objective coding schema.
The six general conversational classes in Dore's taxonomy are: requestives (solicit information or actions), assertives (report facts, state rules, convey attitudes, etc.), performatives (accomplish acts and establish facts by being said), responsives (supply solicited information or acknowledge remarks), regulatives (control personal contact and conversational flow), and expressives (nonpropositionally convey attitudes or repeat others). Each of these classes is made up of particular conversational acts which code for a specific discourse function within the general class. Figure 3 is a network representation of the distinctions captured in Dore's C-act scheme, and Table 2 is a list of Dore's C-act types with examples of conversational acts from the protocols of this study.
Figure 3. A network representation of the primary functions, general classes, and particular conversational acts in Dore's coding scheme.
Note. Reprinted from "Conversation and Preschool Language Development" by J. Dore, in Language Acquisition: Studies in first language development (p. 354), P. Fletcher and M. Garman (Eds.), 1979, Cambridge: Cambridge University Press.
Table 2

Dore's C-act Codes and Definitions with Examples of Conversational Acts From This Study

Code    Definition and Example

Requestives: solicit information or actions.

RQCH    CHOICE QUESTIONS seek either/or judgments relative to propositions: So which would you prefer, more information about the example, or get rid of it?

RQPR    PRODUCT QUESTIONS seek information relative to most wh-interrogative pronouns: Where did you get lost?; What is the problem in that section?

RQPC    PROCESS QUESTIONS seek extended descriptions or explanations: What could I do so that another student would not encounter that same difficulty?; How could I change this to make it applicable?

RQAC    ACTION REQUESTS seek the performance of an action by hearer: Give me your honest opinion.; Just view me as a person in a position to change anything about ...

RQPM    PERMISSION REQUESTS seek permission to perform an action: Can I go on?; Can I say something?

RQSU    SUGGESTIONS recommend the performance of an action by hearer or speaker or both: Let's just go around the table on this one and see what people's opinions are.

Assertives: report facts, state rules, convey attitudes, etc.

ASID    IDENTIFICATIONS label objects, events, people, etc.: Today's the nineteenth.

ASDC    DESCRIPTIONS predicate events, properties, locations, etc., of objects or events: That section would be the section on staining.; This is the first unit in the workbook.

ASIR    INTERNAL REPORTS express emotions, sensations, intents, and other mental events: Maybe it's ... they should just say what each one is.

ASEV    EVALUATIONS express personal judgments or attitudes: That's what I'm interested in.; And that's how you should study.

ASAT    ATTRIBUTIONS report beliefs about another's internal state: You may not have answered like the rest of the group.; It leaves you hanging.

ASRU    RULES state procedures, definitions, 'social rules', etc.: You don't have to invent things that don't exist.; This is all up for grabs.

ASEX    EXPLANATIONS state reasons, causes, justifications, and predictions: It's hard to remember names that are confusing.; I guess I was doing it too quickly.

Performatives: accomplish acts (and establish facts) by being said. No examples of this class occurred in the developmental testing protocols.

PFCL    CLAIMS establish rights for speaker.

PFJO    JOKES cause humorous effect by stating incongruous information, usually patently false.

PFTE    TEASES annoy, taunt or playfully provoke a hearer.

PFPR    PROTESTS express objections to hearer's behavior.

PFWA    WARNINGS alert hearer of impending harm.

Responsives: supply solicited information or acknowledge remarks.

RSCH    CHOICE ANSWERS provide solicited judgments of propositions: I know what you mean, yes.

RSPR    PRODUCT ANSWERS provide wh-information: (I got lost) on page seven.; The nineteenth (is today's ...).

RSPC    PROCESS ANSWERS provide solicited explanations, etc.: I would look at this overview as an introduction, and to me an introduction would have ...

RSCL    CLARIFICATION RESPONSES provide solicited confirmations: ... point where it is necessary to know that.

RSCO    COMPLIANCES express acceptance, denial, or acknowledgment of requests: I'll do it.

RSQL    QUALIFICATIONS provide unsolicited information: That's not what I mean.

RSAG    AGREEMENTS agree or disagree with a prior nonrequestive act: And I agree, that's how everybody studies.; No, you don't know.

RSAK    ACKNOWLEDGMENTS recognize prior nonrequestives: Fair enough.; That's fine.

Regulatives: control personal contact and conversational flow.

ODAG    ATTENTION-GETTERS solicit attention: Okay.

ODSS    SPEAKER SELECTIONS label speaker of next turn.

ODRQ    RHETORICAL QUESTIONS seek acknowledgment to continue: What have I got myself into here?

ODCQ    CLARIFICATION QUESTIONS seek clarification of prior remark: What?

ODBM    BOUNDARY MARKERS indicate openings, closings, and shifts in the conversation: Fine.; Okay.

ODPM    POLITENESS MARKERS indicate ostensible politeness.

Expressives: nonpropositionally convey attitudes or repeat others.

EXCL    EXCLAMATIONS express surprise, delight, or other attitudes.

EXAC    ACCOMPANIMENTS maintain contact by supplying information redundant with respect to some contextual feature: Here.

EXRP    REPETITIONS repeat prior utterances.

Miscellaneous codes.

UNTP    UNINTERPRETABLES for uncodable utterances.

NOAN    NO ANSWERS to questions, after 2 seconds of silence.

NVRS    NONVERBAL RESPONSES for silent compliances and other gestures.

Note. The codes and definitions are from "Conversation and Preschool Language Development" by J. Dore, in Language Acquisition: Studies in first language development (pp. 355-356), P. Fletcher and M. Garman (Eds.), 1979, Cambridge: Cambridge University Press.
A number of conventions were adopted for coding the C-acts from the developmental testing sessions. First, any questionnaire or test item read aloud by the evaluator was coded as an IDENTIFICATION (ASID). Second, subject responses read aloud were coded as DESCRIPTIONS (ASDC). Third, requests to "think" or "view" were coded as ACTION REQUESTS (RQAC). Fourth, opinions about changes in the instructional materials which are prefaced by "maybe" are coded as INTERNAL REPORTS (ASIR). Finally, any abrupt break in the speaker's speech pattern that would change the meaning of the C-act, or any incomplete statement or utterance, was labelled UNINTERPRETABLE (UNTP) and was not analyzed. A few examples of UNINTERPRETABLE data are:
It'll be more ...
I can read them at ...
Sometimes I'll--well, no.
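Once every C-act carries a code, a protocol supports simple quantitative summaries. The sketch below is a hypothetical illustration only (the original tallies were compiled by hand, and the `protocol` fragment is invented for the example), assuming each coded C-act is stored as a (speaker, code) pair:

```python
from collections import Counter

def code_frequencies(protocol):
    """Tally how often each Dore code occurs in a coded protocol."""
    return Counter(code for _speaker, code in protocol)

# Invented fragment of a coded protocol: (speaker, Dore code) pairs.
protocol = [
    ("Evaluator", "RQPC"),  # process question introduces an ASPECT
    ("Subject",   "RSPC"),  # solicited explanation
    ("Subject",   "ASEX"),  # supporting explanation
    ("Subject",   "RSQL"),  # unsolicited qualification
    ("Evaluator", "ODBM"),  # boundary marker closes the topic
]

print(code_frequencies(protocol)["RSPC"])  # 1
```

Frequencies computed in this fashion can then be compared across the four tryout conditions.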
Patterns of Discourse Units. The evaluator used a structured interview format to gather data during debriefing. He set an agenda for discussion beforehand, according to comments made on the materials, errors on the posttest, and negative responses on the attitudinal questionnaire. It was therefore possible to structure the interview protocols in terms of THEMES, ASPECTS, and STRINGS. Each of these elements was identified in the protocols.
The agenda items raised by the evaluator are labelled as THEMES, and are the general topics of the discourse. Occasionally a spontaneous comment raised during the interview would also become a THEME for discussion.
In the protocols, THEMES were recognized by both the introduction of a new issue and an overt or implied ACTION REQUEST (RQAC), followed by a locative C-act such as a DESCRIPTION (ASDC) or an IDENTIFICATION (ASID), which served to orient the subject(s) to a specific section in the instructional text. A THEME from the Small Group Active debriefing interview, divided into C-acts and coded, is shown below.
Code    C-act
RQAC    The first thing I want (you) to talk about is based on question 21 of the questionnaire.
ASDC    Some people were pointing to a particular section in the material as being problematic.
ASID    That section would be the section on staining.
RQACs were frequently implied because of the elliptical progression imposed by the structure of the interview. That is, since the evaluator explained at the outset how the session would be structured, a common understanding was established. The participants understood that when he directed their attention to a particular section they were expected to comment on whether that section was problematic. Participants demonstrated their understanding by complying with the expectation.
Implied RQACs are recognized by the introduction of a new issue through a locative C-act. Frequently, as in this example, the evaluator identified the item he wanted discussed by reading aloud the questionnaire or test item (ASID below) and the written student response (ASDC below):
Code    C-act
ASID    For you, what was the most difficult part of the lesson.
ASDC    The last part.
When the evaluator put a THEME on the floor for general discussion, one or several ASPECTS of the issue might be covered. An ASPECT is a feature of a THEME which the evaluator requests the subjects to consider. Thus, CHOICE QUESTIONS (RQCH), PRODUCT QUESTIONS (RQPR), and PROCESS QUESTIONS (RQPC) introduce the particular ASPECTS of a THEME on which the evaluator wants subject reactions and opinions. Over the course of the discussion of the above example, the evaluator introduced four ASPECTS:
Code    C-act
RQCH    ...
RQPR    What is the problem in that section?
RQPC    ...
RQCH    ...
Subject responses and reactions to a particular ASPECT under consideration are labelled STRINGS. STRINGS are grouped as units because they share content. For example, in response to the ASPECT/RQPR "What is the problem in that section?", this subject's remarks constituted a single STRING:
Code    C-act
RSPR    I don't think they really gave like enough information about staining,
ASEX    like I still don't quite know what it is.
STRINGS can be coextensive with turns, or one turn can contain several STRINGS. A subject might shift from responding to a requestive, to qualifying the response, or to asserting novel information. Sometimes the evaluator would acknowledge these shifts and pursue the content by raising a new ASPECT. An example of this is shown below:
An example of this is shown below:
SpeAker
Code
C-act
Evaluator
RQPC
Dos. that sect~on do d good enough job in
lnforming you as to whdt you are
supposed to be ledrning within the unit
th,à t you' ve jus t gone through?
Subj.c t
RSPe
l
that whdt l WdS redding l knew From
the introduct~on there
that followed suit witr,..what l WdS
learning.
but 1 don't see ho"" i t ..,a5 related ta
sterilization and disin-fectian as i t \"....
..,.5 statecl on p.age 5.
found
on pdge 6,
ASEX
RSOL.
Evalu.tor . ODBt1
ASOC
ASIR
«
OOSS
RQPC
ROCH
'v
Okay
rhdt '5 whdt Karen's stdting
l want ta go around the room and get other
people's op~nlons on tha t.
Didne?
Whèlt do you think about page 5?
1. th.t a l l t t l . misleading wnere i t s.a ys
sterilizéJtion and tJisinfection?
Within the subject's turn, the first two C-acts comprise one STRING because they deal with the ASPECT on the floor. The subject shifted when she qualified her response (RSQL); this last C-act is a one-C-act STRING. The evaluator acted on the QUALIFICATION (RSQL) by stating his intention to put the issue on the floor. The RQPC and RQCH each became new ASPECTS for the subjects to consider.
•
the discours.
dur~ng
-
't.sk' of ldentifyinQ
testing interview addressed the
~
problematic portions of
~he
the dev.lopmental
instructional materlAI_, or,
possible, generating sordtions te th.se problems.
if
In the
course of accomplishing the task some conversation dwelt
upon such things as: explaining procedures; organizlng
particlpation; orienting participants to a partlcular
section; and revlewing previously dlscussed tOP1CS.
1
~
C-act.
which dealt with the process of accompllshlng the task at
hand were categorized as TASK TALK.
which faii
Examples of C-act.
in the TASK TALK category are:
Th.t's what you told me.
Now there are two things on th. t.bJ ••
50 that's it for that
pârticuJ~r
••ction.
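The THEME-ASPECT-STRING hierarchy can be pictured as a simple nested data structure. The sketch below is a hypothetical illustration of that nesting (the class names are invented, and the analysis itself was carried out by hand), populated with the staining example from above:

```python
from dataclasses import dataclass, field

@dataclass
class String:
    """A subject's content-sharing run of coded C-acts."""
    c_acts: list

@dataclass
class Aspect:
    """A feature of a THEME introduced by a requestive C-act."""
    requestive: str
    strings: list = field(default_factory=list)

@dataclass
class Theme:
    """A general topic (agenda item) of the debriefing discourse."""
    topic: str
    aspects: list = field(default_factory=list)

theme = Theme(topic="section on staining")
aspect = Aspect(requestive="What is the problem in that section?")
aspect.strings.append(String(c_acts=[
    ("RSPR", "I don't think they really gave like enough "
             "information about staining,"),
    ("ASEX", "like I still don't quite know what it is."),
]))
theme.aspects.append(aspect)

print(len(theme.aspects), len(aspect.strings))
```

TASK TALK falls outside this structure, which is one reason it was set aside before comparison.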
Discourse Graphing
Discourse graphs provide a visual representation of the structure of each THEME. Glosses or summaries of the actual C-acts are included, as are the conversational codes. When viewed sequentially, the structure of the entire interview is apparent. Sample discourse graphs are presented and discussed in the following chapter.
The Categorization Scheme
Background. Since the purpose of developmental testing is both to determine student problems with the instruction, and to document their opinions and suggestions concerning the materials, the STRING was the unit of interest for comparing across conditions. STRINGS capture student opinions and perceptions of particular ASPECTS of a THEME under discussion.
Determining if one developmental testing condition yields quantitatively and qualitatively superior information for revision required a three step process. First, all positive or noncritical STRINGS, which would not contribute to the improvement of the instructional materials, were eliminated from the pool of STRINGS. For example, comments about enjoying a particular section were not included:
I had no problem with the material. It was straightforward.
Nor were idiosyncratic reports about personal learning habits:
I go back when I read.
Like sometimes I'll read like the first sentence and then I'll go back.
And sometimes I'll read the whole paragraph and read it again.
It always varies.
The remaining STRINGS were divided into problem representations and revision suggestions; these concepts were based on the work of Weston, Burt, and Geis (1984). Problem representations included STRINGS which: (a) flagged a problematic section without further comment; (b) flagged and described difficulties (e.g., "... in the module was confusing to me."); or (c) suggested a reason for a difficulty (e.g., "It says that it's referring to sterilization and disinfection, but it doesn't mention that whatsoever.").
Revision suggestions were of two types: overt (e.g., "What you could have done is made a list of the words and definitions right beside them.") or embedded ("I would've liked it if I'd known what each of these were.").
Once the STRINGS were divided into problem representations and revision suggestions, they were ready to be sorted according to the kinds of information they were providing to the revisor or developer.
In a comprehensive search of the literature for guidelines and heuristics for expert reviewers of instructional materials, Saroyan and Geis (in press) determined three categories which experts recommend reviewing when instructional materials are being evaluated: Content, Presentation, or Instructional Design. While these recommendations are not necessarily research-based or empirically validated (e.g., Hartley, 1981), they do represent the areas which subject matter, pedagogical, and instructional design experts consider vital in the systematic review of instructional materials. The compilation which emerged from Saroyan and Geis' search is a consolidation of guidelines which: (a) were recommended for the improvement of instructional materials; (b) were considered effective recommendations in the areas of presentation, content, and instructional design; and (c) did not require access to learners.
The general definitions of each of the categories are:
Presentation: ... the physical attributes of the instructional materials, the recommended techniques of presentation, the use of graphics, the medium of presentation and the professional packaging of the materials.
Content: ... focuses on either the semantic or syntactic structure of content, and the general attributes of content such as accuracy, clarity, affective quality.
Instructional Design: ... embraces all those items which discuss and relate to the components of a typical instructional design model. Items that refer to sequencing, objectives, evaluation and feedback, cost effectiveness, etc., are included in this category.
Saroyan and Geis's compilation of guidelines contains the precise definitions of each of these categories, and can be found in Appendix B.
The use of learner data in the development of instructional text was not the purpose of this compilation, and yet two important contributions can be made when learner comments are sorted into its three categories. The first contribution, and most pertinent to this study, is that using the categories allows clear comparisons within and across conditions. If, for example, a particular condition favours content issues over presentation problems, or if one condition yields more instructional design comments than any other, this can be clearly documented.
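Documenting such a pattern amounts to a two-way tally of STRINGS by condition and category. A minimal sketch follows (hypothetical; the labels and counts are invented for illustration and are not the study's results):

```python
from collections import Counter

# Invented (condition, category) labels for sorted STRINGS;
# P = presentation, C = content, ID = instructional design.
sorted_strings = [
    ("1-1 Active", "C"), ("1-1 Active", "P"),
    ("Group Active", "ID"), ("Group Active", "C"),
    ("Group Active", "C"), ("1-1 Passive", "P"),
]

def tally(labelled_strings):
    """Count STRINGS per (condition, category) cell."""
    return Counter(labelled_strings)

table = tally(sorted_strings)
print(table[("Group Active", "C")])  # 2
```

Each cell of the resulting table corresponds to one condition-by-category comparison of the kind described above.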
A second advantage of using a system culled from expert recommendations is discussed more thoroughly in a later chapter.
Procedure for Categorization. Problem representation and revision suggestion STRINGS from each of the protocols were coded as belonging to one of the three categories: presentation (P), content (C), or instructional design (ID). Some items received a double coding when it was difficult to establish their membership in one category. For reliability, two independent judges were given the category definitions and were asked to sort the pool of STRINGS into the above-mentioned categories. Any disagreements were subsequently brought to consensus by discussion. The number of questionable items did not exceed 5% of the total number of items.
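The interjudge check can be summarized as simple percent agreement before the consensus discussion. The sketch below is a hypothetical illustration (the codings shown are invented, not the judges' actual data):

```python
def percent_agreement(judge_a, judge_b):
    """Fraction of STRINGS both judges placed in the same category."""
    assert len(judge_a) == len(judge_b)
    matches = sum(1 for a, b in zip(judge_a, judge_b) if a == b)
    return matches / len(judge_a)

# Invented category codings for eight STRINGS by two judges.
judge_a = ["C", "P", "ID", "C", "C", "P", "ID", "C"]
judge_b = ["C", "P", "ID", "C", "P", "P", "ID", "C"]

print(percent_agreement(judge_a, judge_b))  # 0.875
```

Items on which the judges disagree (here, the fifth STRING) are the ones carried forward to the consensus discussion.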
Chapter Summary
The materials, subjects, procedure and conditions used in the developmental testing of a microbiology unit were presented. The exploratory methodology used to develop a system for comparing verbal data from different developmental testing conditions was described. The methodology involved two phases: (a) the conversational analysis and structural representation of the developmental testing sessions via discourse graphing, and (b) the categorization of problem representations and revision suggestions.
CHAPTER FOUR

Results

Introduction
Two questions guided this investigation: (a) How can the verbal data from developmental testing interviews be systematically compared? and (b) What are the implications of the differences in the verbal data that emerge from different conditions for the developmental testing of instructional materials?

The first question was addressed descriptively in the previous chapter. The system that was developed consisted of seven discrete steps:
1. Verbal data from the developmental testing sessions were first transcribed verbatim.
2. The transcriptions or protocols were segmented into conversational acts (C-acts).
3. Each C-act was coded according to its function in conversation. The degree of interrater reliability for coding conversational function was 87%.
4. Three patterns of C-acts, THEMES, ASPECTS, and STRINGS, were identified and the discourse was divided according to these patterns. Some discourse concerned accomplishing the developmental testing task or was off-task. This discourse was considered TASK TALK and was not analyzed.
5. Each THEME and its corresponding ASPECTS and STRINGS were represented visually on a discourse graph.
6. STRINGS were divided into two groups: (a) problem representations or revision suggestions which would be useful for revision, and (b) praise or noncritical comments not helpful in revision.
7. The final step required the sorting of problem representation and revision suggestion STRINGS into the categories of Presentation, Content, and Instructional Design.
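The hierarchy produced by steps 4 through 7 can be pictured as a small data model: C-acts grouped into STRINGS, STRINGS attached to ASPECTS, and ASPECTS attached to THEMES. The sketch below is a hypothetical rendering of that hierarchy, not the coding scheme itself; the class names and the example discourse are invented for illustration.

```python
# Hypothetical data model for the THEME / ASPECT / STRING hierarchy
# described in steps 4-7. All example content is invented.
from dataclasses import dataclass, field

@dataclass
class String:                  # a learner opinion or reaction
    c_acts: list               # the conversational acts it comprises
    kind: str                  # "problem", "suggestion", or "noncritical"

@dataclass
class Aspect:                  # an evaluator-initiated feature of a THEME
    request_type: str          # "RQCH", "RQPR", or "RQPC"
    strings: list = field(default_factory=list)

@dataclass
class Theme:                   # a general topic guiding the discussion
    source: str                # e.g. "questionnaire", "posttest error"
    aspects: list = field(default_factory=list)

def useful_strings(themes):
    """Step 6: keep problem and suggestion STRINGS; drop praise."""
    return [s for t in themes for a in t.aspects for s in a.strings
            if s.kind in ("problem", "suggestion")]

theme = Theme("questionnaire",
              [Aspect("RQCH", [String(["No."], "noncritical"),
                               String(["It was too long."], "problem")])])
print(len(useful_strings([theme])))  # -> 1
```

Only the STRINGS surviving this filter would go forward to the qualitative sorting of step 7.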
This chapter addresses the second question by presenting the results of an application of the above methodology to the verbal data from the developmental testing of a microbiology unit. The database used consisted of a randomly selected protocol from each of four conditions used in the tryout.
The chapter has been divided into two sections. The first section presents a global quantitative examination of the protocols. Data are presented concerning general differences between conditions as well as specific concerns such as the frequency of THEMES, ASPECTS, and STRINGS across and between the conditions. Four discourse graphs are presented and are used as a framework to discuss differences in interview interaction. The second section reports the results of the qualitative categorization of the STRINGS. Again, the results are presented across and between conditions.
Quantitative Differences Among Conditions
General differences. The protocols differed in elapsed time and in text length (Table 3). Since learners in the passive conditions were silent on-line, only the debriefing portion of these conditions was recorded. Both on-line and debriefing feedback was collected from the active conditions.
The protocol from the 1-1 Active condition had the longest elapsed time (about one hour) and ranked second in number of C-acts. The Group Passive condition had the second longest protocol (about 45 minutes) and the most C-acts. The Group Active condition's protocol was about a half hour in length and ranked third in number of C-acts. The shortest protocol in time and text length was the product of the 1-1 Passive condition and was about 7 minutes in length.
The differences in protocol length and text length appear to be due not only to the differences in experimental condition but also to learners' posttest error patterns, the lack of pretesting for prior knowledge or learning, and the questioning style of the evaluator. These issues are considered further in the Discussion section.

Themes.
THEMES are general topics which guide the discussion during developmental testing. On-line THEMES were derived from learner comments, behavior-cued probes, and page-by-page review of the materials.
Debriefing THEMES came from four sources: (a) comments written on the materials, (b) errors on the posttest, (c) negative responses to the attitudinal questionnaire, and (d) spontaneous verbal comments from learners during debriefing. A total of 39 THEMES were discussed during the developmental testing of the microbiology unit (Table 4).
Table 3

Protocol Length by Condition

Condition         Minutes    C-acts
1-1 Active           56         442
Group Active         38         359
1-1 Passive           7         200
Group Passive        45       1,316
Table 4

Sources for THEMES from Each Condition

                           1-1 Active   Group Active   1-1 Passive   Group Passive
Source                      n     %       n     %        n     %        n     %
On-line
  learner comment           3    15       0              0              0
  behavioral cue            1     5       1   33.3       0              0
  page-by-page review       4    20       0              0              0
Debriefing
  comment on materials      2    10       0              3   37.5       0
  error on posttest         1     5       0              1   12.5       3   37.5
  questionnaire response    9    45       2   66.7       3   37.5       5   62.5
  spontaneous comments      0             0              1   12.5       0
Total number of THEMES     20             3              8              8
The degree of interrater agreement in identifying THEMES was 100%. The 1-1 Active condition covered the highest number (n=20) of THEMES during developmental testing. Each passive condition discussed 8 THEMES. The Active Group covered the fewest THEMES (n=3) during the tryout of the microbiology unit.
The primary source for THEMES in all conditions was the questionnaire. The posttest was the secondary source for THEMES. Spontaneous comments, whether made on-line or during debriefing, were only discussed in the 1-1 conditions. With the exception of one behavior-cued probe in the Group Active condition, on-line THEMES occurred strictly in the 1-1 Active condition (Table 4). Each of these sources for THEMES is discussed below.
Negative responses to the attitudinal questionnaire comprised about two-thirds of the THEMES in both group conditions. In fact, for the Active Group, questionnaire items were the only THEMES presented during debriefing. In the 1-1 Active condition and the 1-1 Passive condition, questionnaire-related THEMES represented about 40% of all THEMES discussed (45% and 37.5% respectively). While these proportions are lower than those in the group conditions, they represent the major source for THEMES in both 1-1 conditions.
Posttest error THEMES were raised five times across the four conditions. The 1-1 conditions each discussed one test item during debriefing. For the 1-1 Active condition the one posttest error discussed represented 5% of the THEMES in that condition. The same number in the 1-1 Passive condition represented 12.5% of its total number of THEMES discussed.
In the group conditions, the evaluator decided which posttest items would be discussed during debriefing by only including items on which two people made the same error. Because of this criterion, no test item errors were discussed in the Group Active condition. The Group Passive condition devoted three THEMES (37.5% of its total) to the posttest.
In the 1-1 conditions all written comments were discussed during debriefing. In the groups, a criterion of at least two learners agreeing on a problematic section in the materials was used to determine which learner comments would be discussed during debriefing. Since no two learners in the group conditions made the same comment, written comments on the materials did not become THEMES. The 1-1 Active and 1-1 Passive conditions discussed a comparable number of THEMES related to written comments (2 and 3 respectively).
However, when these frequency counts are converted into percentages of the total number of THEMES discussed within each condition, the 1-1 Passive condition contributed proportionally more than the 1-1 Active condition (37.5% and 10% respectively).
Only one THEME was the product of a spontaneous debriefing comment made by a learner. This occurred in the 1-1 Passive condition and represented 12.5% of that condition's THEMES.
On-line THEMES comprised 40% of the 1-1 Active condition's total, and 33.3% of the Group Active condition's total number of THEMES. In the 1-1 condition this proportion was comprised of eight THEMES: four the product of page-by-page probing, three concerning learner comments, and one from a behavioral observation made by the evaluator.
Differences in the number of THEMES discussed in each condition appear to be a function of the learners' error patterns and prior knowledge, and the number of subjects in the tryout sessions. These influences on THEMES will be addressed in the following chapter.
Aspects. ASPECTS are features of a THEME which the evaluator requests the learners to consider. Interrater reliability in identifying ASPECTS was 94.7%. Figure 4 shows the frequency of ASPECTS for each THEME for all four conditions.
The number of ASPECTS presented during the discussion of a THEME ranged from zero to six. Since ASPECTS are defined as evaluator-initiated features of a THEME, it is possible to have no ASPECTS presented when features of a THEME are implied or learner-initiated. All of the THEMES without ASPECTS (10% of the total) occurred in the 1-1 conditions. THEMES with only one ASPECT comprised about one fourth of the total number of THEMES and also occurred primarily in the 1-1 conditions.
About 9% of the THEMES had four or more ASPECTS, and the majority of these (80%) occurred in the Group Passive condition (n=4). The 1-1 Active condition and the Group Passive condition discussed approximately the same number of ASPECTS over the course of the tryout (n=26 and n=28 respectively). The 1-1 Passive and Group Active conditions dealt with like numbers of ASPECTS (6 and 7 respectively).

Figure 4. Frequency of ASPECTS per THEME in each condition
ASPECTS are of three types: choice requests (RQCH), product requests (RQPR), and process requests (RQPC). Table 5 outlines the total number of ASPECTS across conditions and within conditions according to requestive type. Across the four conditions, the largest proportion of ASPECTS were choice requests (62.7%). About half as many process requests were presented (32.8%). Only three product requests were presented across all four protocols. This number represented only 4.5% of the total number of ASPECTS presented.
In the Group Passive, 1-1 Active, and Group Active conditions, choice requests represented the largest proportion of requests made (71.4%, 65.4%, and 57.1% respectively). In the 1-1 Passive condition about 16% of the total requests were choice requests. The reverse was noted with the process requests. While two-thirds of the requests in the 1-1 Passive condition were process requests, in the 1-1 Active condition about one third of the requests were process requests, and both the group conditions received about one fourth of their total number of requests in a process format. Product requests were seldom posed in any condition.
In these protocols, the evaluator favoured choice requests. This preference, and the general effect of interviewing style on learner feedback, will be explored in the following chapter.
Table 5

Frequency of ASPECT Types by Condition

                      RQCH         RQPR         RQPC
                    (choice)    (product)    (process)
Condition           requests     requests     requests     Total

1-1 Active
  Frequency            17            0            9          26
  %                  65.4            0         34.6

Group Active
  Frequency             4            1            2           7
  %                  57.1         14.3         28.6

1-1 Passive
  Frequency             1            1            4           6
  %                  16.7         16.7         66.7

Group Passive
  Frequency            20            1            7          28
  %                  71.4          3.6         25.0

Combined
  Frequency            42            3           22          67
  %                  62.7          4.5         32.8
Strings. STRINGS capture learner opinions and reactions to ASPECTS of THEMES under discussion. There were 231 STRINGS documented in total for the four conditions. The degree of interrater agreement in identifying STRINGS was 89%.
It is important to note that the focus of this study was to document the differences in data across conditions, not to attribute particular responses to individual subjects. Thus, learner responses in the group conditions were put into the pool of STRINGS for that condition. This means that each opinion or reaction was tallied as a separate STRING, but who said what is unknown. On the other hand, agreement was tallied if more than one person gave the same opinion within a group.
Almost two-thirds of the total number of STRINGS (63.2%) were judged to be positive or noncritical towards the learning materials. Interrater agreement concerning this judgment was 96.1%. Disagreements were discussed and brought to consensus subsequent to the initial sorting.
Table 6 depicts the means and the standard deviations of STRINGS responding to each of the three types of ASPECTS. The mean number of STRINGS responding to a request was largest for product requests at 5.0, next largest for process requests at 2.5, and smallest for choice requests at 1.8. One case of 10 STRINGS responding to a choice request was documented. It was judged extreme and nonrepresentative by two independent judges. Since its inclusion would have inflated the mean and standard
Table 6

Means and Standard Deviations of STRINGS Responding to ASPECT Types

ASPECT      M       SD      n
RQCH       1.8     1.21     40
RQPR       5.0     3.46      3
RQPC       2.5     1.79     16
deviation, it was not included in their calculation. There was little variability in the number of STRINGS responding to choice requests (sd=1.21). Slightly more variability was noted in the number of STRINGS relating to process requests (sd=1.79). The greatest variability was documented for STRINGS which responded to product requests (sd=3.46), but it should be recalled that there were only three of these requests made across all four conditions.
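The outlier handling described above can be reproduced with ordinary descriptive statistics. In the sketch below the response counts are invented for illustration; only the exclusion logic mirrors the text, in that a count judged extreme is dropped before the mean and standard deviation are computed.

```python
# Illustrative sketch: mean and standard deviation of STRINGS per request,
# with an extreme case excluded before calculation. The counts are
# invented example data, not the study's actual tallies.
from statistics import mean, stdev

def describe(counts, exclude=()):
    """Mean and sample standard deviation, omitting excluded values."""
    kept = [c for c in counts if c not in exclude]
    return round(mean(kept), 2), round(stdev(kept), 2)

# Hypothetical STRINGS responding to each choice request; 10 is extreme.
choice_counts = [1, 2, 1, 3, 2, 1, 4, 10]

print(describe(choice_counts))                 # inflated by the outlier
print(describe(choice_counts, exclude=(10,)))  # outlier removed
```

Dropping the single extreme case roughly halves both the mean and the standard deviation in this invented example, which is the kind of inflation the two judges guarded against.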
Discourse Graphs. A discourse graph is a visual representation of each THEME and its ASPECTS and STRINGS. Patterns of interaction can be recognized and tallied when the discourse graphs are examined. A sample from each of the four conditions has been selected and is presented here as an example of the interaction patterns during the developmental testing sessions.
Figure 5 is taken from the on-line portion of the 1-1 Active condition. In this case, the evaluator noticed a behavior which he interpreted as flagging a problem within the materials. He raised an ASPECT by presenting a choice request (RQCH) to the learner. The learner negated the observation in a response consisting of a one C-act STRING. The only other time the evaluator probed because of a behavioral cue occurred in the Group Active condition, and the result was the same.
A discourse graph illustrative of a debriefing discussion of a questionnaire item in the 1-1 Passive condition appears in Figure 6. This THEME is based on the learner's "uncertain" response to a questionnaire item
THEME: Evaluator probe during on-line: "I noticed you flipped back."
ASPECT: "Is there a problem somewhere in there?" (ASPECT TYPE: RQCH)
STRING: 1.1 "No."

Figure 5. Sample discourse graph from the 1-1 Active condition
THEME: "After completing the lesson, I was more interested in and favourably impressed with the general subject matter - microbiology" (learner unsure about this item)
ASPECT: (implied RQPC); "How is it possible for somebody designing materials to design them to stimulate you?" (RQPC)
STRING: 1. "Yeah, cause I always liked microbiology; it didn't change after I did this." 2.1 "I'm sure if I hadn't known anything about it I would have enjoyed it because you learn from it, it's interesting."

Figure 6. Sample discourse graph from the 1-1 Passive condition
asking if the microbiology lesson caused learners to be more impressed with the general subject than they were prior to the lesson. The evaluator presented an implied process request (RQPC) by rephrasing the learner's response and waiting for an explanation. The learner was familiar with the structure of the interview and demonstrated this understanding by giving an explanation. The evaluator followed the learner's STRING with a second RQPC inquiring if the learner had a suggestion on how to improve the materials. The learner stated that the material would have been interesting for someone not familiar with microbiology, and did not give a revision suggestion.

While this particular learner did not have a suggestion, the evaluator did provide an opportunity for the learner to try to devise a resolution to her criticism.
Problem identifications were followed by ASPECT requests for revision ideas 34.2% of the time. When asked, learners came up with revision suggestions 71.4% of the time. Without being asked, learners volunteered revision suggestions three times (14.3% of the total number of revisions suggested). Problem representations were not followed by a request for a suggestion or a volunteered suggestion 57.1% of the time.
In the group conditions the conversation--and thus the discourse graphs--were more complex because of the number of participants. An Active Group THEME concerning a questionnaire item is depicted in Figure 7. In one of the open-ended questionnaire items, learners were asked about the worst parts of the lesson. An area within the text
THEME: Question #21 from the questionnaire - "one of the worst things about this lesson was the section on staining"
ASPECT: 1. "Is there a problem with that section?" (RQCH); 2. "What is the problem in that section?" (RQPR); 3. "How could the section be changed?" (RQPC); 4. "I can only do one of those suggestions. Which do you prefer?" (RQCH)
STRING: 2.1 to 2.7 (responses to the product request); 3.1 and 3.2 (revision suggestions, with AGREEMENT); 4.1 and 4.2 (choices between the suggestions)

Figure 7. Sample discourse graph from the Group Active condition
dealing with staining techniques was identified by some of the learners as problematic. After presenting this THEME, the evaluator presented three ASPECTS simultaneously (RQCH, RQPR, RQPC). The learners responded to the product request and offered three explanations (STRINGS 2.1, 2.2, and 2.3). In addition, four reasons that the staining section was not a problem were presented (STRINGS 2.4 to 2.7). The evaluator then repeated the RQPC by inquiring how the section could be changed.
One learner suggested that a definition of the staining technique be provided (STRING 3.1), and two other subjects agreed. Another learner suggested that the example be deleted from the text (STRING 3.2). The evaluator then asked the learners to choose between the two suggestions by tabling the fourth ASPECT. The learners all concurred with the suggestion (STRING 3.2) to omit the example (STRING 4.1). One learner added that the example be introduced when it was necessary to know it (STRING 4.2).
During this THEME, the evaluator turned a revision suggestion STRING (3.2) into an ASPECT (4) and sought other learners' opinions on that suggestion. This occurred 12 times across the two group conditions. Two-thirds of learner revision suggestions during group discussions were turned over to the other subjects to get their opinions on the suggestion.
An example of a group discussion about a test item error is depicted in Figure 8. Test items were included in group debriefing discussions when two or more learners made
THEME: Question #2 on the post-test - spherical shaped bacteria are called cocci.
ASPECT: 1. "Is there a problem in the way that I have asked the question?" (RQCH); 2. "Is there a problem with the instruction?" (RQCH); 3. "Is there anything I could have done to improve it or change it?" (RQCH); 4. "Is it my fault?" (RQCH)
STRING: 2.1 "No. You have to look at your notes more than once, but you only had to look at this once." 2.2 "I got it wrong because it was too easy." 2.3 (AGREEMENT); 3.1 "No, I like it." 4.1 "No. It's good."

Figure 8. Sample discourse graph from the Group Passive condition
the same error. This criterion was met for the second question in the Group Passive condition. The evaluator presented this THEME and then tabled two choice request ASPECTS simultaneously. Over the course of the discussion, all six subjects agreed that the instructional sequence was not a problem (STRING 2.1), but none of them addressed the issue of whether the question itself was a problem (ASPECT 1). Only one learner was asked if anything could have been done to improve the instruction, and only one learner gave an opinion about the fourth ASPECT concerning whether errors on the posttest were the fault of the developer.
Qualitative Categorization

Problem representation and revision suggestion STRINGS were sorted into the categories of Presentation, Content, and Instructional Design. A 90% interrater agreement rate was obtained between the two independent judges.

Problem Representations. A total of 34 problem representations occurred in the four conditions. Table 7 presents the number of problem representations across conditions and within each condition according to category.
The 1-1 Active ldentlfied the hlghest number of problems
(Q=13),
the Group Passive the next
hi~hest
(Q=11),
followed
by the 1-1 Passlve conditlon wlth about half as many as the
hlqhe~t
condition
(Q=6) ,
and flnally,
the Group Active
identifled the fewest number of problems (Q=4).
For three of the four conditions, the general pattern was that more than half of the problems identified pertained to Content issues, followed by Instructional Design issues
Table 7

Problem Representations from Each Condition by Category

                 Presentation     Content     Instructional
Condition                                        Design        Total

1-1 Active
  Frequency            1             8              4            13
  %                  7.7          61.5           30.8

Group Active
  Frequency            0             3              1             4
  %                    0          75.0           25.0

1-1 Passive
  Frequency            0             4              2             6
  %                    0          66.7           33.3

Group Passive
  Frequency            7             3              1            11
  %                 63.6          27.2            9.1

Combined
  Frequency            8            18              8            34
  %                 23.5          52.9           23.5
(about one fourth for each condition), and finally, Presentation issues, for which only the 1-1 Active condition identified a problem. The Group Passive condition was the exception to this pattern. More than half of the problems identified in this condition were in the Presentation category, about one fourth were Content problems, and only one was an Instructional Design issue. In fact, this condition was responsible for almost all (7 out of 8, 87.5%) of the Presentation problems raised across conditions.
The exceptional pattern for the Group Passive condition appears to be related to the lack of pretesting for prior knowledge and the posttest error rate in that condition. The fact that overall the most problems identified concerned Content issues was influenced by both the construction of the attitudinal questionnaire and the task definition of the learners. All of these influences on learner feedback will be considered further in the next chapter.
Revision Suggestions. Revision suggestion STRINGS were also sorted into the categories of Presentation, Content, and Instructional Design. Table 8 outlines the total number of revisions suggested across conditions and within each condition by category.
A total of 24 revision suggestions were culled from the four conditions, more than half of which came from the Group Passive condition (62.5%). This condition was also the only condition that contributed suggestions to more than one category, and in fact contributed to every category. In terms of dispersion of the revision suggestions into
Table 8

Revision Suggestions from Each Condition by Category

                 Presentation     Content     Instructional
Condition                                        Design        Total

1-1 Active
  Frequency            1             0              0             1
  %                  100             0              0

Group Active
  Frequency            0             3              0             3
  %                    0           100              0

1-1 Passive
  Frequency            0             5              0             5
  %                    0           100              0

Group Passive
  Frequency           11             3              1            15
  %                 73.3          20.0            6.7

Combined
  Frequency           12            11              1            24
  %                 50.0          45.8            4.2
different categories, a similar number of suggestions concerning Presentation and Content were obtained (n=12 and n=11 respectively). However, all but one of the Presentation suggestions came from the Group Passive condition (91.6%), while the source of Content suggestions was divided almost equally among the Group Active (n=3), 1-1 Passive (n=5), and the Group Passive (n=3) conditions. The only suggestion construed as belonging in the Instructional Design category came from the Group Passive condition and represented 6.7% of its total number of suggestions.
The evaluator's interviewing style, the dynamics of group interviewing, and the task definitions of the learners appear to have an impact on the dispersion of revision suggestions across categories. These issues will be examined in the Discussion section.
CHAPTER FIVE

Discussion and Conclusions

Overview
The results of this investigation present two main issues: (a) the value of the methodology that was developed in facilitating the comparison of verbal data from developmental testing sessions, and (b) the findings from an application of the methodology to four protocols from different conditions for materials tryout. This chapter is organized around these two issues and concludes with some recommendations for further investigation.
Value of the Methodology

To determine if a particular condition for learner tryout yields quantitatively and qualitatively superior data for revision, differences in learner feedback have to be documented prior to their conversion into revisions. The first research question addressed the challenge of developing a methodology that would facilitate the comparison of verbal information obtained from the developmental testing of instructional materials. The approach to verbal data analysis that was developed in this investigation accomplished that goal in the context of a conversational analysis which reliably identified patterns of discourse.
The system that was devised followed the structure that the evaluator imposed on the developmental testing sessions. The evaluator's probes and agenda items focused the discussion and were labelled THEMES. Particular features of a THEME on which the evaluator requested feedback were termed ASPECTS. Learner responses to ASPECTS were labelled STRINGS. The visual representation of the application of this methodology was a series of discourse graphs in which STRINGS are depicted in the context of the ASPECTS presented. Since STRINGS capture learner opinion, they were the unit of interest in examining qualitative differences, and in making comparisons across conditions.
STRINGS that were not critical of the materials, and therefore could not be used to improve the instruction, were eliminated from the pool of STRINGS. The remaining STRINGS were divided into problem representations and revision suggestions and then qualitatively sorted into three categories: Presentation, Content, and Instructional Design.
While this methodology was time consuming and intricate, its application ensured that each unit of conversation was appraised according to its function and contribution to the evaluation of the learning materials. Other approaches to analyzing verbal data have tended to rely on field notes and the coding of learner feedback by listening to audio tapes for global topics, beliefs, and nonlinguistic data such as mood, emphasis, and intonation. Such approaches are admirable in scope but subjective in method. Different coders, with different perspectives, will inevitably find different categories with which to structure and make sense of the data. Furthermore, the same coder may be unable to reproduce equivalent coding over time. The process used in this investigation, while labour-intensive, introduced rigor and was shown to be reliable.
In addition, the conversational approach to verbal data analysis documented the context within which learner opinions occurred. An advantage of a contextual examination of feedback is the opportunity to trace both what was said and what was not said. That is, since the developmental testing sessions were structured by an evaluator and the format involved a series of questions and answers, the degree of consistency in question types and styles, and the corresponding effect on learner feedback, could be shown across conditions. Finally, the categorization scheme provides a basis for comparison of the results obtained in the present study and those of future studies of verbal data from the tryout of instructional materials.
The categories are not specific to these materials but rather pertain to the general areas which researchers and authors in instructional design identify as essential to effective materials development (Saroyan & Geis, in press). In addition, the use of a categorization scheme garnered from guidelines for expert reviewers adds another dimension to the value of this methodology. The groundwork for exploring the notion that learners contribute unique information to the formative development of materials is laid.
This categorization scheme facilitates two lines of investigation: (a) comparative studies which examine the different kinds of data which learners and experts yield, and (b) instructional development efforts wherein both learner and expert data are collected in order to improve the same set of instructional materials.
Application of the Methodology

The second research question sought to identify the differences in verbal data from varied conditions for trying out instructional materials. Because of the small sample size, the results of this pilot application cannot be considered representative of what generally occurs in developmental testing. However, the analysis system facilitated an in-depth exploration of the nature of verbal data from four developmental testing protocols. Discussion of the findings allows us to consider some of the dimensions of verbal data and their implication for instructional materials tryout.
Discourse graphs depict discussion topics (THEMES), features of topics covered (ASPECTS), and student opinions (STRINGS). The sorting of STRINGS into the categories of Presentation, Content, and Instructional Design reflects the kinds of comments and revision suggestions which emerge from the different developmental testing conditions. When used with this sample, it appeared that the 1-1 Active condition identified the most problems, while the Group Passive contributed the highest number of revision suggestions. All of the findings of this pilot application of the methodology must be viewed in conjunction with a number of other considerations before one data collecting condition is deemed superior to another.
The levels ln the methadology
\.
~
"-
•
95
w~ll
"'-
be used as a framework to d1SCUSS these cons1derations.
Themes. One interesting finding was that the number of THEMES varied markedly across conditions. Even when 'empty' on-line THEMES consisting of a single probe without student reaction to the materials were eliminated, the comparison yielded differential results. One explanation for the differences concerns the sources for the THEMES. A large proportion of debriefing THEMES were drawn from negative responses to questionnaire items. On the surface we could assume that these responses reflect negative reactions to the instructional materials. However, there is no reason to expect that learners in one condition would react more or less favorably to the materials, considering that the sample was drawn from the same population and randomly assigned to the conditions. A closer examination of the questionnaire itself reveals some inherent weaknesses (see Appendix A).
Several questionnaire items included two distinct statements. For example: "The vocabulary used contained many unfamiliar words. I often did not understand what was going on." Comparison across learners is confounded by the difficulty in determining which statement they are reacting to. Furthermore, items may have been placed on the agenda because they were ambiguous, not because they represented problems within the instructional materials.
A second issue related to the questionnaire concerns the effect of the composition of the instrument on the kinds of issues discussed. When the questionnaire items were submitted to the same qualitative categorization scheme used to sort student opinion, it was found that one half of the 26 items were in the Content category, about one third in the Instructional Design category, and less than one tenth in the Presentation domain. The remaining two items were difficult to place. Since the questionnaire is heavily weighted in favour of Content issues, the probability that a Content concern will be raised during debriefing is increased by the proportion of questionnaire items relating to this category. In fact, when the problem representations from the four conditions are combined, more than twice as many Content issues were raised as either Presentation or Instructional Design issues. The evaluation of these learning materials appeared to be strongly influenced by the composition of the questionnaire, specifically by the ambiguous items and the disproportionate distribution of items per category.
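The imbalance described above can be audited mechanically. The following sketch is illustrative only: the category labels come from the text, but the item-to-category assignments are hypothetical stand-ins chosen merely to mirror the reported split (about one half Content, one third Instructional Design, under one tenth Presentation, and two unplaceable items).

```python
from collections import Counter

def category_proportions(item_categories):
    """Given one category label per questionnaire item, return
    {category: proportion of items}, rounded to two decimals."""
    counts = Counter(item_categories)
    total = len(item_categories)
    return {cat: round(n / total, 2) for cat, n in counts.items()}

# Hypothetical assignment mirroring the distribution reported in the text:
# 13 Content, 9 Instructional Design, 2 Presentation, 2 unplaceable.
labels = (["Content"] * 13 + ["Instructional Design"] * 9
          + ["Presentation"] * 2 + ["Unclassified"] * 2)
print(category_proportions(labels))
```

A tally like this makes explicit how heavily a debriefing agenda built from the questionnaire would favour Content issues.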
Posttest errors were another source for debriefing THEMES. Although the learners were randomly assigned to the experimental conditions, an examination of their test results indicates a significantly lower error rate in the Group Active condition than in the Group Passive condition (M = .17 and M = 1.33 respectively; t(10) = -3.672, p < .005). Since learners in the Group Active condition made fewer errors, posttest items did not become THEMES in that condition. The fewer the THEMES discussed in a given condition, the fewer the potential comments, criticisms, and suggestions that will emerge from that condition.
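The reported statistic is consistent with a pooled-variance independent-samples t test (df = n1 + n2 - 2 = 10 implies six learners per condition). The sketch below computes such a statistic from first principles; the per-learner error counts are hypothetical stand-ins, not the actual values from Table 1.

```python
import math

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance.
    Returns (t, degrees_of_freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance, weighting each group by its degrees of freedom
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical posttest error counts, six learners per group
group_active = [0, 0, 0, 0, 0, 1]    # mean ~ .17
group_passive = [1, 1, 1, 1, 2, 2]   # mean ~ 1.33
print(pooled_t(group_active, group_passive))
```

A negative t indicates the Group Active learners made fewer errors, which is why posttest items rarely became debriefing THEMES in that condition.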
Another issue concerning posttest results is that scores were generally high across conditions (see Table 1). In fact, several subjects stated that they did not find the materials difficult. While this could speak to the effectiveness of the instruction, it might also reflect prior knowledge. A variation in prior knowledge could be inferred from the differences in each learner's previous coursework (see Table 1), and since no pretest was administered it cannot be assumed that high scores reflect learning from these materials. The use of posttest scores as an index of instructional effectiveness is only meaningful when the influence of prior knowledge is controlled through pretesting.

The final point regarding THEMES concerns the fact that the evaluator set the debriefing agenda according to learner errors and comments while on-line. This meant there was little opportunity for learners to voice spontaneous retrospective opinions about topics not on the evaluator's agenda (see Table 2). Reactions to the materials which were not covered by the test or questionnaire could thus be missed. Only once did the evaluator table a THEME which gave the learner the opportunity to change, add, or delete any comments made while on-line.
Aspects. Examination of the discourse graphs showed a variation in requesting techniques used by the evaluator (see Table 4). The objective of developmental testing is to gather learner reactions and opinions about the instructional materials. It could therefore be expected that the questioning techniques used support this objective by allowing maximum latitude for learner response. In Dore's schema the open-ended or process requests (RQPC) afford the greatest opportunity for reflection and independent thought, followed by product requests (RQPR) which lead to specific answers, and finally by choice requests (RQCH) which force selection from a limited number of options. Yet in these protocols, significantly more choice requests were presented than either product or process requests. On the other hand, while there was a tendency for the number of STRINGS responding to process requests to be slightly larger than those responding to choice requests, product requests yielded the largest number of opinions per request. While it must be remembered that only three product requests were raised across all four conditions, these findings raise an interesting supposition. Intuition suggested that open-ended questions would yield the richest data, but perhaps such a requestive style lacks sufficient focus for the learners. The issue of the learners' understanding of their task may have an impact here. Because the critic's role is not relevant to their interests or experience, learners may respond best when a balance is struck between the restrictions imposed by choice requests and the ambiguous nondirective framework of process requests. The number of opinions (STRINGS) corresponding to a requestive does not equate with the value of the response, but the probability of more problems being detected increases with increasing the latitude for learner opinion.
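The opinions-per-request comparison above can be made concrete by tallying coded discourse units. The sketch below assumes a flat coded transcript and credits each STRING to the most recent request; the codes RQPC, RQPR, and RQCH follow Dore's labels as used in the text, while the sample protocol itself (and the crediting convention) is a hypothetical illustration.

```python
from collections import Counter

REQUEST_TYPES = {"RQPC", "RQPR", "RQCH"}

def opinions_per_request(coded_units):
    """coded_units: sequence of codes such as 'RQCH' (a request) or
    'STRING' (a student opinion). Each STRING is credited to the most
    recent request. Returns {request_type: strings per request}."""
    requests, strings = Counter(), Counter()
    current = None
    for code in coded_units:
        if code in REQUEST_TYPES:
            current = code
            requests[code] += 1
        elif code == "STRING" and current is not None:
            strings[current] += 1
    return {rt: strings[rt] / requests[rt] for rt in requests}

# Hypothetical protocol fragment
protocol = ["RQCH", "STRING", "RQCH", "RQPR", "STRING", "STRING",
            "RQPC", "STRING", "RQCH", "STRING"]
print(opinions_per_request(protocol))
```

Normalizing STRING counts by request counts is what allows a rarely used requestive type (such as the three product requests here) to stand out despite its low frequency.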
Another issue related to the variation of ASPECT types concerns the evaluator's questioning style in the group conditions. The intention of the verbal portion of tryout sessions was to gather opinions and suggestions from the learners. However, the high frequency of choice requests presented during the tryout sessions suggests that the evaluator imposed a direction of inquiry which constrained time and space for the learners to elaborate their ideas or reactions. The evaluator essentially carried out a series of individual quantitative interviews in parallel. That is, the evaluator attempted to poll each group member about a series of choice request ASPECTS, and in fact instructed the learners not to allow other participants' comments to influence their decisions. One of the advantages of group interviewing is the possible generation of new ideas or novel solutions to problems from the creative brainstorming stimulated during group interaction (Hedges, 1985). This potential is effectively eliminated when learners are forced to adhere to a preset agenda and answer limited choice requests.
In the group conditions the evaluator frequently placed learner suggestions on the floor for discussion (see Figure 8). When there were two different suggestions tabled, learners were asked to choose between them. This tendency to seek agreement or consensus has several possible effects. First, there is a potential loss of alternate suggestions when one idea becomes the focus of the group. Amid the discussion of one learner's suggestion, other group members may forget or be unduly influenced by the discussion and not be able to generate their own unique solution to the problem. Second, as Hedges (1985) points out, the social dimension and group dynamic in a group interview have an impact on the responses of group members. Participants in group interviews take into account other people's views when framing their own responses. Dominant or articulate participants can influence what is said, and individuals may feel constrained in what they say in front of their peers. In these sessions, for example, learners may have been reluctant to admit a problem with the materials when their classmates were claiming the instruction was straightforward or easy. The final point about taking group inventory on an identified problem or suggestion concerns the value of learner consensus on any one idea. It could be argued that agreement between subjects means that the problem or revision is not the idiosyncratic preference of a single individual. However, the fact that it may be the idiosyncratic preference of a single group should be considered, particularly in light of the effects of the interviewer's questioning style and the group dynamic. Furthermore, the limitations of the expertise of the learner should be kept in mind. Tallying learner preference is not equivalent to determining instructional effectiveness.
The evaluator's questioning style may also have had an effect on the amount of data from the on-line portion of the 1-1 Active condition. Some researchers advocate active one-to-one sessions because of the opportunity to probe for problem explications and revision suggestions (e.g., Komoski, 1974). Yet an examination of the on-line THEMES suggested that such probing yielded little information from this sample. In only one case did a probe lead to the identification of a problem. The evaluator posed choice requestives most frequently during the session. Perhaps a variation in the evaluator's questioning technique would have increased on-line problem identifications and revision suggestions. On the other hand, the number of problems identified overall in the 1-1 Active condition was twice as high as in the 1-1 Passive condition. Perhaps the rapport established between the evaluator and learner while on-line allowed the learner to better play the role of critic demanded of developmental testing subjects. Research investigating both the effect of the evaluator's questioning style on learner data and the effectiveness of on-line probing is warranted.
Strings. As the reviewed literature indicated, many instructional developers and educational researchers advocate developmental testing to identify the flaws in a set of instructional materials. In light of this objective, one of the most interesting findings in this study was that more than two-thirds of learner comments were not critical of the learning package or did not suggest changes or improvements. Learners frequently praised the materials, stated that they didn't have problems with the instruction, and reported their personal study habits. In the groups, this tendency might be explained in part by the group dynamic. It might also be inferred that the materials were of a sufficiently high calibre to warrant praise or at least not to invite undue criticism. However, it seems just as likely that the learners were ill-prepared for the task of learning the content while simultaneously critiquing the materials. The task of critic is new and less practiced in comparison with the role of learner. Support for this notion comes from a THEME discussed in the Group Passive debriefing. The evaluator presented this item from the questionnaire: "I often needed to go back over portions of the materials to fully understand it." Learners were asked whether their agreement with this statement indicated a problem with the instruction or reflected a strategy they used in studying. All of the learners responded with a description of their personal study habits. Two of the learners remarked that they rehearsed the content because they knew they were going to be tested. Comments like these indicated that these subjects focused on the learning component rather than the critiquing component of the evaluator's request, and that the task definition of learner took precedence on-line.
Another possible explanation for the high frequency of noncritical comments is related to the way the evaluator presented himself to the learners. Instead of the detached interviewer role advocated by some researchers, this evaluator at times adopted a very involved psychological set and personalized the process by asking learners how he could change the materials or what they thought of the way he had designed a particular section. There is a possibility that the learners minimized critical remarks in response to this personal approach. Furthermore, because the evaluator was inconsistent about this presentation, the influence of the involved evaluator role cannot be considered a constant across the conditions.
The proportion of identified problems which were not followed by either an invitation to compose a solution or a volunteered solution outweighed those problems which had corresponding revision suggestions. This finding may reflect the notion that the expertise of the learner lies in identifying problems rather than performing the designer's function of creating effective instruction. However, if learners are not given the opportunity to propose revisions, the likelihood that they generate solutions is decreased.
The methodology developed in this study provides a conversational framework within which researchers can examine and compare the verbal feedback from learners trying out instructional materials under different conditions. The methodology was shown to be reliable and to have a potential for delineating the unique contribution of learners to materials improvement by facilitating comparison with experts' contributions.

A number of influences on learner data were apparent when the methodology was applied. The evaluator's questioning style, the group dynamics, and the learners' understanding of the developmental testing task were all identified as variables which potentially affect the amount and quality of learner feedback.

Some inherent weaknesses in the design of the tryout sessions confounded the results. First, because the learners were not pretested for prior knowledge, success on the posttest could not be attributed to the instructional effectiveness of the materials. Second, the preset agenda for the debriefing sessions meant spontaneous learner comments were constrained. Third, several items on the attitudinal questionnaire were ambiguous, which in turn yielded ambiguous reactions. Finally, the evaluator occasionally posed as an involved developer, at other times as a detached interviewer. This inconsistency may have affected the overall pattern of results.

Due to the small sample size, the findings from this pilot application signal further research rather than provide conclusions.
Recommendations for Future Research

It is not expected or recommended that the average practitioner undertake this kind of analysis. Rather, in an effort to provide the practitioner with guidelines for effective developmental testing, research efforts should focus on such issues as: (a) Does structuring the attitudinal questionnaire to reflect particular issues affect learner feedback?; (b) Does the learner's understanding of their role as critic affect the quantity and quality of feedback?; (c) Should learners be trained to be critics during developmental testing?; (d) Should requestive types be limited or controlled?; and (e) Should group interaction be encouraged during group sessions?
Another direction for research concerns the expert-derived categorization scheme that was used. Future investigations could address questions such as: (a) Does the learner contribute unique information to the developmental testing of learning materials?; and (b) Is learner feedback in one category rather than another more useful to materials improvement?
Incons1.stencles ln the way
ln whl.ch the developmental
tcstlnq of lhe ffilcroblology unlt was carrled out suggest
other research questIons.
For example,
the fact
prf!'te\:',t was admlnlstercd suggests two( lssues:
(a)
that no
Can the
frequency of noncrltlcol remarks be reduced by pretestinq to
c:>llmlna te 1earners too advanced
for the ma ter la 1 s?; and
(b)
Ooes learner ablilty affect the amount of verbal feedback
durlnq materlal s
evaluator's role.
tryout~
Another que5tlon concerns the
Spec1.fically, does a noninvolved
evaluator's role lncrease
learner feedback?
1
Answcrlng these questlons lnvolve5 the systematlc
comparlson of verbal
,
lnformatlon and could be accompllshed
u51ng thlS methodology.
durlng developmental
Slnce optlmlzlng learner feedback
testlng 15 an lmportant contrlbution ta
the lnstruct1.onal materlals lmprovement process, contlnued
research 15 recommended.
REFERENCES

Abedor, A. (1971). Development and validation of a model explicating the formative evaluation process for multi-media self-instructional learning systems. (Doctoral dissertation, Michigan State University, Lansing, Michigan).

Abedor, A. (1972). Second draft technology: Development and field test of a model for formative evaluation of self-instructional multi-media learning systems. Viewpoints, 48(4), 9-43.

Adair, J. G. (1973). The human subject: The social psychology of the psychological experiment. Boston: Little, Brown and Company.

Andrews, D. H., & Goodson, L. A. (1980). A comparative analysis of models of instructional design. Journal of Instructional Development, 3(4), 2-16.

Baker, E. (1973). The technology of instructional development. In F. Travers (Ed.), Second handbook of research on teaching. American Educational Research Association. Chicago: Rand McNally.

Baker, E., & Schutz, R. (1977). Instructional product development. New York: Van Nostrand Reinhold.

Baghdadi, A. (1980). A comparison between two formative evaluation methods. Dissertation Abstracts International, 41, 3387-A.

Baker, E. (1972). Preparing instructional materials for educational developers. Final Report, Project No. 1-0027, U.S. Dept. of H.E.W., Office of Education, National Center for Educational Research and Development.

Baker, E. L., & Alkin, M. C. (1973). Formative evaluation of instructional development. Audiovisual Communication Review, 21(4), 389-418.

Bell, N., & Abedor, A. (1977). Developing audio-visual instructional modules for vocational and technical training. Englewood Cliffs, NJ: Educational Technology.

Bjerstedt, A. (1972). Educational technology: Instructional programming and didakometry. New York: McGraw-Hill.

Brown, D. (1978). Testing the radio and television programmes. Visual Education, Aug.-Sept., 29-30.

Burt, C. W. (1987, April). An experimental study of group and participants' role in developmental testing. Paper presented at the annual convention of the American Educational Research Association, Washington, D.C.

Burt, C., & Geis, G. L. (1986, April). Guidelines for developmental testing: Proposed and practiced. Paper presented at the annual convention of the American Educational Research Association, New Orleans.

Cambre, M. A. (1981). Historical overview of formative evaluation of instructional media products. Educational Communication and Technology Journal, 29(1), 3-25.

Coulthard, M. (1977). An introduction to discourse analysis. London: Longman.

Davis, R., Alexander, L., & Yelon, S. (1974). Learning system design: An approach to the improvement of instruction. New York: McGraw-Hill.

Dental Auxiliary Education Project. (1982). Microbiology. New York: Teachers College Press, Columbia University.

Deterline, W., & Lenn, P. (1972). Coordinated instructional systems. Palo Alto, CA: Sound Education.

Dexter, L. A. (1970). Elite and specialized interviewing. Evanston: Northwestern University Press.

Dick, W. (1977). Formative evaluation. In L. J. Briggs (Ed.), Instructional design: Principles and applications (pp. 311-333). Englewood Cliffs, New Jersey: Educational Technology Publications.

Dick, W. (1980). Formative evaluation in instructional development. Journal of Instructional Development, 3(3), 3-6.

Dick, W., & Carey, L. (1985). The systematic design of instruction (2nd ed.). Glenview, Ill.: Scott, Foresman & Company.

Dore, J. (1979). Conversation and preschool language development. In P. Fletcher & M. Garman (Eds.), Language acquisition (pp. 337-361). Cambridge: Cambridge University Press.

Dupont, D., & Stolovitch, H. (1983). The effects of a systematic revision model on revisors in terms of student outcomes. National Society for Performance and Instruction Journal, March, 33-37.

EPIE Institute (1975). Pilot guidelines for improving instructional materials through the process of learner verification and revision. New York: Author.

Engelman, Z. (1983). Theory of instruction. National Society for Performance and Instruction, March, 13-16.

Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: The MIT Press.

Fletcher, P., & Garman, M. (Eds.). (1979). Language acquisition: Studies in first language development. Cambridge: Cambridge University Press.

Frase, L. E., DeGracie, J. E., & Poston, W. K. Jr. (1974). Product validation: Pilot test or panel review. Educational Technology, 14(8), 32-35.

Friesen, P. (1973). Designing instruction. Santa Monica, CA: Miller Publishing.

Geis, G. L. (1986, April). Learners as partners in instructional design. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco.

Geis, G. L. (1987). A program of research in formative evaluation. Manuscript submitted for publication, McGill University, Montreal.

Geis, G. L. (1988, April). Profiles of the actors in formative evaluation. Paper presented at the Annual General Meeting of the American Educational Research Association, New Orleans, LA.

Geis, G. L., Burt, C., & Weston, C. B. (1984). Procedures for developmental testing: Prescriptions and practices. Manuscript submitted for publication, McGill University, Montreal.

Geis, G. L., Weston, C. B., & Burt, C. (1984). Instructional development: Developmental testing. Manuscript submitted for publication, McGill University, Montreal.

Gropper, G. (1975). Diagnosis and revision in the development of instructional materials. Englewood Cliffs, NJ: Educational Technology Publications.

Hartley, J. (1972). Evaluation. In J. Hartley (Ed.), Strategies for programmed instruction: An educational technology (pp. 133-173). London: Butterworths.

Hartley, J. (1981). Eighty ways of improving instructional text. IEEE Transactions on Professional Communication, PC-24(1), 17-27.

Hartley, J., & Burnhill, P. (1977). Fifty guidelines for improving instructional text. Programmed Learning and Educational Technology, 14(1), 65-73.

Hartley, J., & Trueman, M. (1980). Some observations on producing and measuring readable writing. Programmed Learning and Educational Technology, 17(3), 164-174.

Hayes, J., Flower, L., Schriver, K., Stratman, J., & Carey, L. (1985). Cognitive processes in revision. (CDC Technical Report No. 12). Pittsburgh, PA: Carnegie-Mellon University, Communications Design Center.

Henderson, E. S., & Nathenson, M. B. (1977). Case study in the implementation of innovation: A new model for developmental testing. In P. Hills, J. Gilbert, & R. E. B. Budgett (Eds.), Aspects of educational technology (Vol. 11): The spread of educational technology (pp. 114-120). New York: Nichols Publishing.

Horn, R. (1966). Developmental testing. Ann Arbor, MI: Center for Programmed Learning for Business.

Johnson, R., & Johnson, S. (1975). Toward individualized learning: A developer's guide to self-instruction. Reading, Massachusetts: Addison-Wesley Publishing Company.

Kandaswamy, S. (1980). Evaluation of instructional materials: A synthesis of models and methods. Educational Technology, 20(6), 19-26.

Kandaswamy, S., Stolovitch, H., & Thiagarajan, S. (1976). Learner verification and revision: An experimental comparison of two methods. Audiovisual Communication Review, 24(3), 316-328.

Komoski, P. K. (1974). An imbalance of product quantity and instructional quality: The imperative of empiricism. Audiovisual Communication Review, 22(4), 357-386.

Komoski, P. K. (1983). Formative evaluation: The empirical improvement of learning materials. Performance and Instruction Journal, 22(5), 3-4.

Komoski, P. K., & Woodward, A. (1985). The continuing need for the learner verification and revision of textual material. In D. H. Jonassen (Ed.), The technology of text: Vol. 2. Principles for structuring, designing and displaying text (pp. 396-415). New Jersey: Educational Technology Publications.

Locatis, C., & Smith, F. (1972). Guidelines for developing instructional products. Educational Technology, April, 54-57.

Lowe, A. J., Thurston, W. I., & Brown, S. B. (1983). Clinical approach to formative evaluation. Performance and Instruction Journal, 22(5), 8-10.

Macdonald-Ross, M. (1978). Language in texts. In L. S. Shulman (Ed.), Review of research in education (pp. 229-275). Itasca, Ill.: F. E. Peacock Publishers, Inc.

Markle, S. (1967). Empirical testing of programs. In P. Lange (Ed.), Programmed instruction (pp. 104-138). The sixty-sixth yearbook of the National Society for the Study of Education, Part II. Chicago, Ill.: University of Chicago Press.

Markle, S. (1978). Designs for instructional designers. Champaign, Ill.: Stipes Publishing Company.

Merrill, D., & Tennyson, R. (1977). Teaching concepts: An instructional design guide. Englewood Cliffs, NJ: Educational Technology.

McAlpine, L. (1987). The think-aloud protocol: A description of its use in the formative evaluation of learning materials. Performance and Instruction, 26(8), 18-20.

Medley-Mark, V., & Weston, C. (in press). A comparison of student feedback obtained from three methods of formative evaluation of instructional materials. Instructional Science.

Montague, W. E., Ellis, J. A., & Wulfeck, W. H. (1983). Instructional quality inventory: A formative evaluation tool for instructional development. Performance and Instruction Journal, 22(5), 11-14.

Nathenson, M. B., & Henderson, E. S. (1977). Problems and issues in developmental testing. National Society for Performance and Instruction Journal, 19(1), 9-10.

Robeck, M. (1965). A study of the revision process in programmed instruction. Unpublished master's thesis, University of California, Los Angeles.

Sanders, J. R., & Cunningham, D. J. (1973). A structure for formative evaluation in product development. Review of Educational Research, 43(2), 217-236.

Saroyan, A., & Geis, G. (in press). An analysis of guidelines for expert reviewers. Instructional Science.

Schuman, H., & Kalton, G. (1985). Survey methods. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology: Volume I (3rd ed.) (pp. 635-698). New York: Random House.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. XX-XX). Chicago: Rand McNally.

Sinclair, J. McH., & Coulthard, R. M. (1975). Towards an analysis of discourse: The English used by teachers and pupils. London: Oxford University Press.

Stolovitch, H. (1982). Applications of the intermediate technology of learner verification and revision (LVR) for adapting international instructional resources to meet local needs. Performance and Instruction Journal, 21(7), 16-22.

Thiagarajan, S. (1976). Learner verification and revision. Audiovisual Instruction, 21(1), 18-19.

Thiagarajan, S. (1978). Instructional product verification and revision: 20 questions and 200 speculations. Educational Communication and Technology Journal, 26(2), 133-141.

Thiagarajan, S., Semmel, D., & Semmel, M. (1974). Instructional development for training teachers of exceptional children: A sourcebook. Bloomington, IN: Center for Innovation in Teaching the Handicapped.

Wager, J. C. (1983). One-to-one and small group formative evaluation: An examination of two basic formative evaluation procedures. Performance and Instruction Journal, 22(5), 5-7.

Weber, R. P. (1985). Basic content analysis. Beverly Hills, CA: Sage.

Weston, C. B. (1986). Formative evaluation of instructional materials: An overview of approaches. Canadian Journal of Educational Communication, 15(1), 5-17.

Weston, C. B. (1987). The importance of involving experts and learners in formative evaluation. Canadian Journal of Educational Communication, 16(1), 45-58.

Weston, C. B. (1988, April). A synthesis of the data gathering components of formative evaluation. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Weston, C. B., Burt, C., & Geis, G. (1984, April). Instructional development: Revision procedures. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.

Wydra, F. (1980). Learner controlled instruction. Englewood Cliffs, NJ: Educational Technology Publications.
APPENDIX A

Stimulus Materials
INSTRUCTIONS

The following learning materials are in first draft form. The purpose of this session is NOT to test how well you can learn, but how well the materials teach. Your feedback on the effectiveness of the material is important. Go through the material as best you can and if you encounter difficulty within the material, please underline the section(s) of the text, diagrams, etc., that you feel are problematic. After underlining the problem area(s), write on the materials themselves what you think is the reason for the difficulty you have encountered. Finally, if you feel that you can, suggest a way to revise the material so that another student would not encounter the same problem (please write out your revision on the material itself). If you can't suggest a revision, don't worry; just move on within the material.

Thank you very much for your cooperation.
Instructions

Please answer the following questions. The information will be kept strictly confidential.

1. Mother tongue:
2. Age (years):
3. The number of years of schooling (not including the present CEGEP year):
4. Please list your previous courses in biology and chemistry (including the present CEGEP year):
MICROBIOLOGY RELATED TO STERILIZATION AND DISINFECTION
MICROBIOLOGY OVERVIEW
Page 5

As a nurse, it is very important for you to maintain the chain of asepsis during all clinical procedures. You will learn the specific skills needed to keep the dental environment and instruments clean and sterile in the module Sterilization and Disinfection. This submodule will provide you with background information and knowledge of microorganisms and their relationship to the sterilization and disinfection process.
MICROORGANISMS DESCRIBED
Page 6

INTRODUCTION

Microbiology studies microorganisms, or microscopic forms of life. These microscopic organisms can be classified or separated into four distinct groups.

OBJECTIVES

1. Define and spell terminology:
   a. microbiology
   b. bacteria
   c. viruses
   d. fungi
   e. protozoa
   f. cocci
   g. bacilli
   h. spirochetes
   i. microscope
   j. cultures
2. Name and describe the four different classifications of microorganisms.
3. Describe the techniques scientists use to study microorganisms.
4. Name and describe the three shapes of bacteria.
5. Differentiate between the microorganisms.
.1'.
MICROBIOLOGY DEFINED
Microbiology is the name given to the branch of science that studies forms of life which are microscopic, or not visible to the naked eye. These microscopic forms of life are called microorganisms. Microbiology studies how microorganisms live and die, and the helpful and harmful effects microorganisms have on other living things, including man.

CLASSIFICATION OF MICROORGANISMS
One useful way of studying microscopic forms of life is to group them into general classifications. Four general classifications will be discussed here:
1. Bacteria
2. Viruses
3. Fungi
4. Protozoa

BACTERIA
Bacteria are very small microscopic plants that are unicellular, which means that they are composed of only one cell. Bacteria usually have a rigid cell wall, are transparent, and are no larger than 1/50,000 of an inch in size. If bacteria were large enough to see with the naked eye you would find that there are three different shapes:
1. spheres called cocci;
2. rods called bacilli;
3. spirals called spirochetes, or spirillum.

VIRUSES
Viruses are submicroscopic, or even smaller than bacteria. Viruses exist in several shapes and sizes, but the viruses affecting man and animals are usually spherical in shape.

FUNGI
Fungi are usually larger than either viruses or bacteria, but many are still microscopic. Fungi are plant-like microorganisms such as molds and yeasts.

PROTOZOA
Protozoa are the largest of the four types of microorganisms. They are usually unicellular and are sometimes large enough to see with the naked eye.
TECHNIQUES USED TO STUDY MICROORGANISMS
THE MICROSCOPE
Most organisms studied in microbiology are so small that they cannot be seen by the naked eye. Therefore, the scientist must use special instruments and special techniques to study microorganisms. One instrument used is the microscope. The microscope is a precision instrument used to magnify or enlarge the microorganism to many times its normal size so that it can be readily studied. Magnification by the microscope can be as little as two times or as much as 3,000 times the normal size of the microorganism.

STAINING
Frequently the microorganism must be prepared in a special way so that it may be viewed with the microscope. Sometimes it is placed in water so that it can be viewed alive. At other times, the microorganism or parts of its structure are difficult to see, so it is stained with a special solution to make it more clearly visible.

For example, Gram stain is a commonly used staining technique. It differentiates between bacteria by dividing them into two groups, Gram Positive or Gram Negative.
CULTURING
Another technique used by scientists to study microorganisms is known as culturing. In this technique, scientists in the laboratory observe the growth of microorganisms on specially prepared plates, dishes, or in test tubes. A special food known as a culture medium is placed in the dishes. The microorganisms are then placed in the culture medium and allowed to grow. By using different culture media and exposing the dishes to different environmental factors, the scientists learn about the microorganisms.

BIOCHEMICAL TESTS
Another way to study microorganisms is through the use of different biochemical tests. Such tests demonstrate the presence of different systems within the cell. One such test is fermentation, in which the microorganism changes sugar into alcohol.
ANIMAL INOCULATION
Another technique used to study microorganisms is to introduce them into laboratory animals such as mice or rats and observe the animals for changes. This is known as animal inoculation. Sometimes the animal is killed and examined for the presence of internal reactions to the microorganism.
SELF-TEST 1

1. What are the four general classifications of microorganisms?
a.
b.
c.
d.
2. Spherical-shaped bacteria are called:
a. bacilli
b. spirochetes
c. vibrios
d. cocci
3. Which of the four classifications of microorganisms sometimes contain microorganisms large enough to see with the naked eye?

4. A precision instrument used by scientists to magnify microorganisms for study is known as the ____________.
5. Fermentation is an example of:
a. staining
b. a biochemical test
c. culturing
d. inoculation
ATTITUDINAL QUESTIONNAIRE

NAME: ______________________  DATE: __________

LESSON TITLE: Sterilization and Disinfection 1
Please be frank and honest in answering the following questions. Remember, you are our prime source of information regarding what needs to be revised.

KEY:
1 means you strongly agree
2 means you agree
3 means you are uncertain
4 means you disagree
5 means you strongly disagree
1. I had sufficient prerequisites to prepare me for this lesson.  1 2 3 4 5

2. I was often unsure of what, exactly, I was supposed to be learning.  1 2 3 4 5

3. After completing the lesson, I felt that what I learned was either directly applicable to my major interest, or provided important background concepts to me.  1 2 3 4 5

4. The way the material was formatted often distracted my attention.  1 2 3 4 5

5. Going through the material became tedious, or boring.  1 2 3 4 5

6. This lesson was very well organized. The concepts were highly related to each other.  1 2 3 4 5

7. There was too much information being conveyed in the lesson.  1 2 3 4 5

8. There was too much redundancy. I was bored by the repetition of ideas.  1 2 3 4 5

9. There was a lot of irrelevant information in this lesson.  1 2 3 4 5

10. The lesson was excellently designed. I could easily follow the instructions.  1 2 3 4 5

11. This lesson had very serious gaps and lacked internal continuity.  1 2 3 4 5

12. The examples used to illustrate main points were excellent.  1 2 3 4 5

13. The vocabulary used contained many unfamiliar words. I often did not understand what was going on.  1 2 3 4 5

14. The Self-Test at the end of the lesson did a good job of testing my knowledge of the main points in the lesson.  1 2 3 4 5

15. I had no idea of how I was doing during the lesson.  1 2 3 4 5

16. At the end of the lesson I was still uncertain about a lot of things and had to guess on many of the questions on the Self-Test.  1 2 3 4 5

17. I believe I learned a lot, considering the time spent on this lesson.  1 2 3 4 5

18. I would recommend extensive modifications to the lesson before using it with other students.  1 2 3 4 5
19. For you, what was the most difficult part of the lesson? ______________

20. What was the easiest part of the lesson? ______________

21. What were the three worst things about this lesson? ______________
22. I understood most of the concepts and vocabulary immediately after completing the lesson.  1 2 3 4 5

23. I think this whole procedure of trying out new materials with students is a waste of time.  1 2 3 4 5

24. I would prefer a textbook or lecture version of this lesson rather than the workbook version.  1 2 3 4 5

25. I often needed to go back over a portion of the lesson to fully understand it.  1 2 3 4 5

26. After completing the lesson, I was more interested in and/or favorably impressed with the general subject matter than I was before the lesson.  1 2 3 4 5

27. Please write below any comments, suggestions or changes which you believe will improve this lesson. Thank you.
APPENDIX B
Compilation of Guidelines for Expert Reviewers
Guidelines for Presentation (italics indicate conflicting items)

I. Physical Attributes
1. Use of space:
• Use blank lines for paragraph identification.
• Use indentation for paragraph identification.
• Single space text.
• Double space text.
• Provide ample space where written answers are elicited.
• Space should be blank line rather than box format.
• Reduce space between questions and multiple choice answers to minimize visual error.
• Space between words should remain consistent (about the space required by lower case 'i').
2. Type size and style:
• Use legible typeface.
• Maintain consistency in typeface.
• Use 10 point type for instructional materials with a maximum of 60 characters per line and 75 words per paragraph.
• Use simple serif or sans serif type styles for instructional text.
• Create moderately high brightness contrast and minimum reflectance between print and background.
• Avoid italics as they reduce reading speed.
• Avoid boldface type as it makes text less legible.
3. Format and layout:
• Maintain a clear and logical format.
• Organize format to adhere to the structure of the content being presented.
• Use various aids such as numbering systems, headings, indentation, and spacing to promote a logical presentation.
4. Use of colour:
• Use colour sparingly and with a purpose which is clearly explained.
• Use colour to enhance or highlight a display and to promote discrimination between elements.
• Avoid use of colour as a typographic cue.
• Use colour in simple diagrams to motivate.
• Use colour in complex diagrams to provide more information.
• Maintain a correct balance of colour.
5. Density:
• Avoid crowding the text or the screen in order to make reading easier.
6. Typeface (upper and lower case):
• Use lower case type for rapid identification.
• Use lower case type for headings and subheadings.
• Use upper case type for initial letters and proper nouns.
• Avoid using a string of capital letters, in particular whole paragraphs.
7. Typeface and legibility:
• Avoid illegible print.
• Avoid using transparent or reversed lettering.
• Avoid condensed type.
• Avoid using bold or extra bold typeface except where emphasis is needed.
• Do not print words at an angle to the horizontal.
• Avoid inconsistent arrangement of words and titles.
8. Page size and style:
• Use standard page size of A4 or A5.
• Maintain a consistent structure, especially in page length and visual balance, to make presentation aesthetically pleasing.
9. Margins:
• Use unjustified right margins. Right justification impairs reading and causes awkward word spacing and hyphenation.
• Limit total margin space to 50% of the total page.
10. Columns:
• Use a two column structure instead of a one or three column structure for straightforward prose.
• Use a single column text for content which is interrupted by charts and tables.
11. Type of paper:
• Avoid using dark coloured paper.
• Choose correct paper stock.
II. Organization Techniques
1. Components of instructional package:
• Include all relevant components in instructional package: overview, summary, table of contents, bibliography, glossary, index, advance organizer, cumulative review, test items and answer key, introduction, quick reference section, course guide and teacher manual, job aid, resource material, supplementary material, explanation of error codes (for computer manuals), topical outline, questions after lessons, course schedules and time frames, and course flow or pathing.
• Present footnotes, external to the text, in a separate section.
2. Headings and sub-headings:
• Use headings and sub-headings as they can clarify and be informative.
• Present headings and sub-headings in question form.
• Place at the beginning of the left hand margin in horizontal layouts.
• Use lower case typeface for headings and sub-headings.
• Use a consistent method for allocating space between headings and sub-headings.
• Avoid using alternative methods of dividing text, such as the engineering convention of number/decimal/number, although this particular method is preferred to using alphanumeric characters.
3. Use of numbers:
• Use prose description for information that must be remembered, rather than used, such as percentages, probabilities and numerical data.
• Avoid overuse of apostrophes.
• Use Arabic rather than Roman numerals for sections, but not for sub-sections, where small Roman numerals are preferred.
• The use of numbers is encouraged for a sequence of steps or in lieu of subheadings, and for displaying nested content.
• The use of the number symbol rather than prose is preferable in instructional text, in particular when presenting a series of items.
4. Cueing:
• Use cues to emphasize objects of importance, to draw attention, and to elicit feedback.
• Explain the cueing system.
• Do not substitute spatial cues with typographical cues when list-like content is presented.
• Multiple cueing is discouraged.
• Box important concepts.
• Use boxes and frames to emphasize concepts.
5. Titles:
• Use clear and concise titles for chapters and sections.
• Print titles in the same fashion throughout the text.
• Make titles as short as possible.
6. Lists:
• Use lists when several items have to be discussed.
• Divide long lists into groups with subheadings identifying each group.
• Grammar should be consistent throughout list items.
III. Graphics
1. Illustrations:
• Use graphics only if they are supportive of content and if they accomplish something that the narrative cannot.
• Use illustrations which are unambiguous and closely integrated with the meaning of text.
• Position illustrations relative to one another, normally on the page following the text.
• Illustrations should be appropriate for the intended audience.
• Clarify purpose of illustrations (e.g., are they attentional, explicative, or retentional?).
• Use line drawings instead of realistic drawings or photographs for the sake of simplicity.
2. Graphs / Barcharts / Pie Charts:
• Use pie charts instead of pictorial charts or cross-sectional drawings of three-dimensional objects when percentages and quantities are estimated.
• Note that pie charts create difficulties when lettering has to be included or proportions have to be judged. Use barcharts when making static comparisons and line graphs for dynamic comparisons.
• Use either barcharts or graphs to display trends and to estimate percentages and quantities.
3. Tables:
• Make tables as comprehensive as possible, displaying patterns and trends clearly.
• Group items in table along one axis.
• Subdivide items when they cannot be grouped together.
• Narrow tables should be left ranging while wider tables with many columns should be simplified by using row headings.
4. Flow Charts:
• Use flow charts or decision tables for complex legal documents, government plans, or specific information with many irrelevant factors.
• Use flowcharts when complex information has to be sorted or choices have to be made.
• Provide the reader with ample information on using flowcharts.
IV. Medium of Presentation
• Specify medium of the instructional package.
• Furnish related information such as the title, date of publication, publisher and cost.
• Use current media which are supportive and appropriate for presenting the subject matter.
• Use multi-sensory presentations.
V. Professional Packaging
• Maintain technical accuracy such as focus, recording fidelity, editing, mounting of pictures, clarity of graphics, integration of music, and the correctness of exposure and lighting.
• Use credible voices for recordings.
• Use credible authors who are known in the field.
• Do not include misleading content or misinformation.
• Avoid typographic and punctuation errors.
• Ensure durability of product.
Guidelines for Instructional Design and Development

I. Development Considerations
1. Adaptability:
• Can materials be adapted to user needs?
• Can materials be adapted to student needs?
• What is the amount of manipulation or control that either group might exercise in the presentation, general use, and interaction with the materials?
• Can the teacher change the parameters to suit the needs of the learners as well as the use of various presentation, probing, and reinforcement techniques?
• Are the necessary requirements or qualifications of the teacher clearly stated?
• Have the materials taken individual differences into consideration?
• Are resource materials provided?
• Is there flexibility in the user environment?
• Are there prescriptions about the methodology, group size, and allotted time?
2. Performance data:
• Ascertain whether the instructional package includes any information on its previous success with representative users.
• Have problems been associated with the materials?
• Have there been regular revision and updating practices?
• Are validity and reliability scores available?
3. Hardware and software (mostly relevant to computer software):
• Does software have back-ups?
• Can software be copied?
• Is the hardware easy to use?
4. Instructional environment for materials:
• Are the required facilities, space, and the specific conditions under which the materials are most likely to be effective stated?
• Is there specific information on student/teacher ratio?
• Is there any information on the reliability of materials and their effectiveness with various groups of students?
5. Justification of need:
• Is there a definite need for the materials?
• How do materials fit within the overall curriculum?
• Is it possible that materials will overlap with other programs?
6. Cost effectiveness:
• Is it cost effective to develop the materials?
• Are management, maintenance, and replacement costs reasonable?
7. Rationale and philosophy:
• Is the rationale and philosophy for studying the package stated explicitly, and is it in harmony with the educational goals of that particular area of education?
8. Educational value:
• Do the materials have educational validity?
• Is there evidence of the effectiveness of the instructional materials?
• Is there any value in the supportive components?
• Do materials facilitate comprehension, retention, and transfer of knowledge?
9. Definition of the product:
• Is there a complete description of the course or product, including the purpose for which the materials have been developed?
• Is it easy to update and revise materials?
• What is the difference between the current version and previous editions?
II. Design Components
1. Target audience:
• Consider grade, ability level, and physical, emotional and intellectual readiness of the audience both at the time of the design and during the development and revision of the materials.
• Ascertain match between the instructional materials and the intended users' abilities.
2. Feedback:
• Is there a prompt and appropriate feedback system present within every aspect of the instructional program? (Prompt, in some instances, has been defined as immediate, while appropriate type of feedback has been used to convey any of the following: positive or negative reinforcement, the encouragement of accuracy rather than speed, personalized feedback, and consistency in the language used for corrective feedback.)
3. Tests:
• Assess performance by means of tests which are valid, reliable, and constructed on psychometric principles.
• Use criterion-referenced tests rather than norm-referenced tests.
• Align tests with stated objectives.
• Ensure that the tests used are adequate for diagnosing difficulties.
• Clearly state the mastery criterion.
4. Links to prior learning:
• When designing materials, select those student experiences which need to be included in the materials.
• Decide in advance when to link new knowledge to prior knowledge that is unrelated to the subject matter content.
• Relate novel content to familiar content, particularly when abstract ideas are introduced.
• Ensure that materials stimulate the application of skills or knowledge previously learned.
5. Practice:
• Include practice which will promote the desired outcome behaviour.
• Practice should provide the opportunity for the learner to draw upon all of the newly acquired skills and information.
6. Entry level prerequisites:
• Explicitly state prerequisites for the materials, such as necessary skills and the overall level of the student, for the teacher.
7. Evaluation:
• Apply evaluation to the process of material development, the methodology, the educational goals, and to the materials.
• Monitor the physical conditions and the procedures to be used during evaluation.
• Monitor the frequency and consistency of the evaluation scheme.
8. Prescriptions for revision:
• Identify areas where materials can be improved.
• Remain consistent in the review strategy.
• Revise by simplifying.
• Revise with a delay after writing the original draft.
• Revise with the help of colleagues.
III. Organization of Instructional Content
1. Alignment of components (consistency):
• Maintain a match between various instructional components such as: content and objectives, objectives and goals, objectives and tests, objectives and learning activities, tests and goals, goals and content, feedback and content, and goals and instruction.
2. Logical sequence:
• Organize instructional materials and present them logically, according to some scheme or model such as Gagne's learning hierarchies, errorless discrimination, interrelationships of subjects, levels of difficulty, and in the case of software, by list rather than command.
3. Objectives:
• Identify objectives and prioritize and list them in sufficient detail so that course construction can be possible.
• Ensure the presence of instructional objectives.
• Ensure that objectives are stated in performance terms, particularly for less structured materials.
• Ensure that objectives are well written, reasonable, attainable, and suitable for the content.
• Define the mastery criterion in advance so that success in achieving the objectives is clear.
• Specify the source of the objectives and their particular emphasis.
4. Components of unit:
• Maintain the presence of the following components: topical outline, lists of concepts to be learned, lists of objectives, demonstration exercises, case problems, analogies, motivational components, graphic specifications, instructional strategies, and tutorials for hands-on experience.
5. Presentation and integration of components:
• Make sure the overall design is appropriate for the content.
• Use examples, prompts, visuals, narrative displays, colour, and sound with an instructional plan in mind.
• Include prerequisites prior to the main idea and non-prerequisites after organized content.
• Present summaries or outlines either before or after the content.
• Present declarative knowledge before procedural knowledge.
• Incorporate practice after demonstration frames.
• Emphasize the most important ideas in the materials.
6. Small steps:
• Divide instructional units into small, discrete sections, grouping similar concepts in the same section.
Guidelines for Content

I. Syntactic Structure of Content
1. Word level:
• Do not use jargon, acronyms, abbreviations, technical or complex terms.
• Do not use misinterpretable and inappropriate words, undefined terms, and multiple synonyms.
• Identify a new term by a visual cue such as underlining or using a different typeface. Cueing procedure must be explained prior to its use.
2. Sentence level:
• Use the active voice rather than passive.
• Use the present tense.
• Address the reader by name or by a pronoun.
• Use relative pronouns and other 'function words' to promote comprehension.
• Use simple and short sentences.
• Limit the number of clauses contained in a sentence to one unless subordinate clauses are used to reduce repetition. Limit the number of subordinate clauses to two per sentence.
• Avoid using negatives, in particular double or triple negatives, in discourse.
• Use negatives in imperatives or when a particular emphasis is to be made.
• The sequence of events in a sentence should correspond with the temporal order of the events.
• Avoid noun strings.
• Use action verbs instead of converting nouns to verbs.
• Use readability formulae to match the reading level of the content with that of the anticipated users.
II. Semantic Structure of Content
1. Use of examples:
• Use examples which are clarifying, realistic, and purposeful.
• Examples should precede practice and feedback.
• Present examples along with non-examples.
• Present concrete as well as abstract examples.
• Start with positive and simple examples and move to complex and abstract ones.
2. Coherence:
• Ascertain that content elements are properly integrated.
• Group related items together.
• Untangle convoluted sentences.
3. Repetition:
• Repeat the important ideas as often as possible to enhance learning.
4. Familiarity:
• Maintain a balance between novel and familiar content. Novel content adds to complexity.
• Use familiar terms especially when relationships are described.
5. Use of concrete versus abstract ideas:
• Rewrite abstractions into concrete ideas so that readers can perceive them with ease.
6. Comparisons and contrasts:
• Provide comparisons and contrasts when introducing new concepts.
III. Attributes of Instructional Content
1. Value of content:
• Ascertain that content is relevant, important, appropriate, and necessary.
2. Clarity:
• Ascertain that content is clear, unambiguous, non-distracting, concise, and understandable.
• Present instructions in a manner which is easy to follow.
3. Content accuracy:
• Ascertain that content is accurate and that integrity of the subject matter is maintained.
• Verify the source of the subject matter.
• Specify whether the content is documented and research based, and whether it has been reviewed by scholars in the field.
4. Comprehensiveness:
• Ensure that instructional content is comprehensive in terms of both quality and quantity.
• Do not omit content for the sake of comprehensiveness.
5. Affective attributes of content:
• Present content which is interesting and motivating.
• Incorporate attention-gaining schemes in instructional content. Familiarity with the attitudes and the cultural context of the target audience may aid in making content motivating and challenging.
6. Objective presentation / Bias:
• Remain objective and unbiased in the presentation of content.
• Do not use stereotypes.
• Do not include misleading content or misinformation.
7. Recency:
• Present content which is 'state-of-the-art', that is, represents current trends in the area.
From "An analysis of guidelines for expert reviewers" by A. Saroyan and G. Geis, 1988 (in press). Reprinted by permission.