
Western Michigan University
ScholarWorks at WMU
Dissertations
Graduate College
4-1992
Effects of Feedback Type and Signal Probability on
Quality Inspection Accuracy
Matthew A. Mason
Western Michigan University
Recommended Citation
Mason, Matthew A., "Effects of Feedback Type and Signal Probability on Quality Inspection Accuracy" (1992). Dissertations. 1980.
http://scholarworks.wmich.edu/dissertations/1980
This Dissertation-Open Access is brought to you for free and open access
by the Graduate College at ScholarWorks at WMU. It has been accepted for
inclusion in Dissertations by an authorized administrator of ScholarWorks
at WMU. For more information, please contact [email protected].
EFFECTS OF FEEDBACK TYPE AND SIGNAL PROBABILITY ON
QUALITY INSPECTION ACCURACY
by
Matthew A. Mason
A Dissertation
Submitted to the
Faculty of The Graduate College
in partial fulfillment of the
requirements for the
Degree of Doctor of Philosophy
Department of Psychology
Western Michigan University
Kalamazoo, Michigan
April 1992
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
EFFECTS OF FEEDBACK TYPE AND SIGNAL PROBABILITY ON
QUALITY INSPECTION ACCURACY
Matthew A. Mason, Ph.D.
Western Michigan University, 1992
A computer simulation was developed to examine the effects of feedback type
(immediate, delayed, or none) and signal probability (p = 0.05 or 0.12) on the accuracy
of identifying signals (missing components), inspection response rate, and response
sensitivity (d'). Subjects were randomly assigned to one of six experimental groups:
(1) immediate feedback with a signal probability of 0.05 (I/0.05), (2) delayed feedback
with a signal probability of 0.05 (D/0.05), (3) no feedback with a signal probability of
0.05 (N/0.05), (4) immediate feedback with a signal probability of 0.12 (I/0.12), (5)
delayed feedback with a signal probability of 0.12 (D/0.12), and (6) no feedback with a
signal probability of 0.12 (N/0.12). In a self-paced computer tutorial, subjects learned
to identify the presence/absence of signals in a schematic diagram of a hard disk drive
on a computer screen. During experimental sessions, subjects were exposed to series
of 200 machine-paced samples and were required to indicate whether or not each
sample contained a signal. Low signal probability resulted in higher inspection
accuracy and lower response sensitivity compared to high signal probability. Type of
feedback did not affect inspection accuracy across experimental groups. However,
some minimal effects of feedback type were evident, including (a) delayed feedback
resulted in lower inspection accuracy during earlier experimental sessions than in later
sessions (immediate and no-feedback conditions showed no such difference); and (b)
high signal probability with delayed feedback resulted in slower response rates than
high signal probability with immediate or no feedback.
Copyright by
Matthew A. Mason
1992
ACKNOWLEDGEMENTS
The preparation of a doctoral dissertation is rarely the effort of a single
individual; there are many to whom I am indebted. First and foremost, I owe gratitude
to William K. Redmon, Ph.D., for his close supervision, guidance, and keen eye for
revision. Few are as deserving of the title of “mentor” as Dr. Redmon.
Special thanks are due to Alyce M. Dickinson, Ph.D., for generously extending
the use of her laboratory facilities and equipment. Heartfelt thanks are extended to
Katie Cronin, whose dedicated assistance was invaluable to the expeditious completion
of this project.
I would also like to thank each of my dissertation committee members for their
expertise and guidance throughout my doctoral studies: Drs. Richard W. Malott, Jack
Michael, and Helen D. Pratt. My future efforts will be a reflection of your teachings.
Financial support for this dissertation was provided by the Organizational
Behavior Management (OBM) Network. The dedication of the OBM Network to
research and students of OBM is an investment in the future.
Last, but never least, a special acknowledgement is given to Asiah Mayang,
companion and best friend, for her constant encouragement, easy laugh, and generous
affection throughout our academic lives. Sayang, Asiah.
Matthew A. Mason
TABLE OF CONTENTS

ACKNOWLEDGEMENTS .......................................................................................... ii

LIST OF TABLES ...................................................................................................... v

LIST OF FIGURES .................................................................................................... vi

CHAPTER

	I. INTRODUCTION ............................................................................................ 1

	II. METHOD ........................................................................................................ 8

		Subjects and Setting .................................................................................. 8

		Quality Control Task ................................................................................. 8

		Dependent Variables ................................................................................ 10

		Apparatus ................................................................................................. 11

		Independent Variables ............................................................................. 11

			Type of Feedback ............................................................................. 11

			Signal Probability ............................................................................. 12

		Experimental Design ................................................................................ 12

		Procedure ................................................................................................. 13

			Subject Training ................................................................................ 13

			Experimental Conditions .................................................................. 15

			Social Validation ............................................................................... 16

	III. RESULTS ..................................................................................................... 17

		Summary of Effects on Inspection Accuracy, Rate, and Sensitivity ....... 17

		Effects on Inspection Accuracy ............................................................... 21

		Effects on Inspection Response Rates ..................................................... 24

		Effects on Inspection Response Sensitivity ............................................. 26

		Social Validation Results ......................................................................... 27

	IV. DISCUSSION ............................................................................................... 30

APPENDICES ........................................................................................................... 36

	A. Informed Consent Form ................................................................................ 37

	B. Reproduction of the Computer Tutorial ........................................................ 39

	C. General Results ............................................................................................. 49

	D. Statistical Calculations .................................................................................. 65

BIBLIOGRAPHY ...................................................................................................... 71
LIST OF TABLES

1. Experimental Conditions and Group Assignment ............................................ 13

2. Summary of Mean Inspection Accuracy (% Correct), Response Rates
   (R/s), Number of Hits (H), Correct Acceptances (CA), False
   Alarms (FA), and Misses (M), Proportion of Hits (p[H]),
   Proportion of False Alarms (p[F]), and Response
   Sensitivity (d') Across Experimental Groups ................................................. 17

3. Summary of Social Validation Questionnaire ................................................. 28

4. Percentage of Correct Inspection Responses, Response Rates (R/s),
   Number of Hits (H), Correct Acceptances (CA), False Alarms
   (FA), and Misses (M), Proportion of Hits (p[H]) and False
   Alarms (p[F]), and Response Sensitivity (d') Across
   Subjects by Experimental Session and Group ................................................ 50

5. Two-Factor ANOVA on Inspection Accuracy (Percentage of Correct
   Responses) ...................................................................................................... 66

6. Two-Factor ANOVA on Split-Half Inspection Accuracy (Mean Percent
   Change) ........................................................................................................... 66

7. Multiple Comparisons (Tukey Procedure): Feedback Type on Split-Half
   Mean Percent Change in Inspection Accuracy ............................................... 67

8. Two-Factor ANOVA on Inspection Rate (Responses per Second) ................. 67

9. One-Factor ANOVAs: Signal Probability and Feedback Type on Mean
   Response Rate ................................................................................................. 68

10. Multiple Comparisons (Tukey Procedure): Feedback Type (Signal p =
    0.12) on Response Rates ............................................................................... 68

11. Two-Factor ANOVA on Response Sensitivity (d') ....................................... 69

12. Two-Factor ANOVA on Number of False Alarms ....................................... 69

13. Two-Factor ANOVA on Number of Misses ................................................. 70
LIST OF FIGURES

1. Sample Stimulus Screen (Actual Size) .............................................................. 9

2. Mean Percentage of Correct Inspection Responses by Group ........................ 21

3. Mean Number of False Alarms and Misses by Group .................................... 23

4. Mean Response Rates (Responses per Second) by Group .............................. 24

5. Mean Response Sensitivity (d') by Group ...................................................... 26
CHAPTER I
INTRODUCTION
A familiar theme in American manufacturers’ advertisements has been the
quality of their merchandise. The importance of quality control to American industry
has grown steadily over the past few decades, ostensibly for economic (e.g.,
production cost reduction and industry competition) and societal (e.g., consumer
demands) reasons. In this regard, Deming (1975) emphasized that the poor quality of
manufactured products in the United States has been responsible for the decline of the
American economy, and recommended the adoption of quality control procedures that
focus on detecting defects during the manufacturing process.
Visual inspection of manufactured products is an important part of many quality
control procedures; however, human inspection often results in low levels of accuracy
of detection of defects (Colquhoun, 1961; Drury & Addison, 1973; Drury & Fox,
1975; Fortune, 1979; Harris, 1968; Harris & Chaney, 1969; Synfelt & Brunskill,
1986; Wiener, 1984). Empirical investigations have examined factors that influence
the accuracy of visual inspection, including inspection methods, supervision (e.g.,
form of feedback used or knowledge of results regarding inspection), and the nature
and complexity of the task and stimuli (Chaney & Teel, 1967; Harris, 1968, 1969;
Harris & Chaney, 1969).
Early studies examined detection of signals in monotonous monitoring tasks
through what is popularly known as vigilance research. Mackworth (1950) developed
a continuous clock test, in which a circular dial with a moving pointer advanced one
discrete unit each second, like a clock. The pointer would infrequently and at irregular
intervals move two units instead of one, and subjects were required to detect such
signals during a two-hour monitoring session. In general, subjects failed to detect up
to 30% of these signals; however, when subjects were provided information regarding
their accuracy, or knowledge of results (KOR), the percentage of signals missed was
dramatically reduced. Studies have also indicated that the percentage of signals detected
decreases as the time spent at the vigilance task progresses (Bakan, 1955; Gallwey &
Drury, 1986; Jenkins, 1957) and increases as the probability of signal presentation
increases (Baddeley & Colquhoun, 1969; Colquhoun, 1961; Craig, 1980; Fortune,
1979; Fox & Haslegrave, 1969; Gallwey & Drury, 1986; Harris, 1968, 1969; Harris &
Chaney, 1969; Jenkins, 1957). A wide range of signal probabilities have been studied,
from 0.01 (e.g., Fortune, 1979; Fox & Haslegrave, 1969), to 0.05 (e.g., Colquhoun,
1961; Craig, 1981), to 0.15 (e.g., Baddeley & Colquhoun, 1969; Craig, 1980, 1981).
The broad experimental base of vigilance research contributed to the
development of signal detection theory (SDT) and related research first introduced by
Tanner and Swets (1954). Signal detection theory places heavy emphasis on the effects
of environmental conditions (i.e., complexity of the inspection task, frequency of
signal occurrences) on the discriminability, or detectability, of a stimulus change.
Fortune (1979) investigated the effects of probability of occurrence of signals on the
accuracy of signal detection in a microscopic inspection task. Zoology graduate
students were required to determine, through microscopic examination, whether tissue
slides prepared from animals exposed to chronic doses of selected chemical compounds
were abnormal (signals) or normal. Fortune concluded that lower inspection accuracy
occurred when the abnormal microscopic signals were less frequent. These results
were tentatively corroborated by a subsequent field study involving experienced
parapathologists as inspectors at an independent toxicology testing center.
Fortune’s (1979) findings support earlier research (e.g., Colquhoun, 1961) and
suggest that inspection accuracy increases as the probability of a signal increases. One
explanation offered by Fortune (1979) for this relationship, also suggested earlier by
Holland (1958), was that signal detection serves as a reinforcer, which motivates the
inspector to continue to effectively search for signals. Annette (1969) proposed a
similar explanation for the effects of KOR on improved signal detection in vigilance
tasks, stating that KOR functions as an incentive or motivator to perform.
Badalamente (1969) investigated the effects of various schedules (fixed ratio,
variable ratio, fixed interval, or variable interval) of presentation of signals (defective
printed circuits) on subjects’ inspection accuracy and response rates. Subjects were
presented with printed circuits on a simulated inspection line, and were required to
visually inspect and remove circuits containing signals (e.g., improperly soldered
circuits) according to predetermined presentation schedules. Subjects exposed to
presentation schedules with high signal frequency detected a greater proportion of
signals and maintained these levels over longer periods, supporting the theory that
signal detection may be reinforcing.
The delay of KOR presentation is also a variable that is apparently related to the
accuracy of performance, although conflicting experimental results exist regarding this
relationship. For example, Saltzman (1951) found that delayed KOR regarding the
correctness of choices in a verbal learning maze resulted in poorer learning rates.
Greenspoon and Foreman (1956) also found that the greater the delay of KOR
regarding performance in a motor learning task (drawing lines of specific lengths while
blindfolded), the slower the learning rates of subjects. In contrast, Bilodeau and
Bilodeau (1958) found no relationship between the delay of KOR and accuracy of
performance in several types of motor tasks (knob turning, lever pulling, and stick
displacement). Furthermore, McGuigan (1959) and Bilodeau and Ryan (1960) found
no relationship between delay of KOR and learning rates using Greenspoon and
Foreman’s (1956) line drawing task. Dyal (1964), however, successfully replicated
Greenspoon and Foreman’s (1956) results. It has been suggested that these
contradictory results may be due to differences in experimental procedures, such as the
complexity or type of task utilized (Dyal, 1965; Teitelbaum, 1967). Moreover, the
effects of delay of KOR on the accuracy of inspection performance have not been
examined in organizational contexts.
Researchers have been critical of laboratory vigilance and signal detection
research, stating that the tasks studied are usually not realistic, and therefore do not
permit generalization to the real world (e.g., Adams, 1987; Gallwey & Drury, 1986;
Mackie, 1984, 1987). Craig (1981) suggested that realistic tasks were more complex,
often involving more than one type of signal, and occurring under more stressful
conditions than laboratory studies. Fortune (1979) also noted that continued research is
needed to determine the conditions responsible for low inspection accuracy in actual
quality control inspection tasks. It is possible that some of the variables that have
already been studied, such as KOR or signal probability, may exhibit similar effects on
performance in more realistic laboratory tasks or in actual organizational settings, but
empirical studies are needed to determine if this holds true.
Another variable important to performance accuracy is the pace at which items
are presented. Conrad (1955) examined the effect of self-paced versus machine-paced
(conveyor belt) presentation on the number of glass jars packed, and found that more
jars were missed (i.e., not packed) when the packing task was machine-paced.
Bertelson, Boons, and Renkin (1965) studied mail sorting by addresses under
machine-paced and self-paced conditions. Increases in machine-pacing speeds were
accompanied by increases in the number of incorrectly sorted or omitted letters.
Salvendy and Humphreys (1979) found that subjects in a machine-paced task (i.e.,
marking and stapling computer cards) produced more incorrectly completed cards than
subjects in the self-paced task.
McFarling and Heimstra (1975) investigated the effect of machine-pacing
versus self-pacing and product complexity on quality inspection of simulated computer
boards. Subjects’ performance (identification of defects) under the self-paced task was
superior to performance under the machine-paced condition, although performance
under both conditions deteriorated when the most complex computer board was
inspected.
Based on the research reviewed to this point, it is clear that both KOR and
pacing are critical factors in inspection accuracy in quality control. However, recent
studies in Organizational Behavior Management (OBM) have not addressed these
factors and have focused on quality issues only to a limited extent (O’Hara, Johnson &
Beehr, 1985). One early study conducted by McCarthy (1978) employed a graph
posted each day in the yarn spinning department of a textile yarn mill showing the
number of yarn bobbins improperly positioned on a yarn spinner (leading to increased
production costs). A dramatic decrease in the number of improperly positioned
bobbins occurred when the graph was posted.
Krigsman and O'Brien (1987) compared the effects of self-monitoring with
self-monitoring plus quality circle groups on metal clip conservation and on
motivational correlates (e.g., absenteeism and lost work time). They found that while
both programs were successful in reducing metal clip waste, only the self-monitoring
feedback plus quality circle group decreased absenteeism and lost work time. Henry
and Redmon (1991) utilized supervisor feedback to increase the number of completed
work tasks in a statistical quality control program. Behaviors associated with the
collection and recording of statistical process control (SPC) data were identified and
measured. Supervisors then provided written feedback to individual workers regarding
the number of assigned daily tasks completed for that day. The introduction of
feedback substantially improved the percentage of SPC tasks completed.
As indicated in the literature above, most quality research has focused on task
features, and little attention has been given to the effects of performance feedback
(KOR) on inspection accuracy. Most performance feedback research in the OBM
literature has emphasized quantity of performance (Merwin, Thomason, & Sanford,
1989), and has not measured quality of work products (O’Hara, Johnson, & Beehr,
1985). Furthermore, quality studies that have employed performance feedback have
applied performance feedback in an all-or-none manner, and have yet to thoroughly
compare various types of feedback (Balcazar, Hopkins, & Suarez, 1986; Chhokar &
Wallin, 1984; Duncan & Bruwelheide, 1986; Prue & Fairbank, 1981).
The apparent separation of task and management variables in the study of
quality control inspection performance may have produced an overly simple picture of
what is involved in influencing performance in practical work settings. Quality
inspection is rarely, if ever, done without some form of supervision from others.
Thus, for the purpose of simulating actual inspection tasks, it is important to include
both supervisory variables and changes in task features in quality inspection research.
In this context, the purpose of the present research was to study the combined effects of
changes in signal probability and performance feedback on inspection accuracy and rate
of inspection in a simulated quality inspection task.
More specifically, the present research examined the effects of immediate (i.e.,
onset of less than 0.01 seconds following completion of a response), delayed (i.e.,
onset of 30 minutes following completion of a series of responses), and no visual
feedback (correctness of responses) on the accuracy of identifying signals under low
and high levels of signal probability (0.05 and 0.12). A computer program was
developed to simulate a quality control visual inspection task, and to provide precise
control over experimental conditions. Stimuli were presented by a computer and all
experimental conditions were controlled automatically by computer software.
CHAPTER II
METHOD
Subjects and Setting
Forty-two undergraduate students at a midwestern university served as
subjects. Before participating in the experiment, subjects were required to read and
sign an informed consent form. Each subject was paid a monetary sum of $25.00 for
participating in the study after completing all experimental sessions. All experimental
sessions were conducted in an office containing a desk, chair, and the experimental
apparatus, described in detail in the following sections.
Quality Control Task
Quality control inspection tasks typically involve the visual detection of
differences between a sample stimulus and a model stimulus, and a subsequent response
that indicates whether the sample stimulus varies in physical appearance from the model
stimulus (i.e., contains a signal). Sample stimuli were comprised of two-dimensional
depictions of hard disk drives presented on a computer screen (see Figure 1). All
inspection responses were made using a computer mouse. Each computer screen
consisted of four elements: (1) a sample hard disk drive, (2) a black rectangular region
located in the upper-right portion of the screen labelled “Percentage of Correct
Responses,” (3) an “Accept” indicator, and (4) a “Reject” indicator. The computer
functions of each of these elements are described in the procedures section.
Sample hard disk drives were composed of 40 components (see Figure 1) with
Legend.
1 = Region in which feedback is displayed; 2 = Sample hard disk drive;
3 = “Accept” (no signal present) and “Reject” (signal present) indicators.
Figure 1. Sample Stimulus Screen (Actual Size).
nine different types of components, including: (1) five voltage regulators, (2) five large
screws, each set at a 45 degree angle, (3) three pairs of connected or “soldered”
memory chips, (4) four coprocessor chips, (5) four fuses, (6) two processor chips, (7)
nine left-pointing resistors, (8) seven right-pointing resistors, and (9) one small screw
set at a 135 degree angle (see Screen #6, Appendix B). A signal to be detected
consisted of any sample disk drive that was missing any one of these 40 components.
In the present study, subjects were required to indicate the presence or absence of a
signal (i.e., a missing component) in sample hard disk drives. If a signal were present,
subjects were not required to specify the component that was missing.
Dependent Variables
The dependent variables included (a) the percentage o f correct inspection
responses (hits and correct acceptances), (b) the rate (number of responses per second)
of inspection responses, and (c) the level of inspection response sensitivity (d'). A
correct inspection response could occur either by rejecting a sample stimulus that
contained a signal (a hit), or by accepting a sample stimulus which contained no signal
(a correct acceptance); correspondingly, incorrect inspection responses could occur by
accepting a sample stimulus which contained a signal (a miss), or by rejecting a sample
stimulus which contained no signal (a false alarm). Response rates were calculated on
the basis of all responses, regardless of their accuracy.
Response sensitivity (d'), or the ability to discriminate the presence/absence of a
signal, is a convenient summary measure. Response sensitivity is calculated using the
formula d' = z(H) - z(F), where H equals the number of hits divided by the number of
hits plus the number of misses (the proportion or probability of hits) of a given session;
F equals the number of false alarms divided by the number of false alarms plus the
number of correct acceptances (the proportion or probability o f false alarms) of a given
session; and z(H) and z(F) are translations of H and F into standard-deviation units (z).
When subjects cannot discriminate the presence or absence of a signal at all, H = F and
d' = 0, representing total response insensitivity. When subjects can absolutely
discriminate the presence/absence of a signal, H = 1.0 and F = 0, representing perfect
response sensitivity. However, because the proportions 1.0 and 0 do not convert into
standard-deviation units, the effective ceiling for H = 0.99, F = 0.01 (depending on the
number of decimal places used), and d' = 4.65 (MacMillan & Creelman, 1991). A
common correction to avoid conversion problems, used in the present study, is to add a
response frequency of 0.5 to all data measures (hits, correct acceptances, misses, and
false alarms) regardless of their value; this correction does not influence the accuracy of
the data (Snodgrass & Corwin, 1988).
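The calculation above can be translated directly into a few lines of code. The following is an illustrative Python sketch of the d' formula and the 0.5 correction, not the software actually used in the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_acceptances):
    """Response sensitivity d' = z(H) - z(F) for one session's counts."""
    # Add 0.5 to every cell so proportions of exactly 0 or 1 cannot occur
    # (the correction from Snodgrass & Corwin, 1988, described above).
    h = (hits + 0.5) / (hits + misses + 1.0)                    # proportion of hits
    f = (false_alarms + 0.5) / (false_alarms + correct_acceptances + 1.0)
    z = NormalDist().inv_cdf                                    # proportion -> z units
    return z(h) - z(f)
```

When a subject responds at chance (H = F), the function returns 0; a perfect session with 10 signals and 190 non-signals yields a value somewhat below the 4.65 ceiling noted above, because the correction keeps the proportions away from 1.0 and 0.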
Apparatus
An Apple® Macintosh™ Plus computer system was programmed to present
sample stimuli and to collect data on quality control performance. All subjects
responded using the Apple® Mouse; no computer keyboard was required for
responding during the experimental sessions. The inspection simulation was
programmed using Apple® Macintosh™ HyperCard™ (version 1.2.2) software (Apple
Computer, Inc., 1989).
Independent Variables
Two independent variables were manipulated in this experiment: (1) type of
feedback (immediate, delayed, or no feedback), and (2) the probability of a signal
occurring (0.05 or 0.12). Each of these conditions is described in detail below.
Type of Feedback
Subjects exposed to the immediate feedback condition o f the experiment were
presented with the visual display of cumulative percentage of correct inspection
responses on the computer screen immediately following each inspection response
(i.e., less than 0.01 seconds). Subjects in the delayed feedback condition were
presented with the visual display of cumulative percentage of correct inspection
responses following completion of the entire experimental session (200 trials, a delay
period of approximately 30 minutes). Subjects in the no-feedback condition did not
receive feedback regarding their inspection performance.
Feedback appeared in the upper-right corner of the computer screen (see Figure
1). For the immediate feedback condition, after a response occurred, the cumulative
percentage of correct inspection responses (accurate to one decimal place) appeared in
the upper right portion of the computer screen. If a previous percentage was present,
it was removed and replaced with an updated percentage. Furthermore, the updated
percentage briefly flashed on and off in order to enhance its discriminability. At the end
of a session, for both immediate and delayed feedback conditions, the image of the hard
disk drive was removed from the screen and replaced with a message that read “End of
Session.” The final percentage was briefly flashed on the screen three times,
accompanied by computer beeps, and then the computer screen became black, with a
message that the subject should notify the experimenter that the session had concluded.
Signal Probability
In addition to the type of feedback, the probability of a signal occurring was
manipulated for each session, which consisted of 200 trials. Two signal probabilities
were examined (0.05 and 0.12). Thus, when the signal probability was 0.05, ten
signals occurred in a session of 200 trials; when the signal probability was 0.12,
twenty-four signals occurred in a session. The particular trials of a session that
contained a signal and the location of signals were randomly selected by the computer
program prior to the beginning of each session.
Experimental Design
A 2 x 3 between-groups design was utilized in this experiment (see Table 1),
resulting in six experimental groups: (1) immediate feedback with a signal probability
of 0.05 (I/0.05), (2) delayed feedback with a signal probability of 0.05 (D/0.05),
Table 1

Experimental Conditions and Group Assignment

Signal          Immediate    Delayed     No
Probability     Feedback     Feedback    Feedback

0.05            I/0.05       D/0.05      N/0.05
0.12            I/0.12       D/0.12      N/0.12
(3) no feedback with a signal probability of 0.05 (N/0.05), (4) immediate feedback
with a signal probability of 0.12 (I/0.12), (5) delayed feedback with a signal probability
of 0.12 (D/0.12), and (6) no feedback with a signal probability of 0.12 (N/0.12).
Subjects were randomly assigned to one of the six experimental groups, with seven
subjects assigned to each group.
Procedure
Subject Training
Subjects initially were trained to recognize the various components of the hard
disk drive during a training session tutorial programmed on the computer unit, using
the same HyperCard™ software described earlier. The training session tutorial
required approximately 30 minutes. A reproduction of the tutorial is located in
Appendix B.
At the beginning of the training session tutorial, the subject was instructed to sit
in front of the computer monitor, to which was attached a computer mouse (access to a
keyboard was neither required nor available during training or experimental sessions).
The subject was instructed in the use of the computer mouse, which had a single button
on its top surface. This mouse was easily maneuvered with the hand, allowing the
index finger to rest on the mouse button. Subjects were shown that by
gently pressing down on the mouse button with the forefinger and then releasing the
button, an audible click was produced; this mouse button response was referred to as
clicking. Subjects were also shown that movement of the mouse horizontally and/or
vertically resulted in a correlated movement of a pointer (the screen cursor) on the
computer screen. Once the subjects were instructed in mouse usage, they were given a
few moments to practice.
Complete instructions for using the mouse to make inspection responses were
presented sequentially on the computer screen in the form of a self-paced tutorial. At
the beginning of the training session tutorial, the subject was presented with a written
description on the computer screen of the general purpose of both the tutorial and the
experiment. After reading the material on the first screen, subjects were instructed to
position the screen cursor, using the mouse, on a right-pointing arrow labeled “More”
located in the lower right corner of the screen, and to click the mouse button, which
produced the second screen of new tutorial information. On subsequent screens, a
left-pointing arrow labeled “Back” appeared in the lower left corner of the screen and,
when clicked, returned the subject to the previous screen of information. In this manner,
subjects navigated through the tutorial at their own pace.
The training session tutorial described and illustrated to the subject the
following: (a) use of the computer mouse, (b) the components of sample stimuli, (c)
how to indicate whether a sample stimulus did or did not contain a signal, and (d) how
the computer displayed the percentage of correct inspection responses (for those
subjects receiving feedback only).
A brief inspection test, similar to experimental trials, was administered to all
subjects following completion of the training session tutorial to ensure that subjects
could identify signals. If a sample hard disk drive contained a signal, subjects were
required to click on “Accept”; if a sample hard disk drive did not contain a signal,
subjects were required to click on “Reject.” The test consisted of 10 sample trials, with
half of the trials containing a signal (signal probability = 0.50). Subjects were required
to inspect 80% of the trials correctly in order to complete the tutorial and continue to the
experimental phase. Subjects who did not complete 80% of the trials correctly were
required to repeat the test. If a score of 80% was not achieved after five such
repetitions, the subject was required to repeat the training tutorial.
Experimental Conditions
After completing the tutorial, each subject completed five experimental sessions.
During each session, subjects were presented with a series of 200 computer screens
(each screen representing a single trial) depicting a sample hard disk drive (see Figure
1). Subjects were required to respond by clicking on either “Accept” or “Reject” on the
computer screen, as in the tutorial test. A response on either “Accept” or “Reject”
produced an audible beep from the computer, and removed “Accept” and “Reject” from
the computer screen, preventing any further responses.
When the session involved immediate feedback, clicking on “Accept” when a
signal was present (a missing component), or clicking on “Reject” when no signal was
present (no component missing), resulted in an immediate increase in the cumulative
percentage of correct inspection responses for that session immediately following each
inspection response (as previously described in the independent variable section).
Clicking on “Accept” when no signal was present, or “Reject” when a signal was
present, resulted in an immediate decrease in the cumulative percentage of correct
inspection responses for that session immediately following each inspection response.
When the session involved delayed feedback, the percentage was presented following
completion of the 200th trial; when the session involved no feedback, the percentage
was not presented.
Subjects had a maximum of ten seconds to complete each inspection response
before the next trial began; if the inspection response was completed in less than ten
seconds, the subsequent trial followed immediately after the response. If no response
was made within the ten-second interval, the computer advanced to the next trial and an
incorrect response was tallied.
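The trial sequencing, scoring, and feedback contingencies described in this section can be summarized in a short sketch. This is a hypothetical reconstruction in Python, not the original HyperCard program; the function names and structure are illustrative only.

```python
import random

def run_session(respond, n_trials=200, p_signal=0.05, feedback="none"):
    """One simulated session: signal trials are placed at random, each
    response is scored, and cumulative percent-correct feedback is
    produced according to the feedback condition.

    `respond(has_signal)` returns "Accept" or "Reject", or None for a
    timeout; a timeout is tallied as an incorrect response.
    """
    n_signals = round(n_trials * p_signal)             # 10 or 24 signal trials
    trials = [True] * n_signals + [False] * (n_trials - n_signals)
    random.shuffle(trials)                             # random signal placement
    correct = 0
    for i, has_signal in enumerate(trials, start=1):
        r = respond(has_signal)
        # Correct: "Accept" on signal trials, "Reject" on no-signal trials.
        if (has_signal and r == "Accept") or (not has_signal and r == "Reject"):
            correct += 1
        if feedback == "immediate":                    # updated after every response
            print(f"{100 * correct / i:.1f}%")
    final = 100 * correct / n_trials
    if feedback in ("immediate", "delayed"):           # delayed: shown only at the end
        print(f"End of Session: {final:.1f}% correct")
    return final
```

Under this scoring, a subject who gives the same response on every trial earns the base-rate percentage determined solely by the signal probability, which is relevant to the response-bias argument taken up in Chapter IV.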
Social Validation
Upon completion of the final experimental session, each subject was asked to
complete a brief questionnaire, designed to determine subjects’ opinions of the
experiment. Five questions were presented sequentially to each subject on the
computer using the HyperCard™ program. Subjects were provided with a keyboard,
and then typed their responses on the computer screen, which were stored by the
computer program. The five questions included:
1. What do you think was the purpose of this study?
2. Groups I/0.05, D/0.05, I/0.12, and D/0.12 only: Was the feedback you
received (percentage of correct responses) helpful to you in detecting missing parts?
3. At any time during the study were you frustrated or upset with the computer
program? If so, then describe how you felt and what made you feel that way.
4. During the study, did you develop any rules to guide your responses? If so,
please describe any such rule(s).
5. Please provide any general comments about this inspection study.
CHAPTER III
RESULTS
Summary of Effects on Inspection Accuracy, Rate, and Sensitivity
The main results of this experiment include (a) the mean percentage of correct
inspection responses; (b) the mean rate of inspection responses; (c) the mean number of
hits, correct acceptances, false alarms, and misses; and (d) the mean inspection
response sensitivity (including the mean proportion of hits and false alarms) across the
six experimental groups by subject. These results are summarized in Table 2; detailed
data for each subject by experimental group and session are presented in Table 4 (see
Appendix C).

Table 2

Summary of Mean Inspection Accuracy (% Correct), Response Rates (R/s), Number of
Hits (H), Correct Acceptances (CA), False Alarms (FA), and Misses (M), Proportion
of Hits (p[H]), Proportion of False Alarms (p[F]), and Response Sensitivity (d')
Across Experimental Groups

Subject   % Correct   Rate    H      CA      FA     M      p[H]   p[F]    d'

Group I/0.05 (Signal Probability = 0.05; Immediate Feedback)
1         95.8        0.43    2.5    190.5   0.9    8.5    0.23   0.005   1.852
2         95.8        0.43    3.5    189.5   1.3    7.5    0.32   0.007   2.045
3         95.4        0.28    2.3    189.5   0.9    4.1    0.21   0.005   1.763
4         97.7        0.23    6.7    190.1   0.5    3.9    0.63   0.003   3.106
5         96.4        0.17    5.3    188.3   1.5    5.3    0.51   0.007   2.540
6         95.5        0.46    1.7    190.3   0.5    4.7    0.16   0.003   1.691
7         97.7        0.48    6.1    190.3   0.5    4.9    0.55   0.003   2.900
SD        0.99        0.13    2.3    0.8     0.4    1.8    0.19   0.002   0.575
Mean      96.3        0.35    4.0    189.8   0.9    5.6    0.37   0.005   2.271

Group D/0.05 (Signal Probability = 0.05; Delayed Feedback)
1         97.8        0.20    7.3    189.3   0.5    3.7    0.66   0.003   3.190
2         96.3        0.25    5.3    188.3   1.1    5.7    0.48   0.006   2.560
3         96.6        0.17    5.7    188.9   0.5    5.3    0.52   0.003   2.800
4         93.4        0.32    3.3    184.9   2.5    7.7    0.30   0.015   1.927
5         98.3        0.25    7.1    190.5   0.5    3.9    0.64   0.003   3.220
6         92.5        0.23    5.7    180.7   10.3   5.3    0.52   0.050   2.400
7         95.2        0.29    7.1    186.5   4.5    3.9    0.64   0.024   2.791
SD        2.16        0.05    1.4    3.3     3.6    1.4    0.13   0.017   0.454
Mean      95.7        0.24    5.9    187.0   2.8    5.1    0.54   0.015   2.700

Group N/0.05 (Signal Probability = 0.05; No Feedback)
1         94.9        0.37    1.5    189.3   0.9    9.5    0.14   0.005   1.494
2         96.0        0.24    3.3    190.3   0.7    7.7    0.30   0.004   2.044
3         97.2        0.25    8.1    187.3   3.7    2.9    0.73   0.020   3.200
4         93.8        0.28    4.9    183.7   7.3    6.1    0.45   0.038   1.600
5         98.1        0.20    7.9    189.3   1.7    3.1    0.72   0.009   3.180
6         97.7        0.30    6.7    189.7   1.3    4.3    0.61   0.007   3.114
7         96.8        0.17    4.1    190.5   0.5    6.9    0.37   0.003   2.413
SD        0.16        0.066   2.5    2.4     2.4    2.5    0.22   0.013   0.745
Mean      96.4        0.26    5.2    188.6   2.3    5.8    0.47   0.012   2.435

Group I/0.12 (Signal Probability = 0.12; Immediate Feedback)
1         94.1        0.30    13.7   175.5   1.5    11.3   0.53   0.008   2.543
2         97.2        0.25    19.1   176.3   0.7    5.9    0.76   0.004   3.434
3         92.8        0.50    11.3   175.5   1.5    13.7   0.51   0.008   2.288
4         94.5        0.23    14.3   175.7   1.3    10.7   0.57   0.007   2.736
5         95.7        0.16    16.7   175.8   1.2    8.3    0.67   0.007   2.972
6         91.0        0.37    8.1    175.2   1.8    16.9   0.69   0.012   3.241
7         97.3        0.21    19.1   176.5   0.5    5.9    0.79   0.003   3.631
SD        2.30        0.12    4.1    0.5     0.5    4.1    0.11   0.003   0.488
Mean      94.7        0.29    14.6   175.8   1.2    10.4   0.65   0.007   2.978

Group D/0.12 (Signal Probability = 0.12; Delayed Feedback)
1         94.2        0.17    14.3   175.3   1.7    10.7   0.57   0.010   2.709
2         93.8        0.22    14.9   174.3   2.7    10.1   0.60   0.015   2.591
3         97.2        0.18    19.7   175.1   1.1    5.3    0.79   0.006   3.413
4         90.9        0.21    12.1   170.3   1.9    11.7   0.51   0.011   2.462
5         95.4        0.18    21.1   170.5   0.5    2.9    0.88   0.003   4.062
6         94.8        0.15    19.9   169.7   1.9    4.7    0.81   0.011   3.408
7         92.3        0.24    11.7   173.5   2.1    13.3   0.47   0.012   2.396
SD        2.1         0.03    3.9    2.4     0.7    4.0    0.16   0.004   0.629
Mean      94.1        0.19    16.2   172.7   1.7    7.7    0.66   0.010   2.519

Group N/0.12 (Signal Probability = 0.12; No Feedback)
1         92.6        0.31    10.9   175.3   1.1    13.3   0.43   0.006   2.463
2         86.3        0.41    6.9    166.5   2.7    16.9   0.29   0.016   1.819
3         91.9        0.30    11.1   173.7   3.1    13.9   0.44   0.018   2.277
4         93.1        0.31    11.3   176.1   0.7    13.5   0.46   0.005   2.501
5         93.1        0.23    15.1   172.5   0.8    8.9    0.63   0.005   2.952
6         93.4        0.35    11.5   176.3   0.5    13.5   0.59   0.003   2.992
7         93.9        0.29    12.5   176.3   0.5    12.5   0.50   0.003   2.750
SD        2.6         0.05    2.4    3.5     1.1    2.4    0.11   0.006   0.411
Mean      92.0        0.31    11.3   173.8   1.3    13.2   0.48   0.008   2.536
Effects on Inspection Accuracy
Mean inspection response accuracy was high across all groups (see Figure 2),
with Group N/0.12 having the lowest mean percentage of correct inspection responses
(92.0%) and Groups I/0.05 and N/0.05 evidencing the highest performance (96.3%
and 96.4%, respectively). Groups with the low probability of signal occurrence
(I/0.05, D/0.05, and N/0.05) evidenced higher average mean percentages of correct
inspection responses across feedback types than the groups with the high probability
(I/0.12, D/0.12, and N/0.12) of signal occurrence; the average mean percentage of
correct inspection responses of the low signal probability groups was 96.13%,
compared to 93.6% for the high signal probability groups.

Figure 2. Mean Percentage of Correct Inspection Responses by Group.
Mean inspection accuracy of the experimental groups with high signal
probability appeared to be related to the type of feedback, with higher mean inspection
accuracy in Group I/0.12, lower mean inspection accuracy in Group D/0.12, and the
lowest mean inspection accuracy in Group N/0.12. This pattern was not the same for
groups with low signal probability; the highest mean inspection accuracy occurred in
Group N/0.05, slightly lower mean inspection accuracy occurred in Group I/0.05, and
the lowest mean inspection accuracy occurred in Group D/0.05.
A two-factor analysis of variance was used to examine the effects of signal
probability and feedback type on mean inspection accuracy (see Table 5, Appendix D)
and indicated (a) a statistically significant effect of signal probability level on mean
inspection accuracy (F(1, 36) = 17.097, p = 0.0002), (b) a non-significant effect of
feedback type on mean inspection accuracy (F(2, 36) = 1.482, p = 0.241), and (c) a
non-significant effect of the interaction of signal probability and feedback type on mean
inspection accuracy (F(2, 36) = 2.094, p = 0.138). Subsequent review of the data
indicated that inspection accuracy was likely to be lower when signal probability was
high than when signal probability was low, regardless of feedback type or the
combined effect of signal probability and feedback type.
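The degrees of freedom accompanying these F ratios follow directly from the 2 x 3 between-groups layout with seven subjects per cell. The bookkeeping below is a sketch derived from the stated design, not from the original analysis software:

```python
# Degrees of freedom for the 2 x 3 between-groups ANOVA used in this chapter
# (two signal probabilities x three feedback types, n = 7 subjects per cell).
a, b, n = 2, 3, 7
N = a * b * n                         # 42 subjects in total
df_signal = a - 1                     # 1: signal probability main effect
df_feedback = b - 1                   # 2: feedback type main effect
df_interaction = (a - 1) * (b - 1)    # 2: interaction term
df_error = N - a * b                  # 36: error term, hence F(1, 36) and F(2, 36)
# For the separate one-factor follow-ups (three groups of 7 subjects at one
# signal probability level): error df = 3 * 7 - 3 = 18, hence F(2, 18).
print(df_signal, df_feedback, df_interaction, df_error)
```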
A two-factor analysis of variance was also used to assess the effects of signal
probability and feedback type on mean inspection accuracy improvement between the
first two and last two sessions for each group (see Table 6, Appendix D). This
analysis indicated (a) a statistically non-significant effect of signal probability level on
mean inspection accuracy improvement (F(1, 36) = 0.002, p = 0.962), (b) a significant
effect of feedback type on mean inspection accuracy improvement (F(2, 36) = 9.233,
p = 0.001), and (c) a non-significant effect of the interaction of signal probability and
feedback type on mean inspection accuracy improvement (F(2, 36) = 0.037, p = 0.964).
A simultaneous mean comparison analysis (Tukey procedure) of feedback type (see
Table 7, Appendix D) indicated significant differences in inspection accuracy
improvement under immediate and delayed feedback conditions (q(3, 36) = -5.925,
p < 0.05) and under delayed and no-feedback conditions (q(3, 36) = -4.143, p < 0.05).
This analysis indicated that during the last two experimental sessions mean inspection
accuracy was likely to be higher than mean inspection accuracy during the first two
sessions when delayed feedback was provided, but remained approximately the same
when immediate or no feedback was provided.
Figure 3 illustrates the number of false alarms and misses which occurred
across experimental groups.

Figure 3. Mean Number of False Alarms and Misses by Group.

A two-factor analysis of variance was used to examine the
effects of signal probability and feedback type on the number of false alarms and on
misses (see Tables 12 and 13, respectively, Appendix D). This analysis indicated (a)
no statistically significant effects of signal probability or feedback type on the mean
number of false alarms, (b) a statistically significant effect of signal probability level on
the mean number of misses (F(1, 36) = 30.596, p = 0.0001), (c) a non-significant effect
of feedback type on the mean number of misses (F(2, 36) = 2.918, p = 0.067), and (d) a
non-significant effect of the interaction of signal probability and feedback type on the
mean number of misses (F(2, 36) = 1.639, p = 0.208). Thus, subjects in the high signal
probability groups had lower inspection accuracy compared to the low probability
groups (regardless of the feedback type provided) due to a greater number of misses; the
number of false alarms was comparable across feedback types in both signal probability groups.
Effects on Inspection Response Rates
Mean response rates for each group (see Figure 4) varied from 0.19 responses
per second (R/s) for Group D/0.12 to 0.35 R/s for Group I/0.05.

Figure 4. Mean Response Rates (Responses per Second) by Group.

The low signal
probability groups had slightly higher mean response rates (averaged across feedback
types) than the high signal probability groups; the average mean response rate for the
low signal probability groups was 0.28 R/s, compared to 0.26 R/s for the high signal
probability groups.
The low and high signal probability groups exhibited different patterns of
response rates with respect to feedback type. For the low signal probability groups,
Group I/0.05 had the highest mean response rate (0.35 R/s), Group N/0.05 had a
lower mean response rate (0.26 R/s), and Group D/0.05 had the lowest mean response
rate (0.24 R/s). For the high signal probability groups, Group N/0.12 had the highest
mean response rate (0.31 R/s), Group I/0.12 had a slightly lower mean response rate
(0.29 R/s), and Group D/0.12 had the lowest mean response rate (0.19 R/s).
A two-factor analysis of variance was used to examine the effects of signal
probability and feedback type on mean response rates (see Table 8, Appendix D) and
indicated (a) a statistically non-significant effect of signal probability level on mean
response rates (F(1, 36) = 1.247, p = 0.116), (b) a significant effect of feedback type on
mean response rates (F(2, 36) = 5.917, p = 0.006), and (c) a significant effect of the
interaction of signal probability level and feedback type on mean response rates
(F(2, 36) = 4.894, p = 0.013). The significance of the interaction precludes the interpretation of
the main effects; therefore, separate one-factor analyses of variance were used to
compare response rates for both signal probability levels and feedback types (see Table
9, Appendix D). This analysis indicated (a) a statistically non-significant effect on
mean response rates across feedback types at a signal probability level of 0.05 (F(2, 18)
= 3.333, p = 0.0587), and (b) a significant effect on mean response rates across
feedback types at a signal probability level of 0.12 (F(2, 18) = 5.635, p = 0.0126). A
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
26
subsequent simultaneous mean comparison analysis (Tukey procedure) for mean
response rates across feedback type at a signal probability level of 0.12 (see Table 10,
Appendix D) indicated that the mean response rates differed significantly under
immediate and delayed conditions (q(3, 18) = 5.000, p < 0.05), and under delayed and
no-feedback conditions (q(3, 18) = -6.000, p < 0.05), but did not differ significantly for
immediate and no-feedback conditions (q(3, 18) = -1.000, p > 0.05). In general, mean
response rates were slightly higher in the low signal probability groups than in the high
probability groups. Furthermore, among the high signal probability groups, mean
response rates were likely to be similar under immediate and no-feedback conditions,
and both higher than mean response rates under the delayed feedback condition. Mean
response rates in the low signal probability groups were similar across feedback types.
Effects on Inspection Response Sensitivity
Figure 5. Mean Response Sensitivity (d') by Group.

Mean response sensitivity (d') measures for each group (see Figure 5) were
moderately high across groups, with overall low variability across groups. Maximum
response sensitivity is set at 4.65 when the probability of a hit (H) equals 0.99 and the
probability of a false alarm (F) equals 0.01 (MacMillan & Creelman, 1991). Mean
response sensitivity ranged from a low of 2.271 (Group I/0.05) to a high of 2.978
(Group I/0.12). Mean response sensitivity measures of both the low and high signal
probability groups across feedback type were comparable, although the low signal
probability groups had slightly lower overall mean sensitivity scores than the high
signal probability groups (2.469 versus 2.675). No distinct pattern of sensitivity
related to type of feedback was apparent; sensitivity was lowest in Group I/0.05, but
highest in Group I/0.12. Groups D/0.12 and N/0.12 evidenced similar sensitivity
levels (2.519 and 2.536, respectively).
A two-factor analysis of variance was used to examine the effects of signal
probability and feedback type on mean response sensitivity (see Table 11, Appendix D)
and indicated (a) a statistically significant effect of signal probability level on mean
response sensitivity (F(1, 36) = 4.597, p = 0.041), (b) a non-significant effect of
feedback type on mean response sensitivity (F(2, 36) = 1.516, p = 0.233), and (c) a
non-significant effect of the interaction of signal probability level and feedback
type on mean response sensitivity (F(2, 36) = 1.050, p = 0.360). Thus, response
sensitivity was likely to be higher when the signal probability was high than when
signal probability was low, regardless of feedback type or the combined effect of signal
probability and feedback type.
Social Validation Results
When subjects were asked what they thought was the purpose of the study, the
most common responses referred to testing the ability of a human inspector to detect
missing parts (signals) under specific experimental conditions (e.g., lengthy inspection
periods, limited time periods, variable signal locations, monotonous tasks), to compare
human inspection performance to computer inspection performance, or to determine the
way in which humans learn how to visually inspect. Other major findings of the
questionnaire are summarized in Table 3.
Table 3

Summary of Social Validation Questionnaire

            Feedback Helpful?    Upset by Task?    Rules Used?
Group       Yes      No          Yes     No        Yes     No

I/0.05      7        0           5       2         6       1
D/0.05      5        2           6       1         7       0
N/0.05      ...      ...         4       3         7       0
I/0.12      5        2           4       3         5       2
D/0.12      6        1           4       3         7       0
N/0.12      ...      ...         6       1         7       0

Totals      23       5           29      13        39      3
A majority of the subjects who were exposed to either immediate or delayed
feedback (Groups I/0.05, D/0.05, I/0.12, and D/0.12) indicated that they found the
feedback useful in helping them detect signals (82.1%). A majority of subjects also
indicated that the inspection task upset them at some time during the experiment
(69.0%), and a large majority of subjects indicated that they developed some sort of
rule to assist in their inspections (92.9%).
The most common reason for being upset given by subjects was that the
inspection task was monotonous. Subjects stated they were bored with the task, found
the task tedious, or were physically fatigued by the response effort (mouse clicking)
and by watching the screen for such a long period. One subject in Group I/0.12 stated
that the feedback was distracting. Subjects also reported being upset when they
realized that a response they had just completed was incorrect, but were not permitted to
change their decision.
Subjects described a variety of rules they used during their inspection task.
Frequently, these rules described some sort of scanning methodology (i.e., examining
hard disk drive samples from left to right or clockwise; counting the parts on samples to
make sure all were present; grouping parts by location into larger sections and
examining samples by section). Other subjects reported no specific scanning technique;
instead, they memorized the location of all parts, or they simply examined the hard disk
drive samples as a whole unit, looking for conspicuous “bare spots” that did not exist
in samples without signals.
CHAPTER IV
DISCUSSION
This study provided mixed results regarding the effects of signal probability and
feedback type on inspection performance. Low signal probability resulted in higher
inspection accuracy and lower response sensitivity compared to high signal probability.
In general, the type of feedback provided did not affect inspection accuracy across
experimental groups. However, some minimal effects of feedback type were evident,
including (a) groups that received delayed feedback evidenced lower inspection
accuracy during the first two experimental sessions compared to the last two sessions,
whereas immediate and no-feedback conditions showed no such difference; and (b)
high signal probability with delayed feedback resulted in slower response rates than
high signal probability with immediate or no feedback.
Earlier research findings showed that inspection accuracy increased as the
probability of a signal occurrence increased (e.g., Colquhoun, 1961; Fortune, 1979;
Fox & Haslegrave, 1969; Harris, 1968; Jenkins, 1957). The results of the current
study do not support these findings, and indicate the reverse. Inspection accuracy of
Groups I/0.12, D/0.12, and N/0.12, which were exposed to high signal probability
levels (p = 0.12), was lower than that of Groups I/0.05, D/0.05, and N/0.05, which
were exposed to low signal probability levels (p = 0.05). The lower inspection accuracy of
the high signal probability groups was primarily due to a greater number of misses
compared to the low signal probability groups; the number of false alarms was similar
for both the low and high signal probability groups.
Inspection accuracy in this study was generally independent of the type of
feedback provided, except for the minimal effects noted above. This finding is
consistent with some of the earlier laboratory studies on the effects of knowledge of
results on motor learning acquisition (e.g., Bilodeau & Bilodeau, 1958; Bilodeau &
Ryan, 1960; McGuigan, 1959), but inconsistent with others (e.g., Dyal, 1964;
Greenspoon & Foreman, 1956; Teitelbaum, 1967).
Inspection response sensitivity (d'), a measurement of subjects’ ability to
discriminate signal occurrences, was not dependent on feedback type, and only weakly
related to signal probability. Groups exposed to high signal probabilities (I/0.12,
D/0.12, and N/0.12) were only slightly more sensitive to signal occurrences than
groups exposed to low signal probabilities (I/0.05, D/0.05, and N/0.05). Thus,
response sensitivity was moderately high, and similar, across all experimental groups.
Earlier studies in signal detection, as noted above, have suggested that higher
signal probabilities tend to improve signal detection (e.g., Colquhoun, 1961; Fortune,
1979); however, the results of this study clearly contradict this finding. One possible
reason for this contradiction was that the nature of inspection tasks examined in
previous studies differed from the task in the current study. The current study, in an
attempt to emulate an organizational quality control task, utilized a very complex
stimulus; few studies that have examined inspection accuracy have utilized such stimuli,
utilizing instead what Mackie (1984, 1987) referred to as “esoteric tasks” (i.e.,
detecting dial deflections, points of light on a simulated radar screen, or oversized
geometric figures).
It could be argued that merely exposing subjects to lower signal probabilities
permits subjects to attain high scores even if they never detect a single signal. For
example, if subjects in the current study exposed to the low signal probability (p =
0.05) condition always selected “Reject,” regardless of the presence or absence of
signals, they would have scored 95% correct, since only 5% o f session samples would
contain signals. If subjects in the high signal probability group (p = 0.12) responded
similarly, they would have scored 8 8 %. However, the percentages of correct
inspection responses produced in the current study did not follow this pattern; subjects
in the low signal probability groups scored an average of 96.0% correct across
feedback type, whereas subjects in the high signal probability groups scored an average
of 93.6% across feedback type.
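The baselines above are simple arithmetic: under an always-"Accept" strategy, every non-signal sample is a correct acceptance and every signal is a miss, so expected accuracy is one minus the signal probability. A minimal sketch of that calculation (the function name is illustrative, not part of the study's software):

```python
def always_accept_accuracy(signal_probability):
    """Expected proportion correct if the inspector always responds
    "Accept": every non-signal sample is correct, every signal is missed."""
    return 1.0 - signal_probability

# Baselines cited in the text: 95% at p = 0.05 and 88% at p = 0.12.
print(always_accept_accuracy(0.05))  # 0.95
print(always_accept_accuracy(0.12))  # 0.88
```

Observed accuracies (96.0% and 93.6%) exceed these respective baselines, which is the point of the comparison in the text.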
Response sensitivity scores (d') also suggest that subjects were not responding
in a random fashion, and indicate that they did not select "Reject" or "Accept"
regardless of the presence or absence of a signal. If subjects were completely
insensitive to signal occurrence (d' = 0), the probability of detecting the presence of a
signal (hit probability, or H) would equal the probability of not detecting the presence
of a signal (miss probability) and the probability of indicating the presence of a signal in
the absence of a signal (false alarm probability, or F). Therefore, subjects who were
relatively insensitive to signal presence would generally produce low response
sensitivity scores; however, the response sensitivities produced in the current study
were moderately high. Subjects in the low signal probability groups produced an
average sensitivity score of 2.469 across feedback types, whereas subjects in the high
signal probability groups produced an average sensitivity score of 2.678 across
feedback types. These data, as well as the inspection accuracy data, suggest that
subjects were able to discriminate between the presence and the absence of a signal, and
make appropriate inspection responses.
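The sensitivity values reported here are consistent with the standard signal-detection definition d' = z(H) - z(F), where z is the inverse of the standard normal cumulative distribution and H = F yields d' = 0. A minimal sketch (the function name is illustrative), using a hit/false-alarm pair that recurs in the Appendix C data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(F),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# p[H] = 0.23 and p[F] = 0.003, a pattern common in Table 4,
# give a moderately high d' of about 2.01; H = F would give d' = 0,
# i.e., no ability to discriminate signal from non-signal samples.
print(round(d_prime(0.23, 0.003), 3))  # 2.009
```

Note that a low hit rate alone does not imply low sensitivity: with a very low false alarm rate, d' can remain well above zero, which is why the low-probability groups still show moderately high sensitivity.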
Some researchers have suggested that the functional effects of feedback should
be investigated more systematically so that data from feedback research can better add
to the operant conceptual base (Balcazar et al., 1986; Duncan & Bruwelheide, 1986;
Prue & Fairbank, 1981). It has been suggested that feedback might function as
reinforcement (Balcazar et al., 1986); if this were the case in the current study,
inspection accuracy, response rates, and response sensitivity should have been higher
when feedback was provided. However, the provision of feedback did not generally
affect inspection accuracy or response sensitivity, and actually was associated with
decreased response rates under the high signal probability condition. The suggestion
that feedback functions as reinforcement assumes that feedback achieves its functional
effects due to subjects’ prior similar experiences with evidence of work improvement or
failure. The data of this study suggest that immediate provision of percentage of correct
responses did not function as reinforcement for inspection accuracy; it may be that
subjects in the current study lacked the experiences assumed above, or that the
percentage of correct responses provided functioned ineffectively as feedback for
inspection accuracy.
Vigilance and inspection laboratory studies have studied the effects of
immediate versus delayed feedback on performance, but have produced mixed results
(e.g., Bilodeau & Ryan, 1960; Dyal, 1964), perhaps because complex or realistic
organizational tasks have not been used (Craig, 1981; Mackie, 1987). Organizational
research regarding feedback, on the other hand, has been rather broad, typically
involving the application of feedback after relatively long intervals (Balcazar et al.,
1986), and has indicated that feedback is generally effective in improving performance
across a wide range of complex performances (Balcazar et al., 1986; Duncan &
Bruwelheide, 1986; Prue & Fairbank, 1981). The current study addressed both of
these issues by comparing the effects of specific forms of feedback (immediate,
delayed, and none) on a complex quality inspection task, and the results suggest that
the effects of feedback type on complex inspection performances may be less clear than
suggested by previous studies. The relationship between feedback and quality
inspection needs to be clarified further.
The results of the social validation questionnaire revealed that most subjects
who were provided with the percentage of correct responses considered it helpful in
detecting missing components, even though feedback was not found to influence
inspection accuracy across groups. The questionnaire also revealed that most subjects
utilized rules to guide their performance. Inspection accuracy for the immediate
feedback and no-feedback groups (Groups I/0.05, N/0.05, I/0.12, and N/0.12) was
similar as well as consistent across sessions, as the inspection accuracy during the first
two experimental sessions of both types of groups averaged the same as the last two
sessions. Perhaps the use of these rules superseded the effects of feedback in these
cases, functioning as a form of self-feedback. Agnew and Redmon (in press)
suggested that when no feedback is provided, subjects may develop rules to support
their behavior, although the accuracy of these rules may be variable.
In the present study, delayed feedback may have interfered with the
effectiveness of rules generated by subjects, possibly because such feedback was not
only delayed, but also because this feedback was based on the cumulation of all
inspection responses in an experimental session. Thus, delayed feedback did not
reflect changes in inspection accuracy as the result of individual inspection responses,
but only more global performance. Future studies might address the role of rules on
inspection accuracy directly, perhaps by examining the effects of providing subjects
with different types of inspection rules prior to performing inspections.
An examination of alternative kinds of feedback in future studies might also be
useful. For example, instead of a numerical display of cumulative correct responses,
more informative feedback might be provided (e.g., cumulative percentages of both
correct and incorrect responses; on-screen indication of the location of signals, when
present, following the inspection response). Graphic representation of feedback (e.g.,
charts, graphs, or symbols) could also be utilized, instead of a numerical display.
Finally, a comparison of the effects of feedback on quality performance in combination
with differential reinforcement may be a valuable addition to quality management
literature (e.g., examine the effects of types of feedback or signal probability plus
monetary reinforcement).
The social validation questionnaire also indicated that a majority of subjects
became upset at some time during the inspection task, but typically due to the nature of
the task itself, not the experiment. Subjects characterized the inspection task as boring,
monotonous, and tiring. An interesting extension to the current study would be to
examine subjects’ performance and opinion of inspection tasks that involved a greater
variety of items to be inspected, or of types of signals, or of types of responses.
This study was an attempt to emulate an organizational quality control task, and
as such may lack important characteristics of actual work environments (e.g., social and
monetary contingencies, work pressure, stimulus variability). Simulations can be
valuable to organizations in developing more effective quality inspection techniques or
in training inspectors, since they do not entail great investment or risk. The challenge is
to develop simulations which remain cost effective, yet incorporate as many of the
salient features of actual work environments as possible.
APPENDICES
Appendix A
Informed Consent Form
INFORMED CONSENT FOR PARTICIPATION IN A RESEARCH PROGRAM
I understand that I am being invited to participate in a research study to
investigate quality control. The purpose of this research is to examine human inspector
accuracy in a quality control computer simulation. Participation in this study involves
completing one training session and five subsequent test sessions. Each session should
last about 30 minutes, and breaks will be permitted between sessions.
A potential benefit of participation in this survey is to help determine the nature
of quality control tasks. I understand this research involves no risk to me. Any
information obtained in the course of the research will be held in the strictest of
confidence of the Principal Investigator. All information will be utilized for research
purposes only; I will not be identified by name. All stored data will be coded by
numbers with names removed to ensure confidentiality, and stored in a secure file
cabinet. Name and number codings will be available only to the Principal Investigator,
and will be destroyed after data analysis.
I understand that my participation in this research is voluntary. There is no cost
to me other than my time. I understand that I will be paid $25.00 for participating in
this survey and that I may withdraw from participation in this study at any time without
penalty or prejudice, although I will be paid only for the hours I have worked.
I understand that any questions or complaints I have now or at any time in the
future regarding this research or my rights will be answered by contacting Matthew A.
Mason at (616) 375-5386. I may also contact the Western Michigan University faculty
advisor for this study, Dr. William K. Redmon, at (616) 387-4485. My signature
below indicates that I have read and understood the above information and have decided
to participate in this study.
Signature of Subject
Date
Matthew A. Mason
Principal Investigator
Signature of Witness
Time
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
Appendix B
Reproduction of the Computer Tutorial
Reproduction of the Computer Tutorial
Screen #1
Welcome to the Quality Inspection Tutorial! The purpose of this tutorial is to
familiarize you with how this quality inspection task works and what you need to
know to make it work! It's really quite simple and even FUN! Before you begin, I'd
like to Thank You for your participation in this research program...
Screen #2
The first thing you need to learn is how to use the mouse (that funny looking
box to the right of the computer with the cord attached to the monitor). See the
pointing finger on the screen? It's called a "cursor," and by moving the mouse back
and forth on the desk, the cursor moves with it. Go ahead, try it! You've probably
also noticed that there is a button on the top of the mouse, as well as two arrows
below marked "More" and "Back." You guessed it! By positioning the hand over
either arrow, and pressing and releasing the mouse button once, you can move
through the tutorial. Neat, Huh?
Screen #3
You’ve just learned all there is to know about using this program! The only
responses you’ll be required to make will be positioning the cursor and clicking the
mouse button. Think of clicking the mouse button as “selecting” an object on the
screen. If you want to practice selecting a little, select a figure below. If you want to
continue, click on either “More” or on “Back”...
Select Me!
No, Select Me!
No, Me!
Screen #4
In this quality inspection simulation, you will be presented with pictures of
computer hard disk storage drives that have just been “manufactured.” It will be your
job as a Quality Inspector to examine each hard disk drive, and determine whether or
not any parts are missing. Before going any further, let's look at a sample of the
hard disk drive you will be inspecting...
Screen #5
The hard disk drive is made up of 9 different kinds of parts, and 40 total parts:
[Picture of the hard disk drive]
Screen #6
Look closely at each of these 9 parts and their names below:
Processor
Co-processor
Small Screw
Soldered RAM Chips
Fuse
Right-pointing Resistor
Voltage Regulator
Large Screw
Left-pointing Resistor
Screen #7
As stated before, your job as a Quality Inspector is to examine samples of
hard disk drives, and determine whether or not each sample is missing a part. You
will inspect five sessions of hard disk drives. Each session should take about 30
minutes to complete...
Screen #8
During each session, you will be presented with a series of sample hard disk drives,
like the one shown earlier in this Tutorial. Below each sample, you will see two areas
labelled “Accept” and “Reject” like these below:
[Accept]   [Reject]
After you inspect each sample drive, you must click on either “Accept” if no
parts are missing, or on “Reject” if a part is missing. As soon as you click on
“Accept” or “Reject,” you cannot change your response. The program automatically
advances to the next screen...
Screen #9 (Groups I/0.05 & I/0.12 only)
The running percent of correct Accept-Reject responses (like the display
shown below) will be displayed in the upper right portion of the screen.
[PERCENT CORRECT]
This number will indicate whether you have correctly inspected the hard disk drive.
It will be provided immediately after you make an Accept-Reject response. Also,
this percent will only refer to the particular session you are inspecting, not results
from other sessions...
Screen #10 (Groups D/0.05 and D/0.12 only)
The running percent of correct Accept-Reject responses (like the display
shown below) will be displayed in the upper right portion of the screen.
[PERCENT CORRECT]
This number will be provided at the end of each session. It will only refer to the
particular session you are inspecting, not results from other sessions...
Screen #11
The following 10 screens are examples of hard disk drives you might inspect.
Inspect each hard disk drive, and look for missing parts. When you have decided
whether or not a part is missing, click on “Accept” or “Reject.” You must inspect 8
of the following correctly in order to complete the Tutorial; if you do not, you will be
allowed to try again! Click on “More” below to start...
Screen #12
[Sample inspection screen]
Screen #13 (Remediations only)
You have not correctly inspected at least 8 of the hard disk drives.
Continuation of the Tutorial requires that you be able to correctly inspect at least 80%
of the samples; therefore, another set of hard disk drives will be presented (click on
“Repeat”)...
Screen #14 (After five remediations only)
You appear to be having difficulty in inspecting the sample hard disk drives at
the level required to complete this Tutorial (8 of the 10 hard disk drives must be
inspected correctly). In order to help you reach this level, you need to review the
components of the hard disk drive and the instructions for the inspection task. Click
on “Review” below...
[Review]
Screen #15
You have successfully inspected at least 80% of the hard disk drives, and can
now continue through the Tutorial. During actual inspections, there will be a
10-second time limit to complete your inspection and make your Accept-Reject response.
After the time limit is up, the program will automatically advance to the next hard disk
drive. If you do not make an Accept-Reject response before the time limit is up, an
incorrect score will be automatically given...
Screen #16
When you are doing actual session inspections, avoid clicking the mouse
unnecessarily. This will prevent excess wear on the mouse, and prevent you from
making accidental Accept-Reject responses...
Screen #17
That’s about it! You may wish to review one or both of the topics below. If
so, then just click on the appropriate box to indicate what you would like to review.
You will get a quick review, and then be returned here. Click on “More” if you do
not wish to review either of these topics, or are finished reviewing...
Screen #18
This is the end of the Tutorial. If you have any further questions or concerns
not answered by this Tutorial, feel free to ask the session coordinator! If you want to
review the previous items again, click on “Back.” Otherwise, just click on “Done”
below to end the Tutorial...
Appendix C
General Results
General Results
Table 4
Percentage of Correct Inspection Responses, Response Rates (R/s), Number of Hits
(H), Correct Acceptances (CA), False Alarms (FA), and Misses (M), Proportion of
Hits (p[H]) and False Alarms (p[F]), and Response Sensitivity (d') Across Subjects by
Experimental Session and Group
Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group I/0.05 (Signal Probability = 0.05; Immediate Feedback)

   1       1        95.5     0.34    3.5   188.5    2.5    7.5   0.32   0.013   1.858
           2        96.0     0.39    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           3        96.5     0.45    3.5   190.5    0.5    7.5   0.32   0.003   2.280
           4        96.0     0.44    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           5        95.0     0.51    0.5   190.5    0.5   10.5   0.05   0.003   1.103
         Mean       95.8     0.43    2.5   190.5    0.9    8.5   0.23   0.005   1.852

   2       1        94.5     0.16    3.5   188.5    1.5    7.5   0.32   0.008   1.941
           2        97.0     0.29    5.5   189.5    1.5    5.5   0.50   0.008   2.409
           3        95.5     0.43    3.5   188.5    2.5    7.5   0.32   0.013   1.858
           4        96.0     0.63    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           5        96.0     0.66    2.5   190.5    0.5    8.5   0.23   0.003   2.009
         Mean       95.8     0.43    3.5   189.5    1.3    7.5   0.32   0.007   2.045

   3       1        95.0     0.20    4.5   186.5    2.5    6.5   0.41   0.013   2.098
           2        95.0     0.26    1.5   189.5    0.5    9.5   0.14   0.003   1.668
           3        95.0     0.28    0.5   190.5    0.5   10.5   0.05   0.003   1.103
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group I/0.05 (Signal Probability = 0.05; Immediate Feedback)

   3       4        95.5     0.33    1.5   190.5    0.5    9.5   0.14   0.003   1.668
           5        96.5     0.32    3.5   190.5    0.5    7.5   0.32   0.003   2.280
         Mean       95.4     0.28    2.3   189.5    0.9    8.7   0.21   0.005   1.763

   4       1        96.5     0.16    6.5   189.5    0.5    2.5   0.72   0.003   3.331
           2        98.0     0.19    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           3        99.0     0.20    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           4        96.5     0.26    4.5   189.5    0.5    6.5   0.41   0.003   2.520
           5        98.5     0.36    7.5   190.5    0.5    3.5   0.68   0.003   3.216
         Mean       97.7     0.23    6.7   190.1    0.5    3.9   0.63   0.003   3.106

   5       1        95.5     0.13    5.5   184.5    2.5    4.5   0.55   0.013   2.452
           2        97.0     0.14    5.5   189.5    1.5    5.5   0.50   0.003   2.748
           3        98.5     0.16    7.5   190.5    0.5    2.5   0.75   0.003   3.422
           4        95.5     0.24    5.5   187.5    1.5    5.5   0.50   0.008   2.409
           5        95.5     0.18    2.5   189.5    1.5    8.5   0.23   0.008   1.670
         Mean       96.4     0.17    5.3   188.3    1.5    5.3   0.51   0.007   2.540

   6       1        95.5     0.46    1.5   190.5    0.5    9.5   0.14   0.003   1.668
           2        96.0     0.46    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           3        95.0     0.51    0.5   190.5    0.5   10.5   0.05   0.003   1.103
           4        95.5     0.40    2.5   189.5    0.5    8.5   0.23   0.003   2.009
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group I/0.05 (Signal Probability = 0.05; Immediate Feedback)

   6       5        95.5     0.46    1.5   190.5    0.5    9.5   0.14   0.003   1.668
         Mean       95.5     0.46    1.7   190.3    0.5    9.3   0.16   0.003   1.691

   7       1        97.5     0.26    6.5   189.5    0.5    4.5   0.59   0.003   2.976
           2        98.0     0.41    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           3        97.0     0.56    4.5   190.5    0.5    6.5   0.41   0.003   2.520
           4        99.0     0.58    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           5        97.0     0.61    4.5   190.5    0.5    6.5   0.41   0.003   2.520
         Mean       97.7     0.48    6.1   190.3    0.5    4.9   0.55   0.003   2.900

          SD         0.99    0.13    2.3     0.8    0.4    1.8   0.19   0.002   0.575
    Grand Mean      96.3     0.35    4.0   189.8    0.9    5.6   0.37   0.005   2.271

Group D/0.05 (Signal Probability = 0.05; Delayed Feedback)

   1       1        95.0     0.19    6.5   184.5    0.5    4.5   0.59   0.003   2.976
           2        97.0     0.23    4.5   190.5    0.5    6.5   0.41   0.003   2.520
           3        99.0     0.25    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           4        99.0     0.25    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           5        99.0     0.31    8.5   190.5    0.5    2.5   0.77   0.003   3.487
         Mean       97.8     0.20    7.3   189.3    0.5    3.7   0.66   0.003   3.190

   2       1        93.5     0.18    7.5   180.5    3.5    3.5   0.68   0.019   2.522
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group D/0.05 (Signal Probability = 0.05; Delayed Feedback)

   2       2        96.0     0.24    3.5   189.5    0.5    7.5   0.32   0.003   2.280
           3        97.5     0.26    5.5   190.5    0.5    5.5   0.50   0.003   2.748
           4        96.5     0.29    3.5   190.5    0.5    7.5   0.32   0.003   2.280
           5        98.0     0.30    6.5   190.5    0.5    4.5   0.59   0.003   2.976
         Mean       96.3     0.25    5.3   188.3    1.1    5.7   0.48   0.006   2.560

   3       1        94.0     0.14    4.5   184.5    0.5    6.5   0.41   0.003   2.250
           2        96.0     0.19    3.5   189.5    0.5    7.5   0.32   0.003   2.280
           3        97.5     0.20    5.5   190.5    0.5    5.5   0.50   0.003   2.748
           4        98.0     0.15    7.5   189.5    0.5    3.5   0.68   0.003   3.216
           5        98.0     0.15    7.5   190.5    0.5    3.5   0.68   0.003   3.216
         Mean       96.6     0.17    5.7   188.9    0.5    5.3   0.52   0.003   2.800

   4       1        85.5     0.26    3.5   168.5   10.5    7.5   0.32   0.062   1.087
           2        93.0     0.35    2.5   185.5    0.5    8.5   0.23   0.003   2.009
           3        97.0     0.33    4.5   190.5    0.5    6.5   0.41   0.003   2.250
           4        95.5     0.33    3.5   189.5    0.5    7.5   0.32   0.003   2.280
           5        96.0     0.35    2.5   190.5    0.5    8.5   0.23   0.003   2.009
         Mean       93.4     0.32    3.3   184.9    2.5    7.7   0.30   0.015   1.927

   5       1        98.0     0.19    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           2        97.0     0.23    4.5   190.5    0.5    6.5   0.41   0.003   2.520
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group D/0.05 (Signal Probability = 0.05; Delayed Feedback)

   5       3       100.0     0.25   10.5   190.5    0.5    0.5   0.95   0.003   4.393
           4        98.0     0.27    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           5        98.5     0.29    7.5   190.5    0.5    3.5   0.68   0.003   3.216
         Mean       98.3     0.25    7.1   190.5    0.5    3.9   0.64   0.003   3.220

   6       1        74.5     0.21    9.5   141.5   49.5    1.5   0.86   0.260   1.723
           2        98.0     0.20    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           3        97.0     0.23    4.5   190.5    0.5    6.5   0.41   0.003   2.520
           4        96.0     0.23    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           5        97.0     0.29    5.5   190.5    0.5    5.5   0.50   0.003   2.748
         Mean       92.5     0.23    5.7   180.7   10.3    5.3   0.52   0.050   2.400

   7       1        93.0     0.17    9.5   177.5   13.5    1.5   0.86   0.070   2.556
           2        98.0     0.18    6.5   190.5    0.5    4.5   0.59   0.003   2.976
           3        93.0     0.32    3.5   183.5    7.5    7.5   0.32   0.040   1.283
           4        97.5     0.27    5.5   190.5    0.5    5.5   0.50   0.003   2.748
           5        94.5     0.49   10.5   190.5    0.5    0.5   0.95   0.003   4.393
         Mean       95.2     0.29    7.1   186.5    4.5    3.9   0.64   0.024   2.791

          SD         2.16    0.051   1.4     3.3    3.6    1.4   0.13   0.017   0.454
    Grand Mean      95.7     0.24    5.9   187.0    2.8    5.1   0.54   0.015   2.700
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group N/0.05 (Signal Probability = 0.05; No Feedback)

   1       1        94.0     0.16    2.5   186.5    0.5    8.5   0.23   0.003   2.009
           2        95.5     0.24    1.5   190.5    0.5    9.5   0.14   0.003   1.668
           3        95.0     0.32    0.5   190.5    0.5   10.5   0.05   0.003   1.103
           4        96.0     0.51    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           5        94.0     0.61    0.5   188.5    2.5   10.5   0.05   0.013   0.681
         Mean       94.9     0.37    1.5   189.3    0.9    9.5   0.14   0.005   1.494

   2       1        97.5     0.16    5.5   190.5    0.5    5.5   0.50   0.003   2.748
           2        95.5     0.26    1.5   190.5    0.5    9.5   0.14   0.003   1.668
           3        97.0     0.22    4.5   190.5    0.5    6.5   0.41   0.003   2.520
           4        95.5     0.25    4.5   190.5    0.5    6.5   0.41   0.003   2.520
           5        94.5     0.33    0.5   189.5    1.5   10.5   0.05   0.008   0.764
         Mean       96.0     0.24    3.3   190.3    0.7    7.7   0.30   0.004   2.044

   3       1        89.5     0.15    5.5   174.5   16.5    5.5   0.50   0.086   1.405
           2       100.0     0.24   10.5   190.5    0.5    0.5   0.95   0.003   4.393
           3        98.5     0.37    7.5   190.5    0.5    3.5   0.68   0.003   3.216
           4        99.0     0.26    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           5        99.0     0.25    8.5   190.5    0.5    2.5   0.77   0.003   3.487
         Mean       97.2     0.25    8.1   187.3    3.7    2.9   0.73   0.020   3.200

   4       1        93.5     0.17    4.5   183.5    7.5    6.5   0.41   0.039   1.523
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group N/0.05 (Signal Probability = 0.05; No Feedback)

   4       2        94.5     0.22    3.5   186.5    4.5    7.5   0.32   0.024   1.586
           3        92.5     0.26    4.5   181.5    9.5    6.5   0.41   0.050   1.417
           4        93.0     0.37    6.5   180.5   10.5    4.5   0.59   0.055   1.417
           5        95.5     0.36    5.5   186.5    4.5    5.5   0.50   0.024   2.054
         Mean       93.8     0.28    4.9   183.7    7.3    6.1   0.45   0.038   1.600

   5       1        98.5     0.18    8.5   189.5    1.5    2.5   0.77   0.008   3.148
           2        96.0     0.19    7.5   185.5    5.5    3.5   0.68   0.029   2.349
           3       100.0     0.14   10.5   190.5    0.5    0.5   0.95   0.003   4.393
           4        99.0     0.18    8.5   190.5    0.5    2.5   0.77   0.003   3.487
           5        97.0     0.29    4.5   190.5    0.5    6.5   0.41   0.003   2.520
         Mean       98.1     0.20    7.9   189.3    1.7    3.1   0.72   0.009   3.180

   6       1        96.0     0.31    2.5   190.5    0.5    8.5   0.23   0.003   2.009
           2        96.5     0.25    6.5   187.5    3.5    4.5   0.59   0.018   2.976
           3        97.5     0.28    6.5   189.5    1.5    4.5   0.59   0.008   2.976
           4       100.0     0.33   10.5   190.5    0.5    0.5   0.95   0.003   4.393
           5        98.5     0.34    7.5   190.5    0.5    3.5   0.68   0.003   3.216
         Mean       97.7     0.30    6.7   189.7    1.3    4.3   0.61   0.007   3.114

   7       1        98.5     0.16    7.5   190.5    0.5    3.5   0.68   0.003   3.216
           2        96.5     0.22    3.5   190.5    0.5    7.5   0.32   0.003   2.280
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group N/0.05 (Signal Probability = 0.05; No Feedback)

   7       3        96.5     0.14    3.5   190.5    0.5    7.5   0.32   0.003   2.280
           4        96.5     0.15    3.5   190.5    0.5    7.5   0.32   0.003   2.280
           5        96.0     0.18    2.5   190.5    0.5    8.5   0.23   0.003   2.009
         Mean       96.8     0.17    4.1   190.5    0.5    6.9   0.37   0.003   2.413

          SD         0.16    0.07    2.5     2.4    2.4    2.5   0.22   0.013   0.745
    Grand Mean      96.4     0.26    5.2   188.6    2.3    5.8   0.47   0.012   2.435

Group I/0.12 (Signal Probability = 0.12; Immediate Feedback)

   1       1        95.5     0.25   16.5   175.5    1.5    8.5   0.66   0.008   2.821
           2        93.5     0.29   13.5   174.5    2.5   11.5   0.54   0.014   2.426
           3        93.0     0.32   10.5   176.5    0.5   14.5   0.42   0.003   2.546
           4        94.0     0.32   13.5   175.5    1.5   11.5   0.46   0.008   2.309
           5        94.5     0.31   14.5   175.5    1.5   10.5   0.58   0.008   2.611
         Mean       94.1     0.30   13.7   175.5    1.5   11.3   0.53   0.008   2.543

   2       1        96.5     0.16   17.5   176.5    0.5    7.5   0.70   0.003   3.272
           2        96.5     0.19   17.5   176.5    0.5    7.5   0.70   0.003   3.272
           3        98.5     0.24   22.5   175.5    1.5    2.5   0.90   0.008   3.691
           4        98.0     0.32   20.5   176.5    0.5    4.5   0.82   0.003   3.663
           5        96.5     0.32   17.5   176.5    0.5    7.5   0.70   0.003   3.272
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group I/0.12 (Signal Probability = 0.12; Immediate Feedback)

   2     Mean       97.2     0.25   19.1   176.3    0.7    5.9   0.76   0.004   3.434

   3       1        91.5     0.22   10.5   173.5    3.5   14.5   0.42   0.020   1.852
           2        94.5     0.42   13.5   176.5    0.5   11.5   0.46   0.003   2.648
           3        94.0     0.55   12.5   176.5    0.5   12.5   0.50   0.003   2.748
           4        90.5     0.70    6.5   175.5    1.5   17.5   0.70   0.008   1.885
           5        94.0     0.59   13.5   175.5    1.5   11.5   0.46   0.008   2.309
         Mean       92.8     0.50   11.3   175.5    1.5   13.7   0.51   0.008   2.288

   4       1        93.5     0.17   14.5   173.5    3.5   10.5   0.58   0.020   2.256
           2        94.0     0.19   12.5   176.5    0.5   12.5   0.50   0.003   2.748
           3        96.0     0.19   16.5   176.5    0.5    8.5   0.66   0.003   3.160
           4        97.0     0.25   19.5   175.5    1.5    5.5   0.78   0.008   3.181
           5        92.0     0.36    8.5   176.5    0.5   16.5   0.34   0.003   2.336
         Mean       94.5     0.23   14.3   175.7    1.3   10.7   0.57   0.007   2.736

   5       1        96.5     0.11   18.5   175.5    1.5    6.5   0.74   0.008   3.052
           2        96.0     0.18   17.5   175.5    1.5    7.5   0.70   0.008   2.933
           3        97.0     0.13   18.5   176.5    0.5    6.5   0.74   0.003   3.391
           4        96.0     0.16   16.5   176.5    0.5    8.5   0.66   0.003   3.160
           5        93.0     0.24   12.5   174.5    2.5   12.5   0.50   0.014   2.326
         Mean       95.7     0.16   16.7   175.8    1.2    8.3   0.67   0.007   2.972
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group I/0.12 (Signal Probability = 0.12; Immediate Feedback)

   6       1        88.0     0.18    8.5   168.5    8.5   16.5   0.34   0.048   1.233
           2        96.5     0.27   17.5   176.5    0.5    7.5   0.70   0.003   4.030
           3        91.0     0.45    6.5   176.5    0.5   18.5   0.74   0.003   3.391
           4        90.5     0.46    5.5   176.5    0.5   19.5   0.78   0.003   3.520
           5        89.0     0.51    2.5   176.5    0.5   22.5   0.90   0.003   4.030
         Mean       91.0     0.37    8.1   175.2    1.8   16.9   0.69   0.012   3.241

   7       1        96.0     0.15   16.5   176.5    0.5    8.5   0.66   0.003   3.160
           2        97.5     0.23   19.5   176.5    0.5    5.5   0.90   0.003   4.030
           3        97.0     0.24   18.5   176.5    0.5    6.5   0.74   0.003   3.391
           4        99.5     0.19   23.5   176.5    0.5    1.5   0.94   0.003   4.303
           5        96.5     0.22   17.5   176.5    0.5    7.5   0.70   0.003   3.272
         Mean       97.3     0.21   19.1   176.5    0.5    5.9   0.79   0.003   3.631

          SD         2.30    0.12    4.1     0.5    0.5    4.1   0.11   0.003   0.488
    Grand Mean      94.7     0.29   14.6   175.8    1.2   10.4   0.65   0.007   2.978

Group D/0.12 (Signal Probability = 0.12; Delayed Feedback)

   1       1        90.5     0.14   10.5   171.5    5.5   14.5   0.42   0.031   1.679
           2        94.5     0.18   13.5   176.5    0.5   11.5   0.54   0.003   2.848
           3        94.0     0.16   13.5   176.5    0.5   11.5   0.54   0.003   2.848
           4        94.5     0.21   13.5   176.5    0.5   11.5   0.54   0.003   2.848
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group D/0.12 (Signal Probability = 0.12; Delayed Feedback)

   1       5        97.5     0.15   20.5   175.5    1.5    4.5   0.82   0.008   3.324
         Mean       94.2     0.17   14.3   175.3    1.7   10.7   0.57   0.010   2.709

   2       1        89.0     0.23   10.5   169.5    7.5   14.5   0.42   0.042   1.549
           2        92.5     0.23   13.5   173.5    3.5   11.5   0.54   0.020   2.154
           3        95.0     0.21   16.5   175.5    1.5    8.5   0.66   0.008   2.821
           4        96.0     0.23   16.5   176.5    0.5    8.5   0.66   0.003   3.160
           5        96.5     0.22   17.5   176.5    0.5    7.5   0.70   0.003   3.272
         Mean       93.8     0.22   14.9   174.3    2.7   10.1   0.60   0.015   2.591

   3       1        95.5     0.14   19.5   170.5    3.5    5.5   0.78   0.020   2.826
           2        98.0     0.19   20.5   176.5    0.5    4.5   0.82   0.003   3.663
           3        97.5     0.19   19.5   175.5    0.5    5.5   0.78   0.003   3.520
           4        97.0     0.19   18.5   176.5    0.5    6.5   0.74   0.003   3.391
           5        98.0     0.18   20.5   176.5    0.5    4.5   0.82   0.003   3.663
         Mean       97.2     0.18   19.7   175.1    1.1    5.3   0.79   0.006   3.413

   4       1        87.0     0.16   10.5   164.5    3.5   11.5   0.48   0.021   2.004
           2        89.5     0.14   12.5   167.5    4.5   11.5   0.52   0.026   1.931
           3        92.0     0.14   14.5   167.5    0.5    8.5   0.63   0.003   3.080
           4        92.5     0.25   11.5   175.5    0.5   13.5   0.46   0.003   2.648
           5        93.5     0.34   11.5   176.5    0.5   13.5   0.46   0.003   2.648
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group D/0.12 (Signal Probability = 0.12; Delayed Feedback)

   4     Mean       90.9     0.21   12.1   170.3    1.9   11.7   0.51   0.011   2.462

   5       1        93.0     0.13   18.5   167.5    0.5    6.5   0.74   0.003   3.391
           2        96.0     0.15   22.5   170.5    0.5    0.5   0.98   0.003   4.802
           3        99.0     0.17   23.5   175.5    0.5    1.5   0.94   0.003   4.303
           4        93.0     0.20   21.5   165.5    0.5    2.5   0.90   0.003   4.030
           5        96.0     0.25   19.5   173.5    0.5    3.5   0.85   0.003   3.784
         Mean       95.4     0.18   21.1   170.5    0.5    2.9   0.88   0.003   4.062

   6       1        86.5     0.14   17.5   151.5    6.5    7.5   0.70   0.041   2.275
           2        95.5     0.15   19.5   171.5    0.5    4.5   0.81   0.003   3.626
           3        98.0     0.15   21.5   176.5    0.5    3.5   0.86   0.003   3.828
           4        97.5     0.16   22.5   173.5    1.5    1.5   0.94   0.009   3.921
           5        96.5     0.17   18.5   175.5    0.5    6.5   0.74   0.003   3.391
         Mean       94.8     0.15   19.9   169.7    1.9    4.7   0.81   0.011   3.408

   7       1        86.5     0.17    7.5   163.5    7.5   17.5   0.30   0.044   1.227
           2        93.5     0.17   12.5   175.5    0.5   12.5   0.50   0.003   2.748
           3        95.0     0.28   14.5   176.5    0.5   10.5   0.58   0.003   2.950
           4        93.0     0.29   11.5   176.5    0.5   13.5   0.46   0.003   2.648
           5        93.5     0.31   12.5   175.5    1.5   12.5   0.50   0.008   2.409
         Mean       92.3     0.24   11.7   173.5    2.1   13.3   0.47   0.012   2.396
Table 4--Continued

Subject  Session  % Correct  Rate     H      CA     FA      M    p[H]    p[F]     d'

Group D/0.12 (Signal Probability = 0.12; Delayed Feedback)

          SD         2.1     0.03    3.9     2.4    0.7    4.0   0.16   0.004   0.629
    Grand Mean      94.1     0.19   16.2   172.7    1.7    7.7   0.66   0.010   2.519

Group N/0.12 (Signal Probability = 0.12; No Feedback)

   1       1        90.5     0.24   11.5   170.5    2.5   12.5   0.48   0.014   2.276
           2        94.0     0.28   12.5   176.5    0.5   12.5   0.40   0.003   2.748
           3        92.0     0.27    9.5   176.5    0.5   14.5   0.34   0.003   2.336
           4        93.5     0.39   11.5   176.5    0.5   13.5   0.46   0.003   2.648
           5        93.0     0.39   11.5   175.5    1.5   13.5   0.46   0.008   2.309
         Mean       92.6     0.31   10.9   175.3    1.1   13.3   0.43   0.006   2.463

   2       1        86.5     0.26   13.5   160.5   10.5   10.5   0.56   0.061   1.706
           2        75.5     0.25    6.5   145.5    1.5   14.5   0.31   0.010   1.830
           3        90.0     0.48    4.5   175.5    0.5   19.5   0.19   0.003   1.870
           4        89.0     0.55    2.5   176.5    0.5   22.5   0.10   0.003   1.466
           5        90.5     0.52    7.5   174.5    0.5   17.5   0.30   0.003   2.224
         Mean       86.3     0.41    6.9   166.5    2.7   16.9   0.29   0.016   1.819

   3       1        90.0     0.17   17.5   163.5   12.5    7.5   0.70   0.071   2.000
           2        93.5     0.29   11.5   176.5    0.5   13.5   0.46   0.003   2.648
           3        93.5     0.33   12.5   175.5    1.5   12.5   0.50   0.008   2.409
           4        91.5     0.32    7.5   176.5    0.5   17.5   0.30   0.003   2.224
Table 4-Continued

Subject  Session  % Correct  Rate   H     CA     FA    M     p[H]  p[F]   d'

Group N/0.12 (Signal Probability = 0.12; No Feedback)

3          5       91.0      0.37   6.5  176.5   0.5  18.5  0.26   0.003  2.105
3        Mean      91.9      0.30  11.1  173.7   3.1  13.9  0.44   0.018  2.277
4          1       91.5      0.22   9.5  175.5   1.5  15.5  0.38   0.008  2.104
4          2       92.5      0.28   9.5  176.5   0.5  14.5  0.40   0.003  2.495
4          3       95.5      0.33  15.5  176.5   0.5   9.5  0.62   0.003  3.053
4          4       93.0      0.33  10.5  176.5   0.5  13.5  0.42   0.003  2.546
4          5       93.0      0.37  11.5  175.5   0.5  13.5  0.46   0.008  2.309
4        Mean      93.1      0.31  11.3  176.1   0.7  13.5  0.46   0.005  2.501
5          1       92.0      0.14  15.5  170.5   0.5   8.5  0.65   0.003  3.133
5          2       93.0      0.24  11.5  176.5   0.5  13.5  0.46   0.003  2.648
5          3       95.5      0.26  18.5  173.5   0.5   5.5  0.77   0.003  3.487
5          4       88.5      0.23  13.5  165.5   2.5   8.5  0.61   0.015  2.333
5          5       94.5      0.27  16.5  174.5   0.5   8.5  0.66   0.003  3.160
5        Mean      93.1      0.23  15.1  172.5   0.8   8.9  0.63   0.005  2.952
6          1       95.5      0.21  15.5  176.5   0.5   9.5  0.62   0.003  3.053
6          2       93.5      0.24  12.5  175.5   0.5  12.5  0.50   0.003  2.748
6          3       94.0      0.27  12.5  176.5   0.5  12.5  0.50   0.003  2.748
6          4       94.0      0.36  12.5  176.5   0.5  12.5  0.50   0.003  2.748
6          5       90.0      0.66   4.5  176.5   0.5  20.5  0.82   0.003  3.663
Table 4-Continued

Subject  Session     % Correct  Rate   H     CA     FA    M     p[H]  p[F]   d'

Group N/0.12 (Signal Probability = 0.12; No Feedback)

6        Mean          93.4     0.35  11.5  176.3   0.5  13.5  0.59   0.003  2.992
7          1           92.5     0.21  10.5  175.5   0.5  14.5  0.42   0.003  2.546
7          2           96.0     0.26  16.5  176.5   0.5   8.5  0.66   0.003  3.160
7          3           91.0     0.21   6.5  176.5   0.5  18.5  0.26   0.003  2.105
7          4           93.0     0.39  10.5  176.5   0.5  14.5  0.42   0.003  2.546
7          5           97.0     0.39  18.5  176.5   0.5   6.5  0.74   0.003  3.391
7        Mean          93.9     0.29  12.5  176.3   0.5  12.5  0.50   0.003  2.750
         SD             2.6     0.05   2.4    3.5   1.1   2.4  0.11   0.006  0.411
         Grand Mean    92.0     0.31  11.3  173.8   1.3  13.2  0.48   0.008  2.536
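The d' values in Table 4 follow the standard signal-detection definition, d' = z(p[H]) - z(p[F]) (Tanner & Swets, 1954; Macmillan & Creelman, 1991). A minimal sketch, taking the tabled proportions at face value (the correction applied to the raw frequencies before rounding is not restated here):

```python
from statistics import NormalDist

def d_prime(p_hit: float, p_fa: float) -> float:
    """Sensitivity d' = z(hit proportion) - z(false-alarm proportion)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(p_hit) - z(p_fa)

# Rows in Table 4 with p[H] = 0.50, p[F] = 0.003 list d' = 2.748;
# rows with p[H] = 0.74, p[F] = 0.003 list d' = 3.391.
print(round(d_prime(0.50, 0.003), 3))  # ~2.748
print(round(d_prime(0.74, 0.003), 3))  # ~3.391
```

Feeding the tabled proportions back through this definition reproduces the tabled d' values to three decimals.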
Appendix D

Statistical Calculations
Table 5

Two-Factor ANOVA on Inspection Accuracy (Percentage of Correct Responses)

Source        Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal p)          67.640                   1        67.640   17.097   0.0002
B (Feedback)          11.727                   2         5.863    1.482   0.241
Interaction           16.566                   2         8.283    2.094   0.138
Within-Cell          142.422                  36         3.956
Total                238.356                  41
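The entries in Table 5 are internally consistent: each mean square is its sum of squares divided by its degrees of freedom, and the F ratio is the factor mean square over the within-cell mean square. A quick arithmetic check (a sketch, not part of the original analysis):

```python
# Recompute Table 5's statistics for factor A (signal probability).
ss_a, df_a = 67.640, 1       # between-groups sum of squares and df
ss_w, df_w = 142.422, 36     # within-cell sum of squares and df

ms_a = ss_a / df_a           # mean square for factor A
ms_w = ss_w / df_w           # within-cell mean square, ~3.956
f_ratio = ms_a / ms_w        # ~17.097, matching the tabled F ratio
print(round(ms_w, 3), round(f_ratio, 3))
```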
Table 6

Two-Factor ANOVA on Split-Half Inspection Accuracy (Mean Percent Change)

Source        Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal p)           0.013                   1         0.013    0.002   0.962
B (Feedback)         109.646                   2        54.823    9.233   0.001
Interaction            0.438                   2         0.219    0.037   0.964
Within-Cell          213.768                  36         5.938
Total                323.864                  41
Table 7

Multiple Comparisons (Tukey Procedure): Feedback Type on Split-Half Mean Percent Change in Inspection Accuracy

Group Comparison       Mean Difference (X_A - X_B)  q obtained  Decision (H0: X_A = X_B)
(Immediate - Delayed)  (-0.214 - 3.643) = -3.857        -5.925  Reject
(Immediate - None)     (-0.214 - 0.946) = -1.160        -1.782  Accept
(Delayed - None)        (3.643 - 0.946) =  2.697         4.143  Reject

q = (X_A - X_B) / sqrt(MS_W / n);  n = 14;  sqrt(MS_W / n) = 0.651;  q(.05, 3, 36) = 3.49
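The q statistic beneath Table 7 can be reproduced directly from the tabled group means (-0.214 and 3.643 for the first comparison) and the within-cell mean square from Table 6 (5.938), agreeing with the tabled value to rounding:

```python
import math

# Tukey q for the (Immediate - Delayed) comparison in Table 7.
ms_within, n = 5.938, 14            # MS_W from Table 6; n = 14 per feedback condition
se = math.sqrt(ms_within / n)       # ~0.651, as stated beneath Table 7

q_imm_del = (-0.214 - 3.643) / se   # ~-5.92 (tabled as -5.925)
print(round(se, 3), round(q_imm_del, 2))
# |q| exceeds the critical value q(.05, 3, 36) = 3.49, so H0 is rejected
```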
Table 8

Two-Factor ANOVA on Inspection Rate (Responses per Second)

Source        Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal p)           0.007                   1         0.007    1.247   0.116
B (Feedback)           0.063                   2         0.032    5.917   0.006
Interaction            0.052                   2         0.026    4.894   0.013
Within-Cell            0.193                  36         0.005
Total                  0.316                  41
Table 9

One-Factor ANOVAs: Signal Probability and Feedback Type on Mean Response Rate

Source              Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value

Signal Probability = 0.05
Between (Feedback)           0.050                   2         0.025    3.333   0.0587
Within (Feedback)            0.135                  18         0.008
Total (Feedback)             0.185                  20

Signal Probability = 0.12
Between (Feedback)           0.066                   2         0.033    5.635   0.0126
Within (Feedback)            0.058                  18         0.003
Total (Feedback)             0.124                  20
Table 10

Multiple Comparisons (Tukey Procedure): Feedback Type (Signal p = 0.12) on Response Rates

Group Comparison       Mean Difference (X_A - X_B)  q obtained  Decision (H0: X_A = X_B)
(Immediate - Delayed)  (0.29 - 0.19) =  0.10             5.000  Reject
(Immediate - None)     (0.29 - 0.31) = -0.02            -1.000  Accept
(Delayed - None)       (0.19 - 0.31) = -0.12            -6.000  Reject

q = (X_A - X_B) / sqrt(MS_W / n);  n = 7;  sqrt(MS_W / n) = 0.02;  q(.05, 3, 18) = 3.61
Table 11

Two-Factor ANOVA on Response Sensitivity (d')

Source                  Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal Probability)           1.452                   1         1.452    4.597   0.041
B (Feedback)                     0.958                   2         0.479    1.516   0.233
A x B Interaction                0.663                   2         0.332    1.050   0.360
Within-Cell                     11.372                  36         0.316
Total                           14.446                  41
Table 12

Two-Factor ANOVA on Number of False Alarms

Source                  Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal Probability)           3.602                   1         3.602    1.025   0.318
B (Feedback)                    10.818                   2         5.409    1.539   0.228
A x B Interaction                4.587                   2         2.294    0.653   0.528
Within-Cell                    126.497                  36         3.514
Total                          145.504                  41
Table 13

Two-Factor ANOVA on Number of Misses

Source        Sum of Squares  Degrees of Freedom  Mean Squares  F Ratio  p value
A (Signal p)         282.881                   1       282.881   30.596   0.0001
B (Feedback)          53.956                   2        26.978    2.918   0.067
Interaction           30.310                   2        15.155    1.639   0.208
Within-Cell          332.846                  36         9.246
Total                699.993                  41
BIBLIOGRAPHY

Adams, J. A. (1987). Criticisms of vigilance research: A discussion. Human Factors, 29(6), 737-740.

Agnew, J. L., & Redmon, W. K. (in press). Contingency specifying stimuli: The role of "rules" in Organizational Behavior Management. Journal of Organizational Behavior Management.

Annett, J. (1969). Feedback and human behavior. Baltimore, MD: Penguin.

Apple Computer, Inc. (1989). HyperCard (version 1.2.2). Cupertino, CA: Author.

Badalamente, R. V. (1969). A behavioral analysis of an assembly line inspection task. Human Factors, 11(4), 339-352.

Baddeley, A. D., & Colquhoun, W. P. (1969). Signal probability and vigilance: A reappraisal of the 'signal-rate' effect. British Journal of Psychology, 60(2), 169-178.

Bakan, P. (1955). Discrimination decrement as a function of time in a prolonged vigil. Journal of Experimental Psychology, 50, 387-390.

Balcazar, F., Hopkins, B. L., & Suarez, Y. (1986). A critical, objective review of performance feedback. Journal of Organizational Behavior Management, 8(3/4), 65-89.

Bertelson, P. P., Boons, J., & Renkin, A. (1965). Vitesse libre et vitesse imposée dans une tâche simulant le tri mécanique de la correspondance [Self-paced and forced-paced sorting in a simulated mail sorting task]. Ergonomics, 8, 3-22.

Bilodeau, E. A., & Bilodeau, I. M. (1958). Variation of temporal intervals among critical events in five studies of knowledge of results. Journal of Experimental Psychology, 55(6), 603-612.

Bilodeau, E. A., & Ryan, F. J. (1960). A test for interaction of delay of knowledge of results and two types of interpolated activity. Journal of Experimental Psychology.

Chaney, F. B., & Teel, K. S. (1967). Improving inspector performance through training and visual aids. Journal of Applied Psychology, 51(4), 311-315.

Chhokar, J. S., & Wallin, J. A. (1984). A field study of the effect of feedback frequency on performance. Journal of Applied Psychology, 69(3), 524-530.

Colquhoun, W. P. (1961). The effect of unwanted signals on performance in a vigilance task. Ergonomics, 4, 41-51.

Conrad, R. (1955). Comparison of paced and unpaced performance at a packing task. Occupational Psychology, 29, 15-24.

Craig, A. (1980). Effect of prior knowledge of signal probabilities on vigilance performance at a two-signal task. Human Factors, 22(3), 361-371.

Craig, A. (1981). Monitoring for one kind of signal in the presence of another: The effects of signal mix on detectability. Human Factors, 23(2), 191-197.

Deming, W. E. (1975). On some statistical aids toward economic production. Interfaces, 5, 1-15.

Drury, C. G., & Addison, J. L. (1973). An industrial study of the effects of feedback and fault density on inspection performance. Ergonomics, 16(2), 159-169.

Drury, C. G., & Fox, J. G. (Eds.). (1975). Human reliability in quality control. London: Taylor & Francis.

Duncan, P. K., & Bruwelheide, L. R. (1986). Feedback: Use and possible behavioral functions. Journal of Organizational Behavior Management, 8(3/4), 91-113.

Dyal, J. A. (1964). Effects of delay of knowledge of results in a line-drawing task. Perceptual and Motor Skills, 18, 433-434.

Dyal, J. A. (1965). Confounded delay of knowledge of results design revisited: A reply to Bilodeau. Perceptual and Motor Skills, 20, 173-174.

Fortune, B. D. (1979). The effects of signal probability on inspection accuracy in a microscopic inspection task: An experimental investigation. Academy of Management Journal, 22(1), 118-128.

Fox, J. G., & Haslegrave, C. M. (1969). Industrial inspection efficiency and the probability of a defect occurring. Ergonomics, 12(5), 713-721.

Gallwey, T. J., & Drury, C. G. (1986). Task complexity in visual inspection. Human Factors, 28(5), 595-606.

Greenspoon, J., & Foreman, S. (1956). Effect of delay of knowledge of results on learning a motor task. Journal of Experimental Psychology, 51(1), 226-228.

Harris, D. H. (1968). Effect of defect rate on inspection accuracy. Journal of Applied Psychology, 52(5), 377-379.

Harris, D. H. (1969). The nature of industrial inspection. Human Factors, 11(2), 139-148.

Harris, D. H., & Chaney, F. B. (1969). Human factors in quality assurance. New York: Wiley.

Henry, G. O., & Redmon, W. K. (1991). The effects of performance feedback on the implementation of a statistical process control (SPC) program. Journal of Organizational Behavior Management, 11(2), 23-46.

Holland, J. G. (1958). Human vigilance. Science, 128, 61-67.

Jenkins, H. M. (1957). The effect of signal-rate on performance in vigilance monitoring. Journal of Experimental Psychology, 51, 647-661.

Krigsman, N., & O'Brien, R. M. (1987). Quality circles, feedback and reinforcement: An experimental comparison and behavior analysis. In T. Mawhinney (Ed.), Organizational behavior management and statistical process control: Theory, technology, and research (pp. 67-92). New York: Haworth.

Mackie, R. R. (1984). Research relevance and the information glut. In F. A. Muckler (Ed.), Human Factors Review (pp. 1-11). Santa Monica, CA: Human Factors Society.

Mackie, R. R. (1987). Vigilance research: Are we ready for countermeasures? Human Factors, 29(6), 707-723.

Mackworth, N. H. (1950). Researches on the measurement of human performance. Medical Research Council Special Report, No. 268. London: H. M. Stationery Office.

Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user's guide. New York: Cambridge University Press.

McCarthy, M. (1978). Decreasing the incidence of "high bobbins" in a textile spinning department through group feedback procedure. Journal of Organizational Behavior Management, 1(2), 150-154.

McFarling, L. H., & Heimstra, N. W. (1975). Pacing, product complexity, and task perception in simulated inspection. Human Factors, 17(4), 361-367.

McGuigan, F. J. (1959). The effect of precision, delay, and schedules of knowledge of results on performance. Journal of Experimental Psychology, 58(1), 79-84.

Merwin, G. A., Thomason, J. A., & Sanford, E. E. (1989). A methodology and content review of organizational behavior management in the private sector: 1978-1986. Journal of Organizational Behavior Management, 10(1), 39-57.

O'Hara, K., Johnson, C. M., & Beehr, T. A. (1985). Organizational behavior management in the private sector: A review of empirical research and recommendations for further investigation. Academy of Management Review, 10(4), 848-864.

Prue, D. M., & Fairbank, J. A. (1981). Performance feedback in organizational behavior management: A review. Journal of Organizational Behavior Management, 3(1), 1-16.

Saltzman, I. J. (1951). Delay of reward and human verbal learning. Journal of Experimental Psychology, 41, 437-439.

Salvendy, G., & Humphreys, A. P. (1979). Effects of personality, perceptual difficulty, and pacing of a task on productivity, job satisfaction, and physiological stress. Perceptual and Motor Skills, 49, 219-222.

Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34-50.

Synfelt, D. L., & Brunskill, C. T. (1986). Multipurpose inspection task simulator. Computers and Industrial Engineering, 10(4), 273-282.

Tanner, W. P., & Swets, J. A. (1954). A decision-making theory of visual detection. Psychological Review, 61, 401-409.

Teitelbaum, P. (1967). The effects of differential delay of knowledge of results in a motor learning task. Unpublished master's thesis, Western Michigan University, Kalamazoo, MI.

Wiener, E. L. (1984). Vigilance and inspection. In J. S. Warm (Ed.), Sustained attention in human performance (pp. 207-246). New York: Wiley.

Wiener, E. L. (1987). Application of vigilance research: Rare, medium, or well done? Human Factors, 29(6), 725-736.