A GENERAL MODEL OF MIXED-INITIATIVE HUMAN-MACHINE SYSTEMS
Victor Riley
Honeywell Systems and Research Center
Minneapolis, MN
ABSTRACT
The increasing role of automation in human-machine systems requires modelling approaches which are flexible enough to systematically express a large range of automation levels and assist the exploration of a large range of automation issues. A General Model of Mixed-Initiative Human-Machine Systems is described, along with a corresponding automation taxonomy, which: provides a framework for representing human-machine systems over a wide range of complexity; forms the basis of a dynamic, pseudo-mathematical simulation of complex interrelationships between situational and cognitive factors operating in dynamic function allocation decisions; and can guide methodical investigations into the implications of decisions regarding system automation levels.
INTRODUCTION
As technological advances make more sophisticated forms of automation available, system designers are facing increasingly difficult decisions regarding how much automation to use and when to use it. The importance of these decisions is demonstrated from time to time by catastrophic or near-catastrophic events in which either the automation fails to perform properly or an unforeseen consequence of the interaction between human operator and automation leads to an error. Many examples of such events in aviation have been documented by Wiener (1986).
In the past, automation decisions could be made by assigning to human or machine whatever task each was better able to perform. Now, this criterion is often difficult to apply. Furthermore, designers must consider such issues as the operator's loss of situation awareness due to relying too heavily on automation, conflicts between the operator's decisions and the machine's decisions, or tendencies of the operator to override or defeat the automation due to lack of trust in it. To help guide investigations into these issues, we have developed a general model of human-machine systems that represents the information flow between a machine, an operator, and their environment, can be systematically altered to represent a wide range of automation capabilities, and serves as the basis for a dynamic simulation of human-machine interaction. This model, and the tools based on it, are intended to be applied early in system development to help establish the human-machine protocol of the system.
MODEL STRUCTURE
The Mixed-Initiative Model (MIM) is basically a loop structure with the machine represented on one side, the operator on the other, and the world, or situation, in the center. Figure 1 provides a highly simplified version of the model. As shown, there are two "autonomous" loops in which the World provides both parties with information and both parties feed information back to the World in the form of actions. There is also a "cooperative" loop between the human and machine in which information is transferred across the human-machine interface. On each side of both loops, some type of information processing occurs.
Figure 2 contains the full model structure. The basic loop structure is maintained, but the information processing functions in each of the loops have been elaborated. Before considering the model structure in more detail, it should be noted that this full representation would apply only to the most sophisticated and complex automation. In a later section, a process for reducing the model to apply to specific automation levels will be described.

As mentioned before, the World node provides information to both the machine and the operator and receives information back from both. The contents of this node depend on the system being represented. For an aircraft collision avoidance system, for example, it may contain information about own aircraft position, heading, and velocity, and the same for nearby aircraft. The collision avoidance system would receive this information (or some of it) through its World Sensors node, and the pilot would receive it (or another subset of it) from vision through the windscreen (Perceive World).

The lower path through the Machine Input quadrant contains nodes that act on information received from the World, and the upper nodes in the quadrant act on information about or from the operator. The Machine Output quadrant contains nodes that make decisions based on goals and either perform actions that change the state of the World (such as performing a maneuver) or construct displays for the operator.

In the Human Input quadrant, the operator receives information directly from the World and from the machine, both by perceiving displays (explicit communication) and by sensing machine actions (implicit communication). The operator carries out a similar inference process which results in judgments about the situation and the machine, and performs some action which affects either the machine or the World, or both. In each cycle of the loop, then, the World provides information to both machine and operator, one or both of which eventually provides an output back to the World based on information about the World, itself, and the other party. This action changes the state of the World, and new information enters the loops in the next cycle.

Error checking is provided so specific nodes can query earlier nodes in the path, or so information can be routed through selected pathways depending on the presence of conflicts. For example, if the operator's assessment of a situation based on his own observations (Perceive World, Infer World State in the Human Input quadrant) is incompatible with the apparent behavior of the machine (Perceive Machine Behavior, Perceive Displays, Infer Machine State), then the operator will either change his assessment of the World or of the machine. If he trusts the automation more than his own perceptions, as in the case of watching a radar screen, he will revise his Infer World State node based on the machine's inputs. However, if he trusts his own perceptions more than the automation, he will revise his Infer Machine State node and possibly override the automation.

The loop-like form of the model suggests that it may serve as the basis for a model of human-machine interaction dynamics. In such a model, information would flow through the loops on a regular cycle, with information in each cycle starting and ending in the World node. We have developed such a model for a simple decision aid. Before describing this model, though, we must first describe how the full MIM can be systematically reduced to represent such an aid.
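To make the cycle concrete, the following sketch steps once around the loops for a collision-avoidance-like situation. It is written in Python for illustration only; the node names follow the model, but the update rules and function signatures are invented placeholders rather than part of the MIM itself.

```python
# Hypothetical sketch of one MIM cycle: the World informs both parties,
# each infers state and acts, and the actions update the World.
from dataclasses import dataclass, field

@dataclass
class World:
    state: dict = field(default_factory=lambda: {"threat": True})

def machine_step(world_info):
    """World Sensors -> Infer World State -> Goals -> displays and action."""
    displays = {"advisory": world_info["threat"]}   # explicit communication
    action = {"maneuver": world_info["threat"]}     # output back to the World
    return displays, action

def operator_step(world_info, displays, machine_action):
    """Perceive World / Perceive Displays / Perceive Machine Behavior."""
    # If the operator's own assessment conflicts with the machine's,
    # he revises his view of the World or of the machine (error checking).
    agrees = displays["advisory"] == world_info["threat"]
    return {"override": not agrees}

def cycle(world):
    info = dict(world.state)                        # World informs both loops
    displays, m_action = machine_step(info)
    o_action = operator_step(info, displays, m_action)
    if m_action["maneuver"] and not o_action["override"]:
        world.state["threat"] = False               # action changes the World
    return world

w = World()
cycle(w)
print(w.state)  # {'threat': False}; new information enters the next cycle
```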
Figure 1: The general form of the model is a set of three loops: one for machine-world interaction, one for human-world interaction, and one for human-machine interaction.

Figure 2: The complete form of the model retains the general structure but with elaborated information processing functions.
A TAXONOMY OF AUTOMATION LEVELS
A taxonomy of automation levels applied to the MIM is
presented in Figure 3. Automation levels are considered
along two dimensions: how intelligent the automation is, and
how much autonomy it has. Each combination of a level of
intelligence and a level of autonomy is referred to as an
"automation state", and each automation state corresponds to
a unique, predefined form of the MIM.
The lowest level of intelligence is Raw Data, in which the automation does no real data processing. A Procedural machine can process information or act according to predefined procedures. A Context Responsive machine can change its behavior in response to its environment. A Personalized machine can contain a static model of a particular operator's preferences, while an Inferred Intent Responsive machine can dynamically infer operator intent based on the context and the operator's behavior. An Operator State Predictive machine can also use information about the operator's physical state, while an Operator Predictive machine can anticipate operator actions, such as errors. Note that each succeeding level of intelligence subsumes all the previous levels: in order to infer operator intent, for example, the machine must have information about both the operator and the context.

In the first six levels of autonomy, the machine has no authority to act on the World; the machine is limited to communication with the operator. Distinctions between these levels of autonomy are made based on the sophistication of the machine's information processing functions and authority to manipulate the operator's displays. The last six levels, from Servant through Autonomous, give the machine the ability to take actions. Distinctions between these levels are made based on the permission and override protocol between the operator and machine. For example, an "Associate" machine is capable of autonomous action without explicit operator permission, but the operator may always override or inhibit it. A "Partner" machine has equal override authority with the operator, and a "Supervisor" machine can override the operator, but the operator may not override it. A good example of the latter is the "stick shaker," which is intended to prevent aircraft stalls by shaking when stall is imminent and pushing itself forward with great force to recover from a stall.
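Because both dimensions are ordered and each level of intelligence subsumes its predecessors, the taxonomy lends itself to a pair of enumerations. The Python sketch below encodes only the levels named in the text; the remaining autonomy levels, and the exact numeric positions of Associate, Partner, and Supervisor within the last six, appear only in Figure 3, so those values are guesses, and the override predicate is one hypothetical reading of the protocol descriptions.

```python
from enum import IntEnum

# Levels of intelligence named in the text; each subsumes the ones below it.
class Intelligence(IntEnum):
    RAW_DATA = 1
    PROCEDURAL = 2
    CONTEXT_RESPONSIVE = 3
    PERSONALIZED = 4
    INFERRED_INTENT_RESPONSIVE = 5
    OPERATOR_STATE_PREDICTIVE = 6
    OPERATOR_PREDICTIVE = 7

# Autonomy levels named in the text. The taxonomy has twelve in all; the
# first six cannot act on the World and are not named in the prose, and the
# positions assigned to Associate/Partner/Supervisor here are guesses.
class Autonomy(IntEnum):
    SERVANT = 7       # first of the action-taking levels
    ASSOCIATE = 8     # acts without permission; operator can always override
    PARTNER = 9       # equal override authority with the operator
    SUPERVISOR = 10   # can override the operator, not vice versa
    AUTONOMOUS = 12   # last level

def operator_can_override(level: Autonomy) -> bool:
    """Hypothetical reading of the permission/override protocol."""
    return level <= Autonomy.PARTNER

# An "automation state" is one intelligence level paired with one autonomy
# level, and fixes a unique, predefined form of the MIM.
state = (Intelligence.CONTEXT_RESPONSIVE, Autonomy.ASSOCIATE)
print(state, operator_can_override(state[1]))
```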
In general, levels of intelligence are represented by the configuration of the Machine Input quadrant of the MIM, and levels of autonomy by the Machine Output quadrant. For example, a Context Responsive machine would contain only the lower path through the Machine Input quadrant; those nodes relating to information about the operator would be absent. Further, machines which cannot act on the World would have no action path back to the World. The form of the MIM which is appropriate to represent any given system concept is determined by answering a series of standard questions about the machine's capabilities. The responses to one set of questions fix the machine's level of intelligence, and those to the other its level of autonomy. Once determined, these two levels fix the configurations of the corresponding machine quadrants of the MIM, and they in turn determine the human side of the model.
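This reduction can be pictured as filtering the full node set by the answers to the capability questions. The sketch below is illustrative only: the two questions and the node lists are stand-ins for the paper's standard question set, chosen to reproduce the Context Responsive example just given.

```python
# Hypothetical sketch of reducing the full MIM to a given automation state.
# The capability questions and node names are illustrative stand-ins.

FULL_MACHINE_INPUT = ["World Sensors", "Infer World State",
                      "Operator Sensors", "Infer Operator State"]
FULL_MACHINE_OUTPUT = ["Goals", "Construct Displays", "Act on World"]

def reduce_mim(knows_operator: bool, can_act_on_world: bool):
    """Answers to capability questions fix the machine quadrants;
    the human side of the model then follows from them."""
    machine_input = [n for n in FULL_MACHINE_INPUT
                     if knows_operator or "Operator" not in n]
    machine_output = [n for n in FULL_MACHINE_OUTPUT
                      if can_act_on_world or n != "Act on World"]
    return machine_input, machine_output

# A Context Responsive decision aid: no operator model, no authority to act.
print(reduce_mim(knows_operator=False, can_act_on_world=False))
# (['World Sensors', 'Infer World State'], ['Goals', 'Construct Displays'])
```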
Figure 3: A taxonomy of automation levels. For any given system concept, a level of intelligence combined with a level of autonomy describes the system's automation "state".
A PROOF-OF-CONCEPT SIMULATION
To illustrate, consider a simple decision aid. It cannot
perform any actions itself, other than advising the operator,
but it can process available information and provide a
recommendation to the operator. It has information about the
state of the situation available to it, but knows nothing about
the operator.
The MIM representation of such an aid is shown in Figure 4. The nodes relating to knowledge about the World are present, as are decision-making and display nodes. The operator, as represented, is capable of directly observing the World, and the final output sent back to the World represents the decisions made by the operator after taking the aid's recommendations into consideration.
To construct a dynamic simulation from this structure, we used the Dynamo (Pugh-Roberts, 1986) simulation language, which is capable of representing the behavior over time of interdependent variables governed by differential-like equations. These variables are expressed as levels and rates, and Dynamo computes values for each variable for each time increment based on the interdependent relationships between them.
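Dynamo's level-and-rate scheme is straightforward to mimic in a general-purpose language. The following Python sketch is a generic fixed-increment integrator of that kind, not a reconstruction of the paper's actual equations; the example system and its gains are invented.

```python
# Generic sketch of Dynamo's level/rate scheme: each level is advanced each
# time increment by a rate computed from the current levels. Values are
# clamped to the 0-1 scale used for all parameters in this model.

def simulate(levels, rate_fns, dt=1.0, steps=160):
    """levels: name -> initial value; rate_fns: name -> f(levels) = d(level)/dt."""
    history = [dict(levels)]
    for _ in range(steps):
        rates = {name: f(levels) for name, f in rate_fns.items()}
        for name, rate in rates.items():
            levels[name] = min(1.0, max(0.0, levels[name] + rate * dt))
        history.append(dict(levels))
    return history

# Illustrative two-variable system: trust drifts toward observed accuracy.
rate_fns = {
    "trust": lambda s: 0.05 * (s["accuracy"] - s["trust"]),
    "accuracy": lambda s: 0.0,   # held constant in this toy example
}
out = simulate({"trust": 0.3, "accuracy": 0.9}, rate_fns, steps=40)
print(round(out[-1]["trust"], 2))   # trust has drifted most of the way to 0.9
```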
The parameters used in our simulation are also shown in Figure 4. The parameters of interest in the World node are workload, the level of risk associated with any given decision, task complexity, and time. The machine's performance level is represented by a "reliability" parameter, and the operator's assessment of the machine's reliability is represented by the "perceived reliability" parameter in the operator's Machine Model node. Other characteristics of the operator that are of interest are his perceptions of risk and own workload, his skill level, his own performance level (decision accuracy), and his level of self-confidence.

Figure 4: MIM depiction of a decision aid task. Model parameters are indicated with arrows showing the components in which they reside.
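For reference, the parameters just listed can be grouped by the node in which they reside, as in this sketch; the field names paraphrase the paper's labels, and the 0-to-1 scaling anticipates the description of Figure 5 below.

```python
from dataclasses import dataclass

# The simulation parameters named in the text, grouped by node.
# All values are scaled 0-1, as percentages of a resource or probabilities.

@dataclass
class WorldParams:
    workload: float          # demand placed on the operator
    risk: float              # risk associated with a given decision
    task_complexity: float   # (time advances with the simulation clock)

@dataclass
class MachineParams:
    reliability: float       # the machine's actual performance level

@dataclass
class OperatorParams:
    perceived_reliability: float   # resides in the operator's Machine Model node
    perceived_risk: float
    perceived_workload: float
    skill: float
    decision_accuracy: float       # the operator's own performance level
    self_confidence: float
```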
The hypothesized relations between these parameters are shown in Figure 5. Each parameter that only originates outputs is an independent variable; the parameter that only receives inputs is the dependent variable; and the others are interdependent. Each parameter is scaled from 0 to 1 and is expressed either as a percentage of a resource (such as workload capacity) or as a probability (such as reliability). It should be noted that the parameters and their relationships are not intended to be accurate representations of known processes but rather a "reasonable" hypothesis intended to explore the potential for a pseudo-mathematical treatment of essentially qualitative variables. Some specific interrelationships between variables and explanations of hypothesized variable behavior are given in Boettcher et al. (1989), elsewhere in these Proceedings.

Figure 5: Functional relationships between variables. Output-only variables are independent, variables with both inputs and outputs are interdependent, and variables with only inputs are dependent.
The simulation model contains fourteen variables, of which six are independent, and eleven equations. An example of output from this model is shown in Figure 6. Under this scenario, the operator begins with a dubious opinion of the aid's reliability, but as the aid performs well (Macc, or machine accuracy, is high), his degree of Trust in the aid grows. At T=80 the aid fails, and system accuracy drops sharply as well. After realizing that the aid has failed, the operator loses Trust in the aid and transfers decision-making responsibility back to himself, accounting for the gradual rise in system accuracy during the automation failure. When the aid starts performing correctly again, system accuracy rises, but not to its previous level, due to the operator's low Trust in the aid. According to our hypothesis, operator Trust takes longer to be rebuilt than to be destroyed, and the operator's overall opinion of the aid will be less sensitive to change as his experience with it increases; this is reflected in the gradual, negatively accelerated rise in the Trust curve following the failure.

Figure 6: A sample of simulation output. Machine accuracy is an independent variable, expressed in terms of percent correct decisions. System accuracy is the joint human-machine decision accuracy, in the same units. Trust in aid is the operator's subjective judgment that the next decision made by the machine will be correct, expressed as a probability. Compliance with aid is the probability that the operator will comply with the aid's recommendation, based on the relationship between his trust in the aid and his confidence in his own decision. If trust is greater than confidence, compliance will be high; if confidence is greater than trust, compliance will be low.
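The asymmetry described here, trust quickly destroyed but slowly and ever more reluctantly rebuilt, together with the trust-versus-confidence compliance rule from the Figure 6 caption, can be sketched as follows. The gains and functional forms are invented for illustration; the paper's eleven equations are not reproduced here.

```python
# Hypothetical sketch of the trust/compliance dynamics described in the text:
# trust falls quickly after failures, rebuilds slowly (and more slowly as
# experience accumulates), and compliance follows trust vs. self-confidence.

def run(machine_acc, trust=0.3, confidence=0.7):
    experience = 0.0
    for t, macc in enumerate(machine_acc):
        experience += 1.0
        if macc < trust:                      # aid performing worse than expected
            trust += 0.30 * (macc - trust)    # trust is destroyed quickly
        else:                                 # rebuilt slowly, decelerating with experience
            trust += (0.05 / (1 + 0.02 * experience)) * (macc - trust)
        compliance = 1.0 if trust > confidence else 0.2
        yield t, round(trust, 3), compliance

# Aid works (accuracy 0.9), fails at T=80 for 40 steps, then recovers.
profile = [0.9] * 80 + [0.1] * 40 + [0.9] * 40
for t, trust, comp in run(profile):
    if t % 40 == 0:
        print(t, trust, comp)
```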
By manipulating automation failure profiles, operator workload levels, risk levels, task complexity, and operator skill levels, alone or in combination, a wide range of complex behavior can be observed. The results of this effort suggest that this approach shows promise for addressing issues of complex automation and human-machine interaction dynamics. The next step in this effort is to experimentally validate such a model.
SUMMARY
A general model of human-machine systems that is intended
to represent a large range of automation levels and support
exploration into a large range of automation issues has been
presented. The model can be systematically modified,
according to predefined rules, to represent candidate
automation levels of specific system concepts, and is
therefore intended to be used in front-end analysis to establish
human-machine protocols and to identify automation issues
associated with those concepts. Also, a proof-of-concept
simulation which demonstrates the potential utility of the
model for investigating human-machine interaction dynamics
has been presented.
REFERENCES

Boettcher, Kevin, Robert North, and Victor Riley (1989). "On Developing Theory-Based Functions to Moderate Human Performance Models in the Context of Systems Analysis," in Proceedings of the 1989 Human Factors Society Meeting, Denver, CO.
Pugh-Roberts Associates (1986). Professional DYNAMO
Reference Manual. Cambridge, MA: Pugh-Roberts
Associates, Inc.
Wiener, Earl L. (1986). "Fallible Humans and Vulnerable Systems: Lessons Learned from Aviation." NATO Advanced Research Workshop on Failure Analysis of Information Systems, Bad Windsheim, Germany.
ACKNOWLEDGEMENT
The author wishes to acknowledge the contributions of Chris
Collins to this work.