Acta Psychologica 56 (1984) 29-48
North-Holland

A THEORY OF REQUISITE DECISION MODELS *

Lawrence D. PHILLIPS
London School of Economics and Political Science, UK
A requisite decision model is defined as a model whose form and content are sufficient to solve a
particular problem. The model is constructed through an interactive and consultative process
between problem owners and specialists (decision analysts). The process of generating the model
uses participants’ sense of unease about current model results to further development of the model.
Sensitivity analyses facilitate the emergence of new intuitions about the problem; when no new
intuitions arise, the model is considered requisite. At all stages of development, the model
represents the social reality of the shared understanding of the problem by the problem owners.
The goal of creating a requisite model is to help construct a new reality, to create a future.
Validating a requisite decision model is done with reference to a requisite validation model
whose form will be recognizably multi-attributed and whose content may draw on a variety of
disciplines both scientific and clinical. A requisite model is more likely to be adequate if problem
owners contributing to its development represent a variety of views, if the adversarial process is
used to advantage, and if the specialist can provide a neutral perspective and setting. The role of
decision analysis is to provide a framework for the development of a coherent model and to
provide structure to thinking. While requisite models may be applicable in other areas of social
science, they certainly highlight the need for a psychology of what people can do.
Several years ago a visiting psychologist who was being shown an
expert system for insurance underwriting commented on the way in
which knowledge was encoded as subjectively-assessed prior probabilities and likelihoods (Phillips and Wisniewski 1983) by saying “But
Tversky and Kahneman have shown that you can’t do that”. Although
I explained that the system had been tested in the field and been found
satisfactory, I have ever since been left with a sense of unease that
judgemental modelling is not quite like other kinds of modelling in
science. For even if every study ever conducted on probability assessment showed the inappropriate use of heuristics leading to seriously distorting biases, one could still not conclude that the probability judgements of the experienced underwriters in our work were biased. This is not merely a question of the generalizability of research findings from one subject population to another, or from one task to a different one. It is more fundamentally a limitation of descriptive research that tells us only what people do do, not what they can do. There are deep and serious issues here which I wish to explore in this paper.

* I am most grateful for helpful comments on earlier drafts from Ward Edwards, Simon French, Patrick Humphreys, John Michon, Ronald Stansfield, Stephen Watson, Stuart Wooler and two anonymous referees.
Author's address: L.D. Phillips, Decision Analysis Unit, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, UK.
0001-6918/84/$3.00 © 1984, Elsevier Science Publishers B.V. (North-Holland)
A case study
It will be helpful to develop these issues from a case study. A company
has established a special study group to define the marketing opportunities for a class of products that is new to the company. Although the
group has been working for six months, they are not yet in a position to
recommend whether or not the company should try to break into this
market, and if so, with what particular product. The group is finding it
difficult to make further progress, so the Management Sciences Group
of the company recommends that potential products currently under
consideration be evaluated in a decision conference. This is an intensive
two-day problem-solving session in which the problem owners in a live
decision problem are helped by a decision analyst to develop a model
of the problem, to encode the judgements required by the model, and to
conduct numerous sensitivity analyses with the aid of an on-line
computer (Ring 1980). Through successive refinements of the model,
new intuitions invariably emerge about the problem, and often an
implementable solution is reached.
At the decision conference, representatives from the company were
able to formulate 10 possible products that could be sold. A multi-attribute value model consisting of 28 attributes, 12 relating to costs and
16 to benefits, allowed comparisons of the potential products to be
made. The initial result is shown in fig. 1, where each dot in the space
represents the weighted average cost and weighted average benefit of
each product. The ideal product would be in the upper left corner: low
cost and high benefit to the company. Realizable products usually
range from lower left, low cost-low benefit, to upper right, high
cost-high benefit. Note that one product, the circled dot, dominates all
the others at this level of the analysis. This came as a shock to the
group, for no-one had anticipated that this particular product would be so attractive. Worse, it is a product that the company would not consider actually marketing; it had been included, even though it was given a negative evaluation when one person in the group mentioned it at the start, because the decision analyst suggested that it would come out poorly in the overall evaluation and so would help to validate the model. Note, too, the negative slope indicated by the cluster of dots within the dashed line. They show that the more expensive a product is to the company, the less benefit it will provide. That result was completely counter-intuitive.

Fig. 1. MAU evaluation of potential products (weighted average cost, horizontal axis, against weighted average benefit, vertical axis).
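The kind of evaluation shown in fig. 1 can be sketched in a few lines. The products, attribute weights and 0-100 value scores below are hypothetical, not the company's actual assessments; they serve only to show how weighted averages and a dominance check of this sort work.

```python
# Hypothetical illustration of a multi-attribute value evaluation:
# weighted-average cost and benefit per product, plus a dominance check.

def weighted_score(scores, weights):
    """Weighted average of 0-100 attribute scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def dominates(a, b):
    """a dominates b: no worse on cost and benefit, strictly better on one."""
    return (a["cost"] <= b["cost"] and a["benefit"] >= b["benefit"]
            and (a["cost"] < b["cost"] or a["benefit"] > b["benefit"]))

cost_weights = [3, 2, 1]        # e.g. capital, production, distribution
benefit_weights = [4, 2, 2]     # e.g. revenue, market fit, growth

raw = {   # product -> (cost scores, benefit scores), all invented
    "A": ([20, 30, 10], [80, 70, 60]),
    "B": ([60, 50, 40], [55, 40, 50]),
    "C": ([80, 70, 90], [40, 30, 35]),
}
products = {
    name: {"cost": weighted_score(c, cost_weights),
           "benefit": weighted_score(b, benefit_weights)}
    for name, (c, b) in raw.items()
}

for name, p in products.items():
    if all(dominates(p, products[o]) for o in products if o != name):
        print(f"product {name} dominates all others")
```

With these made-up numbers, product A plays the role of the circled dot in fig. 1: cheaper and more beneficial than every alternative.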
In spite of these unanticipated results, the company did not reject the
analysis or the model. Further discussion revealed that the low-cost,
high-benefit product is an ingredient of their current products. The
company is very experienced in buying, storing and processing this
ingredient, so giving it a special treatment and wrapping a cellophane
bag around it would be very cost-effective. The negative slope was
thought to be caused by constraints in the current methods of production. Extensive sensitivity analyses examined the conditions under
which some of the dominated products could be moved upwards and to
the left, but all assumptions that improved the dominated products also
made the dominant one better, too. Partly as a result of the decision
conference, further exploration within the product class changed direction, the new products group was dissolved and responsibility for exploration of alternative new products was assigned to the Management Sciences Group.
Models and models
Decision theorists are fond of contrasting two types of decision models,
descriptive and normative. Roughly, descriptive models tell us what
people actually do, normative models what they should do. The new-product model is neither. In no sense did it describe the behaviour of
any of the participants either before or after the decision conference.
Furthermore, the participants rejected outright possible development of
the dominant product; it was considered far too hum-drum for a
company that prides itself on selling unique, value-for-money products.
Thus, the model was neither normative nor optimal because a relatively
important attribute, company image, had been omitted. And the omission could not be attributed to any limitation due to bounded rationality, so the model was not a satisficing model.
What, then, is the nature of the model that was developed? Indeed,
could it be considered a model at all? Quite possibly it was not. The
1933 edition of the Oxford English Dictionary and its Supplement offer
17 definitions of ‘model’ considered as a noun, and the 1976 Supplement adds about 10 more. Most of these definitions fall into two
categories: those that consider a model as a representation of something, and those that take a model as a standard, usually of some ideal,
as in a ‘model mother’. We can quickly reject this latter category as
inapplicable, for the new-products model was found to be seriously
incomplete. If the former category is applicable, then we must ask,
‘Representation of what?’ There is no existing physical reality that can
be identified, though there are potentially-realizable products that are
being evaluated. The model is not a representation of those products.
Rather, the model attempts to capture the value judgements, and their
relative importance, of the group about the various advantages and
disadvantages of the potential products. The model is ‘about’ the
judgements of the group. Although no person in the group would
necessarily agree with all the judgements, the model expresses a social
reality that is evolving as the group works. This social reality is not an
ideal, merely the current working agreement among the members, some
of whom may temporarily be suspending their disagreements with parts
of the model to see whether their differing positions will affect the
overall evaluation in subsequent sensitivity analyses. If these differences
are acknowledged by the group and are held as alternative representations, then there may be several social realities in existence at any one
time.
The model, then, is about a shared social reality. In problem-solving,
there can be no ‘objective’ problem, only, at best, a given problem
statement like that given by the company to the new-products group.
This problem statement, which may include objectives and aims, bounds
the problem but is not itself the problem. Each person creates an
internal representation of the problem (Cliff and Young 1968; Phillips
1982a, 1984) bringing to bear on the initial problem statement any
experience and knowledge that seems relevant, and it is in a group
setting like a decision conference that the differing perspectives become
apparent. But unlike the varied perceptions of a distant mountain
brought about by differences in vantage point, where problem solving is
concerned there is no real problem external to the observers. Thus, the
new-products model could not have been a representation of some
reality external to the participants, though certain aspects of potentially-realizable reality, like capital costs, were certainly considered.
Instead, the model, in its final state, represented a socially-shared view
of the problem. It was not the view itself, for each individual would
have claimed a deeper understanding of at least some aspects of the
problem than either was or could have been captured by the model.
The Latin root of ‘model’ means ‘small measure’. In this respect, the
new-product model was truly a model; it was a small measure, a lesser
reality. Any decision model must be a small measure, for as Savage
(1954) explains in his discussion of small-worlds, a small-world consequence is actually a grand-world act. In other words, a small-world is
not just a bounded grand-world, it is also an idealization resulting from
the necessary blurring of grand-world distinctions.
This small-world model of a shared social reality has the potential
for contributing to the creation of a new reality. The new-product
model showed that the company would be ill-advised to proceed with
the manufacture and marketing of a product of this type, at least with
the present constraints on production. The models generated in other
decision conferences have been used to guide subsequent decision
making of a more positive nature. One company, for example, invested
considerable sums of money in modernizing their existing plants and in
building a new one. In all cases, the models played a creative role, for
they helped to generate a subsequently-realized reality. This creative
role of the model in problem-solving would appear to differ from the
descriptive role of models in science, but it should be noted that even in
science models play a creative role in theory building, as Harre (1976)
has observed.
Even if one grants the essentially creative role of models in problem
solving, a formidable problem would appear to call into question the
use of the term ‘model’ itself. Many philosophers argue that there can
be no direct or indirect causal connection between a model and the
system it represents (Kaplan 1964; Harre 1976). Insofar as a decision
model is used by people to construct and manage a new social and
physical reality, both direct and indirect causal links are evident.
The solution to this difficulty is found in the distinction between the
subject and the source of a model. The subject dictates the content of
the model, while the source provides its form. Psychologists in particular have borrowed extensively from other disciplines for the forms of
their models (Lewin’s valences from chemistry, Hull’s various ‘forces’
from mechanics, Miller’s channel capacity from information theory),
while relying on observed behaviour of participants in their experiments
to provide the content. Models whose subject and source differ are
called paramorphs. When the subject and source are the same, as in a
behaviourist’s model of the effects of reinforcement schedules, the
model is called a homeomorph. Warr (1980) notes that this distinction
is widely accepted by philosophers, but that different labels are used for
the two types of model. His ‘Model-l’ and ‘Model-2’ attempt to
capture the sense of ‘homeomorphic’ and ‘paramorphic’ models, respectively, but his usage is broader.
In a decision conference, the subject is the social reality of the
problem, and this reality is partly constructed by using decision theory
as the source of the model. The form of the model is recognizably
decision-theoretic both in its structure (decision tree, influence diagram,
multi-attribute utility, etc.) and in its generic elements (acts, events,
outcomes, consequences, attributes, etc.). But the content of the model
(element names, element linkages, assessments of probabilities, utilities
and attribute weights) comes from the participants’ understanding of
the problem itself, an understanding that evolves during the course of
modelling. It is because the source, decision theory, has no causal
connection with the representation of the problem created during the
course of the decision conference that the representation can legitimately be called a model. As Harre (1976) has observed:
It is just because we can form models of reality that we have the power to create a reality by
conceiving a structure which has the status of a model but whose subject we create on the basis of
the model (1976: 36).
So, the problem representation developed in the new-products decision
conference is a model. It is a paramorphic model, with decision theory
as its source, and a socially-shared understanding of the problem as its
subject. Many but not all features of the problem are captured by the
model whose chief role is to facilitate the subsequent construction and
management of a new reality.
Requisite models
We choose the term ‘requisite’ to distinguish this type of model from
descriptive, normative, optimal, satisficing or any other kind of model
commonly encountered in the decision literature. A model is requisite if
its form and content are sufficient to solve the problem. Put differently,
everything required to solve the problem is represented in the model or
can be simulated by it. A requisite model is a simplified representation
of a shared social reality. The model is simpler than the reality in three
respects: (1) elements of the social reality that are not expected to
contribute significantly to solving the problem are omitted from the
model, (2) complex relationships among elements of the social reality
are approximated in the model, and (3) distinctions in either form or
content at the level of social reality may be blurred in the model, as was
suggested in the discussion about ‘small worlds’. In the new-products
model, a criterion of major importance, company image, was omitted
from the model because any potential product that did not come up to
an adequate standard of uniqueness and value-for-money would be
screened out at an early stage. Secondly, the complex value relationships between costs and revenues were approximated in the model as
simple additive value structures, and thirdly, distinctions between present and future worth were not maintained in the model.
Requisite models are generated by the interaction between specialists
and problem owners in the problem. The specialists contribute the form
of the model and the problem owners provide content, though the
specialists also assist in encoding the content to be compatible with the
form. In the new-products decision conference, two decision analysts
listened to the initial problem description given by the six main
problem owners in the problem until it became apparent that all views
could be accommodated within a multi-attribute value model. The
analysts suggested this structure to the group and eventually a
hierarchical model was developed which contained over 30 end attributes; these were sufficient to include the value dimensions of any
individual problem owner. The analysts then helped the problem owners
to generate assessments of value on each dimension for the 10 potential
products and also to assess the relative weights associated with each
attribute and higher nodes. In the course of generating these assessments some attributes were found to be unnecessary and others needed
redefinition. The final model was characterized by 28 end attributes.
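The weight assessments "associated with each attribute and higher nodes" combine multiplicatively down the tree: normalize at each level, then multiply. A minimal sketch, using a hypothetical two-level tree rather than the conference's actual 28-attribute model:

```python
# Minimal sketch (not the conference's actual model) of how weights on
# higher nodes of a hierarchical value tree combine with end-attribute
# weights: normalize at each level, then multiply down the tree.

tree = {
    "Costs":    {"weight": 40, "children": {"Capital": 60, "Operating": 40}},
    "Benefits": {"weight": 60, "children": {"Revenue": 50, "Fit": 30, "Growth": 20}},
}

def global_weights(tree):
    """Global weight of each end attribute; results sum to 1."""
    top_total = sum(node["weight"] for node in tree.values())
    out = {}
    for node in tree.values():
        child_total = sum(node["children"].values())
        for child, w in node["children"].items():
            out[child] = (node["weight"] / top_total) * (w / child_total)
    return out

gw = global_weights(tree)
print(gw)   # e.g. Capital carries 0.4 * 0.6 = 0.24 of the total weight
```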
The process of building a requisite model is sometimes conducted in
a group, at other times by a succession of discussions between specialists and individual problem owners. But in all cases, the process is
consultative and iterative, with the specialist acting as a group facilitator using, as necessary, techniques and procedures drawn from social
analysis (Rowbottom 1977), group feedback analysis (Heller 1969),
group process work (Rice 1965, 1969; Gustafson et al. 1973) and soft
systems analysis (Checkland 1981). Thus, the specialist has a dual role:
to facilitate the work of the group by keeping it task oriented, and to
contribute to those aspects of the task concerned with model form, but
not content.
Sensitivity analysis plays a crucial role in developing requisite models
(Phillips 1982b), and it is here that flexible computer programs greatly
facilitate in-depth analysis. Altering individual assessments allows disagreements between individuals to be examined to see if they make a
difference in the final results. Changing one or more assessments over
ranges of plausible values helps to identify crucial variables in the
model. Providing alternative analyses (such as folding a decision tree
forward to provide a risk profile in addition to folding it backwards to
give expected monetary value) allows participants to see implications of
the model. Sensitivity analyses provide new insights into the problem,
usually causing the participants to modify the model, sometimes in
content, sometimes in form. Often the existing structure is replaced by
an entirely different structure, or the original modelling reveals crucial
elements that are better modelled in some different form. Thus, a
succession of models provides different perspectives which contribute
to a deepening understanding of the problem as new insights develop.
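The simplest such analysis can be sketched as a one-way sweep: vary a single weight over a plausible range and watch for the point at which the preferred option switches. The options, scores and range below are hypothetical.

```python
# One-way sensitivity analysis sketch: sweep one attribute weight over a
# plausible range and report where the preferred option changes.
# Options and 0-100 scores are hypothetical.

scores = {
    "Option X": {"cost_saving": 80, "benefit": 40},
    "Option Y": {"cost_saving": 30, "benefit": 90},
}

def best_option(w_benefit):
    """Highest-scoring option when benefit carries weight w_benefit."""
    w_cost = 1.0 - w_benefit
    totals = {name: w_cost * s["cost_saving"] + w_benefit * s["benefit"]
              for name, s in scores.items()}
    return max(totals, key=totals.get)

previous = None
for i in range(11):
    w = i / 10
    choice = best_option(w)
    if choice != previous:
        print(f"benefit weight {w:.1f}: preferred option switches to {choice}")
        previous = choice
```

If the preferred option never switches over the plausible range, the weight is not a crucial variable; if it does, the disagreement behind that weight matters and needs further work.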
A key feature is that the modelling process uses the sense of unease
among the problem owners about the results of the current model as a
signal that further modelling may be needed, or that intuition may be
wrong. If exploration of the discrepancy between holistic judgement
and model results shows the model to be at fault, then the model is not
requisite - it is not yet sufficient to solve the problem. The model can be considered requisite only when no new intuitions emerge about the problem. This criterion of requisiteness is necessitated by a model-building
process that is generative: requisite models are not plucked from
people’s heads, they are generated through the interaction of problem
owners (Phillips 1982a, 1984). As a consequence, the developing structure of the model implies and generates its functions.
A requisite model is always conditional - on structure, on current
information, on present value-judgements and on the problem owners.
Anticipated events, new information, changes in circumstances, a new
problem owner, all can introduce a sense of unease about yesterday’s
requisite model. At best, then, a requisite model is conditionally prescriptive; if the participants hold these beliefs and make these value
judgements within this structural representation, then here is the logical
result. But those ‘ifs’ are rarely satisfied because the shared social
reality of the problem at hand is always larger than its iconic representation in the model.
Let me elaborate, for the consequences are far-reaching. I have
argued that a requisite model is a small-world representation of a
shared social reality, and that the shared understanding of the problem
is used to create a new reality. This is represented in fig. 2. The first
point to note is that the to-be-created reality is more complex and so
contains more elements than the shared representation of the problem
which, in turn, is more complex than the requisite model. Overlapping
portions represent similarities or the ‘positive analogy’, to use the
description of Hesse (1966), while non-overlapping areas indicate dissimilarities or the ‘negative analogy’. It is possible to have common
elements which have not been explored; they constitute the ‘neutral
analogy’ because it is not yet known whether they can be classed as
belonging to the positive or negative analogy. However, this is exceptional in creating requisite models because exploration of the model has
been so thorough that all similarities and dissimilarities have been identified; if this were not the case, the sense of unease would likely remain and the model would not yet be requisite. The neutral analogy between the socially-shared reality and the to-be-created reality eventually disappears because the creative relationship to the larger reality requires the decision makers to attend to all elements of the positive and neutral analogies and consider how these are to be implemented, if at all.

Fig. 2. Relationship of three systems in constructing requisite models: the requisite model, the shared representation of the problem and the to-be-constructed reality, shown as overlapping sets.
Fig. 2 shows the relationship between the model and its referents at a
particular point in time: just after the requisite model has been completed. During the creation of the model the sizes of the sets and their
relative degree of overlap change, and the neutral analogy may be very
much in evidence. A key role of sensitivity analysis is to determine the
degree of overlap between the model and the socially-shared reality,
that is, to identify the important variables that will influence subsequent decisions. In a decision conference, for example, the initial model
may be wholly contained within the social reality; each problem owner
has contributed elements that he or she feels to be important, so the
collective social reality may be quite complex. Sensitivity analyses often
show that the model is unnecessarily complex, a judgement that can
usually be made only post hoc. The model is then simplified, with
further sensitivity analyses showing which variables are the crucial ones
in resolving any remaining conflict. At the end of the conference, the
problem owners know which elements of the model must be attended to
in their subsequent decisions. In other words, removal of irrelevant
elements reduces the sizes of the sets, while identification of insensitive
elements reduces the overlap between sets. In short, the key role of
sensitivity analysis is to reduce the size of the neutral analogy by
ensuring that all elements are assigned to either the positive or negative
analogy. It is in this way that new insights are generated about the
problem, and creative thinking is stimulated.
It should now be clear why I said above that the requisite model can
at best be conditionally prescriptive. The model provides the means for
the problem owners to achieve a common understanding of the problem and to develop new insights about it. But more is usually involved
in creating a new reality - a myriad of details of implementation,
elements deliberately omitted from the model because they are irrelevant to issues of strategy or tactics but not of operations, as well as
features that remain unexplicated, part of the ‘seat-of-the-pants’ or ‘gut-feeling’ experienced by decision makers at all levels in the organization. The requisite model does not prescribe action, even conditionally; rather, it is a guide to action. That is all that could be said of the
new-products model, because its implied prescription, to develop the
product that dominated all others in the analysis, was summarily
rejected. There was no need to modify the model to include the missing
attribute for it had served its purpose in its incomplete form. The model
was requisite even though the conditioning assumptions were incomplete, for it enabled the problem to be solved.
By providing a guide to action, requisite decision models help
decision makers to achieve an important goal: to construct a new
reality. This goal is partly achieved in the process of generating the
model: insights emerge which clarify and extend participants’ understanding of the problem. When model content and form have been
revised to the point where the sense of unease has dissipated and no
new insights are generated, then the model can be considered requisite.
At this point, problem owners will know what to do next, even if it is to
gather additional information to resolve any remaining ambiguities.
The requisite model itself does not necessarily prescribe action, and it is
rarely descriptive, normative, optimal or satisficing. The focus in creating requisite models is on analysis. Perhaps this is what Ron Howard
had in mind when he coined the term ‘decision analysis’ (Howard
1966). In any event, most of the objections of critics of decision analysis
(Tocher 1976, 1977) disappear when the goal is seen to be analysis, not
prescription, and when it is realized that a requisite model facilitates
the synthesis of a new reality.
Discussion
By now it should be apparent that requisite decision models and the
process of creating them are characterized by features, summarized in
table 1, that in combination are unique to this class of model. These
features raise several major issues. How is the validity of a requisite
model to be judged? How can we ensure that the model will be
adequate? What is the role of decision analysis? Are requisite models
applicable to other higher mental processes? Each of these questions
could be the subject of a separate paper, so I can only briefly deal with
them here.
The question of validity has exercised decision analysts for some
time, and they have provided a variety of answers. Howard (1973)
points out that before decision theory arrived on the scene, the question
could not be answered. Now, validity is to be judged by the coherence
of the process by which a decision is taken, not by the consequences.
The standard of coherent decision making is embodied in decision
analysis, and requires the decision maker to attend explicitly to information, embodied in structure and assessed as probabilities, and to
preferences, be they time preferences, risk preferences or utilities for
consequences, and the decision maker must apply the expected utility
rule.
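The expected utility rule itself is simple to state in code. The sketch below folds a one-stage decision tree backwards under a hypothetical risk-averse utility function; the acts, probabilities and payoffs are invented for illustration.

```python
# Sketch of the expected utility rule: fold a small decision tree
# backwards, choosing the act with the highest expected utility.
# Acts, probabilities, payoffs and the utility form are hypothetical.
import math

def utility(x):
    """A risk-averse utility over money (an illustrative exponential form)."""
    return 1 - math.exp(-x / 100.0)

acts = {   # act -> list of (probability, monetary payoff) pairs
    "launch":  [(0.6, 120), (0.4, -50)],
    "license": [(1.0, 30)],
    "abandon": [(1.0, 0)],
}

def expected_utility(lottery):
    return sum(p * utility(x) for p, x in lottery)

best = max(acts, key=lambda a: expected_utility(acts[a]))
for act, lottery in acts.items():
    print(f"{act:8s} EU = {expected_utility(lottery):+.3f}")
print("choose:", best)
```

With these invented numbers ‘launch’ has the higher expected monetary value (52 against 30 for ‘license’), yet the risk-averse utility reverses the preference; seeing the same tree folded both ways is precisely the kind of implication sensitivity analyses expose.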
Table 1
Features of requisite decision models and the process of generating them.

Definition: Model is requisite when its form and content are sufficient to solve the problem
Representation: Requisite model represents a shared social reality
Generation: Through iterative interaction among specialists and problem owners
Process: Uses sense of unease arising from discrepancy between holistic judgements and model results in sensitivity analyses
Criterion: Model is requisite when no new intuitions arise
Model status: Requisite model is at best conditionally prescriptive
Goal: To serve as guide to action, to help problem owners construct new reality

An approach occasionally used to validate multi-attribute utility models that purport to describe people's judgements is to compare the recomposed evaluations of the model with the holistic judgements of the people. Although high correlations are interpreted as validating the model, Humphreys and McFadden (1980) report that a MAU-based decision aid is judged to be most useful in cases where the correlation between initial model results and holistic judgement is low. When the correlation is high, the resulting MAU model provides a good description of judgements that are already so well-understood by the people
making them that the model is of little use in helping them to construct
a new reality. But when the correlation is low, people experience a sense
of unease and usually ask for further sessions with the aid, especially if
they cannot immediately see how to resolve the discrepancy between
their holistic judgement and the result from the model. After several
sessions with the aid, the model has usually seen several revisions, and
at the end the sense of unease has gone. It is then that people report
how helpful the aid was, that it helped them to identify and resolve
inconsistencies and conflicts in their thinking, and that they now know
what to do next. In short, a requisite model was developed. In descriptive modelling, high correlations may well be indicative of model
validity. But in requisite modelling, low correlations are expected at the
start of the modelling process, with high correlations necessarily emerging at the end. Since the discrepancy between holistic judgement and
model results is used to refine the model, validity cannot be judged by
reference to the correlation between these variables.
Parallels between decision analysis and psychotherapy have been
identified by Fischhoff (1983) who raises serious difficulties of validation if the analogy is taken seriously; Buede (1979) discusses similar
issues in contrasting decision analysis as engineering science or clinical
art. In reviewing research on decision aids, Slovic et al. (1977) and
Jungermann (1980) find no completely satisfactory answer to the question of whether the aids really improve the quality of decisions.
Another approach to validity was proposed by Edwards et al. (1968).
They suggested that validity be judged by comparing alternative approaches. Lacking any external standard against which to judge the
validity of a model, the best that can be done is to choose that model
which performs best as measured by pragmatic criteria. Extending their
argument reduces the problem of validity to one of evaluation: establish criteria, and proceed as in multi-attribute utility analysis (Ford et
al. 1979; Keeney and Raiffa 1976).
In a sense, any attempt at validation is a more or less complete
multi-attribute utility evaluation. All validation studies use some mixture of criteria (attributes) against which data provide measures of
performance that are associated with value or utility scales of differing
relative importance. ‘Scientific’ approaches to validation emphasize
public, repeatable procedures for obtaining data, objective measures of
performance, and agreed (statistical) methods for interpreting performance measures. Mapping performance measures to value or utility
scales, assessing the relative importance of each scale and combining
measures to form overall conclusions are all left to unaided human
judgements - but they are always done, if not by the author of the
study, then by the serious reader. The ‘clinical’ case-study approach to
validation provides substantial amounts of data and establishes their
relevance to a large number of criteria, but leaves much of the evaluation to the reader’s judgement. Both approaches consider performance
measures in a relative sense: experimental group compared to control group, and before-treatment to after-treatment, though many quasi-experimental designs are of the before-after variety as well. Without
elaborating further, I believe a convincing case can be made for
considering validation as a special case of multi-attribute utility evaluation.
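The weighted combination just described can be sketched in a few lines of code. The criteria, value functions, weights and scores below are hypothetical illustrations, not data from any actual validation study:

```python
# Validation framed as multi-attribute utility evaluation: map each raw
# performance measure onto a 0-1 utility scale, then combine with
# importance weights. All criteria and numbers are hypothetical.

def mau_score(scores, utilities, weights):
    """Weighted additive utility over a set of criteria."""
    total_w = sum(weights.values())
    return sum(
        (weights[c] / total_w) * utilities[c](scores[c])
        for c in scores
    )

# Hypothetical criteria mixing 'objective' and 'subjective' measures.
utilities = {
    'predictive_accuracy': lambda x: x,             # already on 0-1
    'user_confidence':     lambda x: x / 10.0,      # rated 0-10
    'cost':                lambda x: 1 - x / 100.0  # lower cost is better
}
weights = {'predictive_accuracy': 0.5, 'user_confidence': 0.3, 'cost': 0.2}

model_a = {'predictive_accuracy': 0.8, 'user_confidence': 8, 'cost': 40}
model_b = {'predictive_accuracy': 0.6, 'user_confidence': 9, 'cost': 20}

for name, scores in [('A', model_a), ('B', model_b)]:
    print(name, round(mau_score(scores, utilities, weights), 3))
```

The mapping of measures to utilities and the choice of weights are exactly the judgements that, in published validation studies, are usually left to the author or the reader.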
Seen in this light, validating requisite models requires the application
of evaluation models, which may themselves be requisite or not insofar
as they solve the ‘problem’ of validation. Thus, continuing debate
about the validity of a particular model (or of a type of model for a
class of problems, e.g., MAU models for evaluation problems) indicates
that a requisite validation model has not yet been developed. Validating
a requisite decision model requires the development of a requisite
evaluation model. Even if the model to be validated is itself an
evaluation model, there is no circularity here for although the generic
forms of the models are the same, their actual structures and content
are different. In addition, the decision model would be requisite for one
group of problem owners, while the evaluation model would be requisite for a different group. This difference in the referent groups also
prevents an infinite regress of requisite validation models.
All that has been said here about developing requisite models applies
to the creation of a validation model. Although I have suggested that
multi-attribute utility theory is a generic source of all validation models,
the form particular to a specific validation model might be drawn from
some additional source. This might be a branch of science, or some
aspect of clinical art, or any discipline or body of knowledge deemed
relevant. The positivist or hypothetico-deductive school of science would
partially impose both form (e.g., only deductive structures) and content
(e.g., only observation statements), but as that view of science has given
way, particularly in the social sciences, to the ‘new empiricism’ over the past twenty years (Hesse 1976), alternative forms have been recognized as characterizing the actual progress of scientific enquiry (Kuhn 1970). The view taken here is more consistent with that of the ‘new empiricism’; we would expect requisite validation models to exhibit a variety of forms, some looking more ‘scientific’, others more clinical, in the criteria they use, but all appearing multi-attributed in fundamental structure. Thus, requisite validation models may rely on experimental and control groups to provide data for comparing performance on certain criteria, and they may use human judgement to assess performance on other criteria. Criteria may be objective, as when measurable consequences of decisions are included, or relatively subjective, as when participants in a decision conference claim that decisions they subsequently took were different from those they would have taken without the benefit of the conference.
It may not be necessary, in a particular case, for critics to agree on
all aspects of a validation model; sensitivity analyses can be conducted
to examine the effects on validity of different criterion weights, for
example, putting all the weight either on ‘objective’ or on ‘subjective’
criteria. More generally, sensitivity analysis plays as important a role in
developing a requisite validation model as it does in generating any
requisite decision model. One could argue that for science the to-and-fro of confrontations in seminars, conferences and scientific journals attests to adversarial processes operating like sensitivity analyses on validation models.
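One way to picture such a sensitivity analysis is to sweep the total weight given to ‘objective’ criteria, with the remainder going to ‘subjective’ criteria, and watch whether the preferred model changes. The scores below are hypothetical:

```python
# Sensitivity analysis on criterion weights: vary the weight w placed on
# 'objective' criteria (the rest going to 'subjective' ones) and check
# whether the conclusion about which model is best survives. All scores
# are hypothetical illustrations.

objective = {'A': 0.8, 'B': 0.6}   # e.g. measurable decision consequences
subjective = {'A': 0.5, 'B': 0.9}  # e.g. participants' own judgements

for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    score = {m: w * objective[m] + (1 - w) * subjective[m]
             for m in ('A', 'B')}
    best = max(score, key=score.get)
    print(f"w_objective={w:.2f}  A={score['A']:.2f}  "
          f"B={score['B']:.2f}  best={best}")
```

With these particular numbers the preferred model switches as the weight moves from subjective to objective criteria, which is precisely the kind of finding that should send critics back to debating the weights rather than the verdict.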
I have argued that the validity of a requisite decision model is to be judged by applying a requisite evaluation model which will usually include a mixture of ‘hard’ and ‘soft’, ‘objective’ and ‘subjective’ criteria against which judgements are made about the process of decision making as well as about subsequent consequences. But since all this must occur after the requisite decision model has been developed, what guidelines can be applied while the model is being generated to ensure its subsequent adequacy? After all, a group of psychotics could develop a model that would be considered requisite by the standards proposed here.
The main safeguard against the creation of idiosyncratic, even eccentric, models is the same as for science: reliance on adversarial processes. A key requirement of decision conferences is that the problem owners must represent a variety of viewpoints. In considering
whether or not to develop and perhaps market a new product, the
design engineer will concentrate on function, the production manager
on efficient methods of manufacture, the sales manager on product
appeal, the financial controller on costs, the personnel manager on
staffing requirements, the managing director on profitability, market
share and future growth. Many of the concerns conflict: some desirable
functions may be difficult to produce efficiently, profitability may have
to be sacrificed to achieve market share, etc. By bringing all these
problem owners together, it is far more likely that all viewpoints will be
fairly represented in the model than if only ‘yes-men’ are in attendance.
Experience with decision conferences suggests that the adversarial
process helps participants to broaden their individual perspectives on
the problem, to change their views, to invent new options acceptable to
everyone, in short, to create a model that fairly represents all perspectives. But not even the adversarial process can prevent biases entering
the model through the influence of an ever-present variable: the climate
of the organization. A pervading complex of norms, values, expectations, the acceptable and the unacceptable, influences decisions
throughout the organization (Peters and Waterman 1982), often in
helpful ways, but sometimes in unhelpful ways. It is here that the
decision analyst’s role can provide some correction to the unhelpful
effects of climate. First, the experienced analyst who has worked with
different organizations has both the perspective and detachment to
detect and reflect back many of the biases that climate can introduce.
Secondly, an analyst who has minimal investment in the consequences
of the decision and who works with problem owners away from their
company, creates a reasonably neutral territory, thus minimizing the
effects of the company’s climate. This neutral climate also helps the
group to maintain a more balanced view of the problem, thus making
them more impervious to manipulation by particularly persuasive individuals who do not have a good case, especially when the analyst,
detecting attempts to manipulate the group, asks less vocal participants
for their views.
A final guideline to be considered when developing a requisite model
is discussed by Humphreys and Berkeley (1983). Following the model
of depth structure in organizations developed by Jaques (1976) and
Rowbottom and Billis (1978), Humphreys and Berkeley show, in effect,
that a model which is requisite at one level in an organization will
typically not be requisite at higher levels. This is because there are in all
organizations qualitative shifts in the nature of work as one moves from
one level to the next higher one, so that a model which supports work
at one level may not satisfy work at the next level. Differences in
requisite models at the various levels are matters of both form and
content, while the process of generating the model requires the decision
analyst to shift to the appropriate level in facilitating the work of
constructing each part of the model.
I have assumed that decision analysis plays an important role in
generating requisite models as the source for the form of the model. Of
course, it does more than that: it also provides guidelines for assessing
input quantities so that they will be compatible with the model’s
structure. But most important, decision analysis polices coherence while
the model is being constructed. Participants need not act as the
idealized individuals who are the starting point for normative decision
theory, they need only subscribe to coherence as a desirable characteristic for a requisite model. In this way, decision theory serves in an
advisory function (French 1983): it guides the construction of an
internally-consistent model. A major benefit is that it becomes easy to
pass from one structural form to another, for all structures will be
consistent with the general form of decision theory. Thus, posterior
probabilities from one model can serve as inputs to a decision tree
whose consequences might be modelled using multi-attribute utility
theory. Flexibility in adopting different structural forms is an essential
feature of tactical and strategic decision making in all organizations, as
Humphreys and Berkeley show.
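The passage from one structural form to another can be illustrated with a minimal sketch: posterior probabilities from a Bayesian revision become branch probabilities in a two-branch decision tree whose consequences are scored by a small additive multi-attribute utility function. All the numbers, hypotheses and attributes below are hypothetical:

```python
# Sketch of coherence letting structures interlock: Bayesian posterior
# probabilities feed a decision tree whose consequences are valued by a
# small multi-attribute utility model. All numbers are hypothetical.

def posterior(priors, likelihoods):
    """Bayes' rule over a discrete set of hypotheses."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

def mau(attrs, weights):
    """Additive multi-attribute utility on 0-1 attribute scales."""
    return sum(weights[a] * attrs[a] for a in attrs)

# Market state inferred from (hypothetical) survey evidence.
p = posterior(priors={'strong': 0.5, 'weak': 0.5},
              likelihoods={'strong': 0.8, 'weak': 0.3})

# Consequences of launching, valued on two attributes.
weights = {'profit': 0.6, 'market_share': 0.4}
launch = {
    'strong': mau({'profit': 0.9, 'market_share': 0.8}, weights),
    'weak':   mau({'profit': 0.2, 'market_share': 0.4}, weights),
}

# Expected utility of the 'launch' branch of the tree.
eu_launch = sum(p[s] * launch[s] for s in p)
print('P(strong | evidence) =', round(p['strong'], 3))
print('EU(launch) =', round(eu_launch, 3))
```

Because every part is coherent with decision theory, the output of the probabilistic model slots directly into the tree without any translation step.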
Thus, my choice of decision analysis as the source for requisite
models is predicated on the assumption that model coherence is a major
attribute of importance in constructing as well as validating the model.
It is important to recognize that this is a value judgement on my part,
one which the reader may not share. The theory of requisite decision
models presented here does not require agreement with my judgement;
other model sources may be found to be useful.
Finally, we might ask whether requisite models might apply more
generally in the social sciences. One reason why requisite models are
useful is that they provide structured ways of thinking about problems.
Perceptual processes benefit from constitutional, prewired structures
that can be located in particular areas of the brain. The same appears to
be true of language. But problem solving is not localized in the cortex,
nor is there any evidence of associated constitutional structures. To solve problems, we rely on memories of similar problems and we import structure from outside ourselves. Decision analysis is increasingly rich in possible structural representations, so that a vast number of problems can now be usefully conceptualized within that framework (Ulvila and Brown 1982). By providing structure to thinking, decision analysis can greatly facilitate creative problem solving. Insofar as other
sources are used for models, requisite models could well have a wider
impact in the social sciences. In one area, they have been in use for
many years, well before the advent of requisite decision models. That is
the study of the structure of organizations, where social analysis is one method for bringing about change in an organization’s structure, and where the creation of requisite organizations facilitates getting work done in harmonious and creative ways (Jaques 1976; Rowbottom 1977).
Whether or not requisite models prove useful in other areas of social science, their use in decision-making and problem-solving poses a new challenge to investigators of people’s capacity to judge, to evaluate, to assess and to decide. The current vogue in judgement research for describing the glass as half empty (Christensen-Szalanski and Beach 1984) not only sees people as limited and biased in their judgements, but is itself limited and biased in its presumption that what people do do is all that they can do. The lesson from requisite decision models is that when paramorphic models are developed by drawing on decision analysis for coherent structural representations within which judgements can be generated, with computers used to process information, people are capable of constructing futures that deal adequately, even well, with uncertainty, risk, multiple objectives and structural complexity. Michon has pointed out that requisite models provide an integration of structure and function, of competence and performance. The challenge, then, is to use this integrated viewpoint to find the conditions in which people can be intellectual athletes rather than intellectual cripples, to discover how intellectual functioning can be extended, to create a psychology of what people can do.
References
Buede, D.M., 1979. Decision analysis: engineering science or clinical art? Technical Report
TR79-2-97, McLean, VA: Decisions and Designs, Inc., Nov.
Checkland, P., 1981. Systems thinking, systems practice. Chichester: Wiley.
Christensen-Szalanski, J. and L. Beach, 1984. The citation bias: fad and fashion in the judgement
and decision literature. American Psychologist 39(1), 75-78.
Cliff, N. and F.W. Young, 1968. On the relation between unidimensional judgements and multidimensional scaling. Organizational Behavior and Human Performance 3, 269-285.
Edwards, W. and J.R. Newman, 1982. Multiattribute evaluation. Beverly Hills, CA/London: Sage
Publications.
Edwards, W., L.D. Phillips, W.L. Hays and B.C. Goodman, 1968. Probabilistic information
processing systems: design and evaluation. IEEE Transactions on Systems Science and
Cybernetics SSC-4, 248-265.
Fischhoff, B., 1983. ‘Decision analysis: clinical art or clinical science?’ In: L. Sjoberg, T. Tyszka
and J. Wise (eds.), Human decision making. Bodafors: Doxa. pp. 68-94.
Ford, C.K., R.L. Keeney and C.W. Kirkwood, 1979. Evaluating methodologies: a procedure and
application to nuclear power plant siting methodologies. Management Science 25, 1-10.
French, S., 1983. ‘A survey and interpretation of multi-attributed utility theory’. In: S. French, R.
Hartley, L.C. Thomas and D.J. White (eds.), Multi-objective decision making. London: Academic Press.
Gustafson, D.H., R.K. Shukla, A. Delbecq and G.W. Walster, 1973. A comparative study of
differences in subjective likelihood estimates made by individuals, interacting groups, Delphi
groups, and nominal groups. Organizational Behavior and Human Performance 9, 280-291.
Harre, R., 1976. ‘The constructive role of models’. In: L. Collins (ed.), The use of models in the
social sciences. London: Tavistock Publications.
Heller, F., 1969. Group feed-back analysis: a method for field research. Psychological Bulletin 72,
108-117.
Hesse, M., 1966. Models and analogies in science. New York: University of Notre Dame Press.
Hesse, M., 1976. ‘Models versus paradigms in the natural sciences’. In: J.N. Wolfe (ed.), Social
issues in the Seventies. London: Tavistock Publications.
Howard, R.A., 1966. ‘Decision analysis: applied decision theory’. In: D.B. Hertz and J. Melese (eds.), Proceedings of the Fourth International Conference on Operational Research. New
York: Wiley-Interscience. pp. 55-71.
Howard, R., 1973. ‘Decision analysis in systems engineering’. In: R.F. Miles (ed.), Systems
concepts: lectures on contemporary approaches to systems. New York: Wiley.
Humphreys, P.C. and D. Berkeley 1983. ‘Problem structuring calculi and levels of knowledge
representation in decision making’. In: R.W. Scholz (ed.), Decision making under uncertainty.
Amsterdam: North-Holland.
Humphreys, P.C. and W. McFadden, 1980. Experiences with MAUD: aiding decision structuring
versus bootstrapping the decision maker. Acta Psychologica 45, 51-69.
Jaques, E., 1976. A general theory of bureaucracy. London: Heinemann.
Jungermann, H., 1980. Speculations about decision-theoretic aids for personal decision making.
Acta Psychologica 45, 7-34.
Kaplan, A., 1964. The conduct of inquiry: methodology for behavioral science. New York and
London: Harper and Row.
Keeney, R.L. and H. Raiffa, 1976. Decisions with multiple objectives. New York: Wiley.
Kuhn, T.S., 1970. The structure of scientific revolutions. (2nd ed.) Chicago, IL: The University of
Chicago Press.
Peters, T.J. and R.H. Waterman, 1982. In search of excellence. New York and London: Harper
and Row.
Phillips, L.D., 1982a. ‘Generation theory’. In: L. McAlister (ed.), Research in marketing, Supplement 1: Choice models for buyer behavior. Greenwich, CT: JAI Press. pp. 113-139.
Phillips, L.D., 1982b. Requisite decision modelling: a case study. Journal of the Operational
Research Society 33(4), 303-311.
Phillips, L.D., 1984. ‘A theoretical perspective on heuristics and biases in probabilistic thinking’.
In: P.C. Humphreys, O. Svenson and A. Vari (eds.), Analysing and aiding decision processes.
Amsterdam: North-Holland.
Phillips, L.D. and T.K. Wisniewski, 1983. Bayesian models for computer-aided underwriting. The
Statistician 32, 252-263.
Rice, A.K., 1965. Learning for leadership. London: Tavistock Publications.
Rice, A.K., 1969. Individual, group and inter-group processes. Human Relations 22, 565-584; errata Human Relations 23, 498, 1970. Reprinted in: E.J. Miller (ed.), Task and organization. London: Wiley, 1976.
Ring, R., 1980. A new way to make decisions. Graduating Engineer, Nov. 46-49.
Rowbottom, R.W., 1977. Social analysis. London: Heinemann.
Rowbottom, R.W. and D. Billis, 1978. ‘The stratification of work and organizational design’. In:
E. Jaques, R.O. Gibson and D.J. Isaac (eds.), Levels of abstraction in logic and action. London:
Heinemann.
Savage, L.J., 1954. The foundations of statistics. New York: Wiley.
Slovic, P., B. Fischhoff and S. Lichtenstein, 1977. Behavioral decision theory. Annual Review of
Psychology 28, 1-19.
Tocher, K.D., 1976. Notes for discussion on “control”. Operational Research Quarterly 27,
231-240.
Tocher, K.D., 1977. Discussion on control - reply to comment on my paper. Operational Research
Quarterly 28, 107-109.
Ulvila, J.W. and R.V. Brown, 1982. Decision analysis comes of age. Harvard Business Review,
Sept.-Oct.
Warr, P.B., 1980. ‘An introduction to models in psychological research’. In: A.J. Chapman and
D.M. Jones (eds.), Models of man. London: Clark Constable.