Decision Quality, Confidence,
and Commitment with Expert Systems:
An Experimental Study
David Landsbergen
Ohio State University
David H. Coursey
Florida State University
Stephen Loveless
Florida International University
R.F. Shangraw, Jr.
Project Performance Incorporated
ABSTRACT
This experiment investigates the relationship between results
produced by an expert system (ES), its physical structure and
features, and the user's commitment and confidence in its solutions. One hundred one subjects from four MPA programs were
asked to evaluate ten applicants for a government budget officer
position. Among the findings: decision makers who utilized the ES
made higher-quality decisions but exhibited less confidence and
commitment than did those who worked without an ES; decision-maker involvement in the decision process corrected for this
disparity. We discuss the implications for ES design and future
research.
The authors dedicate this article to our
late colleague Stephen Loveless.
J-PART 7(1997): 1:131-157
An expert system (ES) is a computer program that emulates
human expertise whether it is obtained directly from experts or
from written sources such as regulations. The claimed benefits
are many: improved decision-making quality and consistency,
cost savings, expansion of organizational knowledge, among
others (Hadden 1986; cf. Gill 1995). These programs are quite
varied in their application, which includes training new employees, user-friendly front-ends to databases, and even the making of
decisions for employees by using the expert's reasoning (Coursey
and Shangraw 1989; Wong and Monaco 1995). During the late
1980s, expert systems began diffusing throughout government.
Descriptions of successful projects are prevalent, especially for
technical tasks in environmental policy and defense. There are,
however, serious problems such as the IRS's cancellation of its
Check Handling Enhancements and Expert System (CHEXS)
after severe criticism from the General Accounting Office
(GAO) because of project mismanagement (Quindlen 1992).
Although his study focused on private sector applications, Gill
(1995) found that over 66 percent of the expert systems that were
developed in the late 1980s were abandoned within five years.
While there are other, relatively new information technologies
with higher diffusion rates (Fletcher, Bretschneider, and
Marchand 1992), the high costs and possibly exaggerated benefits
require research clarifying the case study findings.
Relatively little research outside the case study literature
directly evaluates the use of expert systems in decision making.
This is partly attributable to the traditional view of management
information systems (MIS) researchers that expert systems are
simply a subset of decision support systems (Davis and Olson
1985). A decision support system (DSS) is designed specifically
to support decision makers in areas where they have been shown
to perform poorly, such as in underutilizing base rate information
(Meehl and Rosen 1955); ignoring fundamental principles of
sampling (Tversky and Kahneman 1974); showing an inability to
improve predictive accuracy with increased amounts of information (Oskamp 1965, and compare Sage 1981; Tversky and
Kahneman 1974). ESs and DSSs are similar in that they assist
with less structured problems and target improved decision-making quality, efficiency, and consistency. However, it is
generally argued that DSSs do not form conclusions but simply
supply information to decision makers (Efroymson and Phillips
1987) unlike some ESs. DSSs usually require quantification of a
problem while expert systems rely on heuristic reasoning. Hence
it is argued the ES is often more suitable than the DSS for ill-structured problems. This is significant since it is commonly
asserted that computerized decision aids may be more effective
with problems that are more structured (e.g., Power, Meyeraan,
and Aldag 1994). However, typical ES applications address
problems that are more structured than those addressed by DSSs, since many of
the programs emphasize actually forming the decision for the
user whereas DSS work stresses a more informative role. These
differing philosophies are attributable to the emphasis ES
developers place on patterning an expert's reasoning as the best
way to make decisions while DSS programmers are typically
more concerned with helping users expand or clarify the way
they view problems. Simon (1977) describes three phases of
decision making: intelligence (searching for conditions that
require a decision), design (developing and evaluating alternatives), and choice. ES programs are likely to be aimed at the
choice phase while DSS programs are likely to be aimed toward
intelligence and design.
There is considerable ambiguity concerning ES and DSS
differences. This is hardly surprising, given the diversity of the
programs in these domains. Expert systems are more typical in
relatively well-structured problems, but the technology is far
more flexible than are traditional DSSs. The alleged differences
are probably due more to the application than to the actual capabilities of the two technologies. ES writers have had an unfortunate tendency, in their glowing promises of potential benefits
and completed applications, simply to dismiss as irrelevant
computer-assisted decision-making studies in the mainstream MIS
literature.
Many authors have investigated whether DSSs actually result
in improved decisions (Aldag and Power 1986; Cats-Baril and
Huber 1987; Christen and Samet 1980; Gallupe, DeSanctis, and
Dickson 1988; Goslar, Green, and Hughes 1986; Goul, Shane,
and Tonge 1986; Joyner and Tunstall 1970; Power et al. 1994;
Sharda, Barr, and McDonnel 1988; Steeb and Johnson 1981;
Turoff and Hiltz 1982; Ye and Johnson 1995). It is not clear,
however, how decision makers integrate an essentially rational
component like a DSS into their sometimes irrational choice
processes. How do decision makers commit themselves to a
solution proposed by a DSS? How do decision makers justify
agreeing with a DSS suggestion when the decision maker must
consider several reasons the solution might be rejected even
though, on average, the DSS should improve performance? For
example, decision makers should consider if the DSS is offering
a solution outside its limited programmed expertise. This danger
is especially relevant to an ES where system performance often is
reduced drastically with just small departures from the
programmed knowledge. There are situations where decision
makers would have to overrule the ES, and it is expected that
they would do so.
Here we investigate the relationship between decision
quality, confidence, and commitment in a well-structured decision
task assisted by an ES. If decision makers are not committed to
their decisions, or are inappropriately confident, any objective
improvement in decision making can be obviated by the real-world factors affecting confidence and commitment. For
example, Christen and Samet (1980) found that decision makers
who used a DSS outperformed unassisted decision makers, but
because the assisted decision makers disagreed with DSS output
their actual performances were identical. If computer-assisted
decision making is to be effective, the amount of confidence and
commitment placed in a decision should be appropriate to the
quality of that decision.
In this article we survey previous research on DSS decision
quality, confidence, and commitment and propose a credibility
model where decision makers use credibility factors to determine
the level of confidence in and commitment to an ES/DSS solution.
After that we experimentally test hypotheses on the relationship
between decision quality, confidence, and commitment as well as
some of the real-world factors that affect confidence and
commitment.
PRIOR RESEARCH ON DECISION QUALITY,
CONFIDENCE, AND COMMITMENT
Decision Quality and Decision Commitment
Understanding the relationship between the use of ESs and
decision quality and commitment begins with the existing DSS
literature. Despite significant work on the importance of commitment in other decision-making contexts there has been little
empirical research on commitment to DSS decisions (see Bazerman 1990). The study of commitment is a vitally important area
of research because of the theoretical and practical benefits
in understanding how decision makers become committed and
remain committed to a decision, even when that decision may be
failing. An understanding of the significant factors and processes
would help leaders to harness and preserve commitment and,
conversely, to discontinue dysfunctional commitment to failing
courses of action.
Schwenk (1986) distinguishes between two types of commitment: commitment to a course of action and commitment to an
organization or leader. This study examines commitment to a
course of action. Using this definition, Staw (1976; 1981; 1984)
found that more resources are committed when a project is failing
than when it is successful. Some other factors responsible for an
increased commitment of resources are whether a decision is self-terminating—less commitment—or not—more commitment (Staw
and Ross 1978); whether there is social anxiety and an audience—increased commitment (Brockner et al. 1986); and whether
decision makers perceive that the poor results are due to bad
luck—lower commitment—or reflect negatively on themselves—
higher commitment (Brockner et al. 1986).
Several behavioral explanations are offered. Whyte (1989)
argues that many studies (Aronson 1976; Festinger 1957; Staw
and Ross 1978) stem from the psychological need to justify past
actions. A second approach uses cognitive science literature,
arguing that commitment can be explained by the framing of
decisions. Conlon and Wolf (1980) examine the relationship
between the nature of the problem-solving task and the likelihood of commitment. They find that decision makers who use a
"calculating strategy" to determine the causes of the failure of a
decision tend to be less committed to a failing course of action.
Prospect theory (Kahneman and Tversky 1979 and 1982) was
adapted by Whyte (1989), who argues that the way risk is
perceived by decision makers will affect how much someone
remains committed. Undoubtedly, both the psychological and
cognitive science approaches would apply to commitment and
DSSs when decision makers can vary the ways they use a DSS,
their reliance on it to determine a course of action, and how they
eventually justify their decision when they are presented with
contradictory information. The framework suggested in the next
section will examine the relationship between decision commitment and decision quality when the decision maker is supported
by an ES/DSS.
Decision Quality and Decision Confidence
A few experimental studies imply that there is not a positive
relationship, as intuition would suggest, between decision confidence and decision quality from DSSs. Indeed, the research is
quite mixed (for a summary, see Benbasat and Nault 1990).
There is still a tendency, though, to believe in the logic of a
positive relation (cf. Power and colleagues 1994). In some cases,
DSSs improve the quality of decisions without a concomitant rise
in confidence (Cats-Baril and Huber 1987; Christen and Samet
1980; Gallupe, DeSanctis, and Dickson 1988; Sharda et al.
1988). Alternatively, where DSSs do not improve the quality of
decisions, decision makers still have great confidence (Aldag and
Power 1986; Goslar et al. 1986). Some authors find lower quality
decision making with a computerized aid (e.g., Power and colleagues 1994). As mentioned above, Christen and Samet (1980)
found that decision makers who utilized a DSS outperformed
unassisted decision makers, but because the assisted decision
makers disagreed with the advice given by the DSS the actual
performance was the same.
In an experimental study on unstructured decision making
that involves career planning, Cats-Baril and Huber (1987)
partitioned the attributes of a computer into heuristics, interactiveness, and delivery mode (computer vs. paper). They found
delivery mode had no effect on the productivity of ideas or the
quality of decision making. Heuristics and interactiveness, however, did improve the quality of the decision while decreasing
confidence. It is unclear whether the reduced confidence was due
to the increased number of alternatives generated, the loss of
control over the decision, or the inherently threatening nature of
career planning. Once heuristics and interactiveness are separated from
computerization, it is unclear what qualities are left to describe
computerization other than the affective qualities associated with
a computer (or DSS). If the use of heuristics and interactiveness
is an essential quality of DSSs, the Cats-Baril and Huber finding
is consistent with the other research indicating the lack of a
positive relationship between confidence and quality.
Research on group decision support systems (GDSS) yields
similar results. Sharda and associates (1988) found that groups
who worked on a financial analysis model improved their
decision-making performance by posting higher profits than did
their unassisted competitors. However, no statistically significant
relationship was exhibited between profits (quality) and confidence. Gallupe and colleagues (1988) also found that, despite the
effect of increasing decision quality, use of the system resulted in decreased
confidence. This finding is contrary to others, where DSSs
tended to be associated with increased decision confidence (Steeb
and Johnson 1981; Turoff and Hiltz 1982). The system in the
Gallupe experiment increased decision quality by increasing the
number of alternatives available to the decision maker. Perceived
confidence decreased, however, due to the perceived lower probability of picking the correct alternative because of the higher
number of quality alternatives generated. Presumably, the number of alternatives was used by decision makers as a cue to how
much confidence they should have in their decisions.
In a study by Aldag and Power (1986), strategic decision
makers who utilized DSSs had confidence in their decisions, but
did not enjoy a corresponding rise in the quality of their decisions as measured by independent raters. According to Aldag and
Power, "confidence was placed more in the process adequacy and
problem-solving ability rather than on specific outcomes"
(p. 585). Goslar and associates (1986) found neither the subjects'
perceived confidence nor their performance level was affected by
the availability of a DSS, the amount of data training, or the
amount of available data.
This first wave of empirical research suggests that there is
little prospect that either decision quality or decision confidence
would automatically improve if decision makers were given
DSSs. But this preliminary conclusion needs qualification. In
addition to perhaps describing a behavioral phenomenon, this
result may be due to different decision tasks and experimental
designs. Many other factors can mediate the impact of DSS tools,
such as the nature of the decision task and the DSS itself (Alter
1977; Gallupe et al. 1988; Yoon and colleagues 1995) or the
usual organizational factors such as managerial support and end
user involvement (cf. Gill 1995). While this empirical finding is
disappointing, further research in normative prescriptions for
decision making may affect this curious result. Advances in
hardware and software will allow improved system friendliness
and fidelity to normative prescriptions, thus removing some of
the confounding factors in determining if DSSs result in increased
performance and an appropriate level of confidence.
In the meantime, however, a general theoretical framework
is needed to examine the reasons for the disparity between the
quality of decision making and decision confidence and commitment. A systematic examination of the factors that moderate their
relationship would provide deeper insights into human-machine
decision making. Such a framework also could provide a method
for cataloguing results and it could suggest new areas for
research.
A Credibility Assessment of Quality
The belief that the results, suggestions, or solution of an
ES/DSS in an intelligence, design, or choice process (Simon
1977) are acceptable necessarily involves intuitive decisions. This
belief does not result exclusively from revisiting the activities
in the various phases of decision making, because ES/DSS are
designed to do this and probably do it much better than the
decision maker. This may be especially true of task execution
expert systems that handle very well-structured tasks and practically make the decision for the user (Coursey and Shangraw
1989). Here the user does not utilize his own decision-making model
but is guided through the expert's model toward what is hoped to
be a similar solution. While expert systems offer explanation
utilities and other features to help the user understand the
expert's reasoning, it is unclear how much a user really adopts
the expert's framework. The development emphasis stresses validating and verifying the expert's decision model and involves the
user only in training and encouraging him to use the system. The
assumption is usually made that a user naturally will defer to the
program since it represents the way an acknowledged expert
would decide.
However, this assumption is questionable since the decision
maker has relatively little involvement in the decision model and
even may question the expertise or the appropriateness of the
expert's advice in a given decision. Making a decision is seldom
a singular process; it includes many decision subtasks
(Weiss 1982). These include decisions as to the accuracy of facts,
the relevance of decision rules, and the conclusion that a sufficient amount of information has been gathered. Since a decision
maker has less than perfect information, having relied on the
expert's decision model, intuition will usually serve as a bridge
for the information gaps that the decision maker must sometimes
leap. Rather, this intuitive decision about confidence looks at
credibility factors associated with the decision process. The
decision maker will rely on these credibility factors to assess the
amount of confidence and commitment to vest in an ES or DSS
proposed solution at the choice phase.
Credibility factors are qualities or attributes which, when
associated with a claim, immediately increase or decrease the
credibility or believability of that claim (Bozeman and Landsbergen 1989). They may relate directly to the quality of the
information (e.g., how the data was collected) or they may relate
rather indirectly through attributes of the data source and the
medium (perceived expertise and neutrality, use of graphics, oral
versus written presentation, difficulty in understanding data)
among many others (see Bozeman 1986; Landsbergen 1987).
Credibility factors are useful because they increase the
plausibility of a claim at face value, without requiring a thorough
model or explanation of how they lend credence to that
claim. There may be an explanation, but this connection is taken
for granted because of time, information, or other resource constraints. For example, if the decision maker encounters difficulty
in the making of a decision, the decision maker may conclude it
is a low quality decision and therefore have little confidence and
commitment.
The problem for the decision maker is that it is not possible
to make optimal decisions with problems such as missing, unreliable, or biased information. Therefore, decision makers must rely
on these credibility factors to decide whether they should commit
themselves to a decision. The credibility model has been used as
an explanation for the perceived underutilization of policy analysis (Bozeman 1986; Bozeman and Landsbergen 1989). Coursey
(1992) found empirical evidence that credibility was influential
enough to cause rejection of clearly optimal alternatives.
A program, whether it assists decision makers in the definition of a problem (intelligence phase of decision making), the
identification of alternatives (design phase), or the final choice
(choice phase), focuses entirely on the rationality of the decision
model, algorithm, or incorporated process. Given a set of initial
conditions for a bounded problem, the program will yield a solution the decision maker can accept, reject, or modify. The
difficulty occurs in deciding whether to commit to the solution
and how much confidence should be attached. The decision
maker can derive confidence and commitment by two routes. One
way is to reflect on the model incorporated within the ES/DSS;
because of cognitive or time constraints, however, the decision
maker will probably supplement this route by utilizing credibility
factors to arrive at a conclusion about the confidence in and
commitment to the solution proposed by the ES/DSS.
In some cases, these credibility factors will be related to the
rationality of the model or technique (e.g., sample size or the
reliability of the data). In other cases, decision makers will rely on
extrarational factors (the term rational is difficult to define, but
here a decision is rational if it has a ratio or reason to support
it). This reason would consist of a model or, minimally, of an
explanation (Rescher 1988). Given the plausibility of this framework, we will now discuss the hypotheses on the factors, including difficulty, that may affect decision confidence and commitment.
RESEARCH HYPOTHESES
Decision Quality, Confidence, and Commitment
While many of these empirical studies involve ill-structured
decisions, ESs could aid decision makers in making well-structured
decisions by relieving managers of technical, clerical, or computational distractions (Aldag and Power 1986) and by reducing
variance in decisions (Sharda et al. 1988). There is also the
difficulty in generalizing across the various roles these programs
take in the decision process. Here we focus on an expert system
assisting in well-structured problems. Specifically, we examine
the use of an expert system for a typical applicant evaluation for
a budget officer position in a local government. This program
was designed to actually make the evaluation decision based on
an expert's interpretation of requirements defined by regulation, and
therefore it is considered a task execution system. The procedures and criteria are clearly stipulated by a real local civil service
commission, and we ask if the ES helps decision makers do a
better job when they evaluate applicants by these organizationally
defined criteria.
Hypothesis 1: Decision makers assisted by the expert system will
select a higher percentage of qualified candidates.
Gallupe and associates (1988) found, in a group setting, that
a DSS acted as a memory repository, speeded up the necessary
clerical work, and "provided a suggested structure to the decision
process, and that groups which used the system features in a
systematic fashion were most successful in their decision making"
(p. 293). Sharda and colleagues (1988) see the DSS serving the
individual decision maker in the same capacity.
The increased uniformity and quality, however, are potentially costly. A benefit is derived from the ES providing a
memory repository and handling the calculations. However, such
features reduce the incentives for the decision maker to personally model the information, especially when the program is to
deliver the solution. No longer must the decision maker mentally
retain information or the model of that information. Consequently, the ES may make it easy for the decision maker to
withdraw from the decision process, reducing effort and commitment. Working with the ES may increase the quality of the decision, but the decision maker may be removed from the decision,
thereby reducing the sense of responsibility. The result is a
decline in the decision maker's commitment and confidence.
When asked the reason for the decision, the decision maker has
no justification because the model is captured in the ES/DSS.
Here, the credibility factor, the "lack of an explicit model or
reason" decreases the ultimate confidence a decision maker has in
the final decision because the decision maker has little "internal
justification" (i.e., justification related to the information itself)We would expect, however, that the use of an ES would have
factors that increase decision confidence and commitment. One of
these credibility factors is the mere "face validity" (Aldag and
Power 1986). Given the conflicting reasons on the amount of
confidence and commitment a decision maker could be expected to
have in an ES decision, we test the following hypotheses:
Hypothesis 2: Decision makers assisted by the expert system will
tend to display less confidence in and commitment
to their decisions than will unassisted decision
makers.
Hypothesis 3: Decision maker confidence and commitment will
not be associated with decision quality (the
number of highly qualified candidates selected).
If it is true that decision confidence and commitment are not
directly affected by decision quality, then what are some of these
credibility or real-world factors that may influence decision
quality, confidence, and commitment?
Decision Difficulty
In one of the first public administration experimental works
on the effect of computers on commitment, Bozeman and Shangraw (1989) found that difficulty in accessing information results
in reduced commitment. This result seemed to corroborate earlier
findings by O'Reilly (1982), Bozeman and Cole (1982), and
Allen (1977) on the importance of availability of information.
Gallupe and associates (1988) also examined the relationship
between difficulty and the level of confidence, but they argue that
decision difficulty indirectly affects confidence and commitment
through the quality of the decision. This idea relates to the belief
that less structured problems, perceived as more difficult, are less
successful candidates for ES and DSS assistance (Power and colleagues 1994). In the framework proposed here, the decision
maker cannot adequately assess the quality of the decision, since
the model is relatively hidden. Rather, the decision maker uses
difficulty as a surrogate measure for decision quality. As the
difficulty of the decision process increases, the decision maker
feels less confident about the quality of the decision process and
consequently less committed to the decision.
Hypothesis 4: Among those using the expert system, decision
makers who express difficulty with the decision
process will have less confidence and commitment
than decision makers who experience no difficulty.
Personal Involvement and Personnel Experience
If reduced confidence and commitment in an ES/DSS-assisted
decision can be explained by loss of control and reduced responsibility, it may be interesting to examine the factors that increase
the perception of control and responsibility. When a decision
maker is involved with the decision process instead of relegating
many decisions to the ES, confidence in and commitment to the
decision may increase. One factor that may draw the decision
maker into the decision process is the influence of his personal
information. When personal information is important in the
decision process, we would expect decision makers to actively
evaluate, in terms of their own experience, why an ES asks particular questions and whether these questions should be asked at
all. Use of their personal information increases the decision
makers' interest in the reasoning of the problem, and the decision
makers begin to build their own models to justify the decision.
This active interest in the process of model construction also
could be expected where the decision maker has experience in the
problem domain: in this case personnel management. Again,
when the decision maker has personnel experience, we would
expect decision makers to be actively engaged with the model
incorporated into the ES.
Hypothesis 5: The expert system effect of reducing confidence
and commitment will be lessened as the decision
maker uses personal information and models in the
decision.
Hypothesis 6: When a decision maker has domain knowledge
(personnel experience), the expert system effect
of reducing confidence and commitment will be
decreased.
The Use of an Explanation Utility
Some researchers have suggested that ESs could improve
decision making where the logic behind a decision algorithm is
transparent to the user (Waterman 1985). Developing expansion
utilities would allow decision makers to involve themselves in the
decision-making activity and to better understand the process.
These expansion utilities will, upon request, provide more information to the decision maker. For example, the expansion utility
may explain why a question was asked, tell where information
could be found, or give explanations for key assumptions. Sharda
and colleagues (1988) suggest that builders need systems that
offer more power and flexibility to users by enabling them to
better understand and modify model relationships. Implicit in this
suggestion is that explanations of the model will improve decision
quality and confidence. Ye and Johnson (1995) found a positive
relation between decision quality, confidence, and use of explanation facilities in a small experimental study that reviewed
twenty practicing auditors' evaluation of an ES's output.
Expansion utilities also could be viewed simply as vehicles
to provide additional detail. Although they did not directly test
the expansion utilities, Senn and Dickson (1974) examined the
relationship between confidence and the level of detail supplied to
a decision maker. They found that confidence was not affected by
the level of detail. The information given to the decision maker,
however, was not about the decision rules that were employed,
but rather it was more detailed information to be processed.
Schroeder and Benbasat (1975) found a very weak association
between the amount of information requested by decision makers
and the resulting confidence in their decisions. They also found,
however, that decision makers in an uncertain environment tend
to have highly varying information demands. With an expansion
feature, the expert system may provide the decision maker with a
personalized system, but whether this has no effect on performance, as Schroeder and Benbasat (1975) suggest, is a question
that merits further study.
Hypothesis 7: Decision makers who use an expansion utility will
display more confidence and commitment to their
decisions than those individuals who do not use an
expansion utility.
METHOD
Experimental Decision and Subjects
The experimental decision involved ranking the top three
applicants from a pool of ten for a budget officer position in a
local government, a classic rule of three determination. The
experiment incorporated the actual documents, procedures, and
criteria currently used in real personnel hiring decisions at a
county civil service board. The decision was highly structured,
with clear, legally required evaluation criteria (e.g., bachelor's
degree in accounting or equivalent experience, "considerable
knowledge of labor law and relations"). Since the design of the
ES followed a paper-based system currently in use, an expert
involved with administering the system was asked to validate
the design and results of the personnel system. Being mindful of the
accumulated experience gained in the Minnesota experiments
(Jarvenpaa et al. 1985), a rigorous pilot test was conducted to
improve internal validity. Ten individuals, four of whom were
midcareer students with experience in personnel decision making,
were asked to participate in the pilot study. Their comments, and
the comments of four additional experts in personnel from state
and county governments, were used to validate the realism of the
materials and the selection process.
After this pilot test, 101 master's and doctoral students from
four public administration programs (Alabama-Birmingham,
Florida International, Ohio State, and Syracuse) participated in
the research as an ungraded, in-class exercise. Students had various kinds of work experience. Some entered their programs
directly from college while others were in continuing education
programs. The subjects had a mean of 7.8 years of work experience, with roughly half the participants reporting budgeting or
personnel experience.
Treatments
Decision makers were randomly assigned to three treatment
groups: a paper-treatment group (n = 38), an ES without explanation utilities group (n = 33), and an ES with expansion utilities
group (n = 34). Subjects in the paper-treatment group were
given a worksheet with the major criteria and calculations
required to rank the applicants. Subjects in the ES treatment
groups were given an ES using the same evaluation criteria,
scoring system, and explanations utilized by the paper-treatment
group. All decision makers were expected to apply a worksheet
with evaluation criteria (quality ranking factors) to job candidates
when they screened applicants for a final hiring decision. While
decision makers in the paper-treatment group were free to work
in any manner they wished, decision makers in the ES treatment
groups were solicited systematically by the ES to complete the
worksheet. For example, the decision maker would first be asked
to rate, on the basis of the applicants' files, the level of
budgetary experience and other minimal qualifications. The ES
would inform the subject if an applicant failed to meet minimal
requirements. If the applicant passed minimal requirements, the
ES would move through the quality ranking factors and produce
a final evaluation score for that applicant. These factors are a
series of skills validated by detailed job analyses of the position,
such as "considerable knowledge of the principles of accounting."
The two ES treatment groups differed in that one offered the use
of on-line expansion utilities. At any point in the working
session, a subject in this treatment group could make use of an
expansion utility by pressing a command key. The expansion
utility would help the subject by explaining the meaning of a
question, what information to look for, or where subjects could
locate information in the applicant's file. The expansion utility
also provided guidance to subjects on how they might score a
particular quality-ranking factor by providing examples and
typical ratings.
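To make the screening sequence concrete, the following is a minimal sketch of the knockout-then-score logic described above, written in Python for illustration. The criterion names, weights, and rating scale are hypothetical stand-ins, not the civil service board's actual values.

```python
# Illustrative sketch of the task-execution screening logic described
# above. Criterion names and weights are hypothetical, not the civil
# service board's actual values.

MINIMUM_REQUIREMENTS = ["accounting_degree_or_equivalent"]  # knockout rules

QUALITY_RANKING_FACTORS = {          # factor -> hypothetical weight
    "budgetary_experience": 3.0,
    "knowledge_of_accounting": 2.0,
    "knowledge_of_labor_law": 1.0,
}

def evaluate_applicant(ratings):
    """Return a final evaluation score, or None if the applicant
    fails a minimal requirement (the ES would inform the subject)."""
    for requirement in MINIMUM_REQUIREMENTS:
        if not ratings.get(requirement, False):
            return None
    # Cycle through the quality-ranking factors, as the ES does,
    # and accumulate a weighted score from the subject's ratings.
    return sum(weight * ratings.get(factor, 0)
               for factor, weight in QUALITY_RANKING_FACTORS.items())

def rank_top_three(applicants):
    """Rule of three: return the three highest-scoring applicants."""
    scores = {name: evaluate_applicant(r) for name, r in applicants.items()}
    qualified = {n: s for n, s in scores.items() if s is not None}
    return sorted(qualified, key=qualified.get, reverse=True)[:3]
```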
The information given to the paper-treatment group and to the
ES group without expansion utilities was identical.
The only differences between these two treatment groups were
that some information appeared on the computer instead of on
paper, and the decision maker was systematically cycled through
all the ranking factors. The information, however, given to the
ES group with expansion utilities differed due to the explanatory
capabilities incorporated within the expansion utilities. In most of
the analysis, where ES-assisted decision making was compared
with unassisted decision making, the two ES groups were combined in order to increase sample size. The results of these
analyses did not change when the two groups were kept disaggregated. The only time the two ES groups were separated was
when the effects of expansion utilities were tested. ANOVA was
used to evaluate the random assignment of subjects to treatment
groups. There was no statistically significant difference among
the groups as to computer experience (p = .74) and completion
time (p = .24).
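As a sketch of that randomization check, a one-way ANOVA can be run on any pretest covariate across the three treatment groups. The scores below are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Simulated pretest scores for the three groups (paper n=38,
# ES n=33, ES with expansion utilities n=34); the real analysis
# would use covariates from the pretest questionnaire.
paper = rng.normal(5, 2, size=38)
es_plain = rng.normal(5, 2, size=33)
es_expand = rng.normal(5, 2, size=34)

# With successful random assignment, the groups should not differ
# significantly on pretest covariates such as computer experience.
f_stat, p_value = f_oneway(paper, es_plain, es_expand)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")
```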
Experimental Protocol
Each subject received a packet that contained a pretest
questionnaire; a candidate ranking sheet; a description of the
decision task; a job classification description for the budget
officer position; a copy of the job opening announcement; ten
applicants' folders, each containing an employment application
with supporting documentation; and a sheet of scratch paper.
Subjects worked in a group setting, and each experimental session lasted approximately two hours. The subjects began the
session with a pretest questionnaire to collect data on the control
variables (e.g., age, attitude, education, personnel experience,
computer experience, and so forth). They were given twenty-five
minutes to complete this section.
After all of the subjects finished the pretest a brief statement
was read, which described the purposes and objectives of the
working session. The subjects had one hour and forty-five minutes to complete this part of the experiment. Because the rating
system was automated for the ES treatment groups, the ES sessions also
began with a brief description of the purpose of four function keys. Subjects
utilizing the ESs were free to return to a previous question and
change their answers or to completely restart their sessions.
Upon request they also could receive assistance with program
operation.
When subjects finished the treatment portion of the experiment and submitted a final decision, they were always handed a
contradictory solution in order to determine decision commitment. This solution was presented as the actual personnel board
decision. The subjects then were handed the first posttest questionnaire, which asked whether they would like to change or
reexamine their decisions as well as how much confidence they
had in their decisions. The subjects were then given the second
posttest questionnaire, which asked about decision difficulty and
included a memory test to determine which information was
salient to the subjects and whether the subjects actively participated in the experiment. They had seven minutes to complete this
final questionnaire. Subjects were then debriefed for their personal observations.
Measures
Two dependent variables were the focus of this experimental
study. One binary dependent measure, commitment, records
whether decision makers would be willing to change their minds
if they were presented with information contradictory to their
expressed decisions. Staw and Fox (1977) have shown that commitment to a course of action is greatest immediately following
negative consequences. Commitment, therefore, identifies those
individuals who are definitely not committed to their decision.
The second binary dependent variable, confidence, measures
whether the subjects were confident in their decisions. This
binary measure was transformed from a Likert item that asked the decision
makers to report how much confidence they had in the decision
as compared to the decision "eventually made by the personnel
advisory board." No significant differences in the results were
obtained between these two alternative measures of confidence.
The binary measure results are reported for ease of presentation
and understanding. Multivalued limited dependent variable
models, like ordinal probit, that could be used for the ordinal
case have rather complicated interpretations. The variable ES
indicates the treatment groups (0 = paper; 1 = either of two ES
groups). To test the expansion utility effects, the ES observations
are categorized by expansion (1 = has expansion utility available, 0 = no expansion utility available).
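A minimal sketch of this coding scheme follows, assuming a five-point Likert item and an illustrative cutpoint (the text does not report the exact cutpoint used); the column names are likewise illustrative.

```python
import pandas as pd

# Hypothetical raw responses; column names are illustrative only.
raw = pd.DataFrame({
    "confidence_likert": [1, 2, 4, 5, 3],
    "group": ["paper", "es", "es_expand", "es", "paper"],
})

# Collapse the Likert confidence item to a binary measure
# (assumed cutpoint: ratings of 4 or 5 count as confident).
raw["confidence"] = (raw["confidence_likert"] >= 4).astype(int)

# Treatment indicators as coded in the text: ES = 1 for either ES
# group; expansion = 1 only when the expansion utility was available.
raw["es"] = (raw["group"] != "paper").astype(int)
raw["expansion"] = (raw["group"] == "es_expand").astype(int)
```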
Decision quality is measured as the number of candidates
correctly selected by the subject, and correctness is defined as
matching the top three candidates as determined by an expert in
personnel decision making. This expert was a leading official in
the county's personnel system, from which the experimental
materials were derived, and had been instrumental in the development of the evaluation criteria for the position and of the
expert system. While there could be some differences among
expert evaluations of the applicants, our measure of decision
quality emphasizes adherence to organizationally defined
evaluation criteria. These criteria were developed using detailed
job analyses, and they yield a very structured evaluation. Consistency in application of the criteria is an important element of
decision quality. Additionally, it reflects the standard practice of
building expert systems from one expert's knowledge.
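The quality measure reduces to a set intersection between the subject's and the expert's top-three selections; a small sketch with hypothetical applicant names:

```python
# Decision quality: how many of the subject's top three picks match
# the expert's top three. Applicant names are hypothetical.
EXPERT_TOP_THREE = {"applicant_2", "applicant_5", "applicant_9"}

def decision_quality(subject_top_three):
    """Number of correctly selected candidates (0 through 3)."""
    return len(EXPERT_TOP_THREE & set(subject_top_three))

print(decision_quality(["applicant_5", "applicant_1", "applicant_9"]))  # 2
```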
Three different five-point Likert measures of difficulty were
utilized. Decision makers were asked to report their difficulty in
accessing information (how difficult it was to locate information
on the candidates); in making a decision (how difficult it was to
make a decision); and in working with the ES system (how much
difficulty they had in working with the quality-ranking factors).
Domain experience (in this experiment, experience with
personnel) is captured by asking the subjects whether they made
hiring and firing decisions or had substantial input into hiring and
firing decisions in a previous or current professionally related
job. The subject could give a dichotomous yes/no response to this
question (1 = domain experience [personnel experience]; 0 = no
domain experience).
Influence of personal information includes not only
personnel experience a subject might possess but any kind of
personal information or experience that entered into the decision
process. This was captured by asking the decision makers: To
what degree did your personal experiences and background influence your decision? The measure was captured by an ordinal
variable but was recoded as a dichotomous variable (1 = influenced
by personal information; 0 = not influenced by personal information)
for ease of presentation. Results did not differ when the analysis
used the original ordinal scale. Exhibit 1 shows the
intercorrelation matrix of these variables.
Exhibit 1
Intercorrelations of Major Variables
(Kendall's Tau-B coefficients; number of observations in parentheses)

Variable                          1       2       3       4       5       6       7       8       9
 1. Confidence
 2. Commitment                  .44***
                                (100)
 3. Quality                      .08    -.08
                                 (97)    (97)
 4. ES Treatment                -.17†   -.27**   .23**
                                (100)   (101)   (101)
 5. Decision Difficulty          .11    -.031   -.15†    .007
                                (100)   (101)   (101)   (101)
 6. Accessing Information        .12     .057    .05     .009    .23**
    Difficulty                  (100)   (101)   (101)   (101)   (101)
 7. Difficulty Using the        -.14    -.12     .08     .01     .33***  -.17*
    Rating Criteria              (99)   (100)   (100)   (101)   (101)   (101)
 8. Domain Experience           -.11    -.25*    .08     .16     .11    -.18†    -.05
                                 (80)    (81)    (81)    (83)    (83)    (83)    (82)
 9. Personal Information         .10     .08     .17†   -.05     .03    -.0003    .01     .11
    Influence                   (100)   (101)   (101)   (101)   (101)   (101)   (101)    (83)
10. Expansion Utility            .021   -.13     .15     .52***  .016   -.02     -.11     .15     .01
                                (100)   (101)   (101)   (101)   (101)   (101)   (101)    (83)   (101)

Key: Kendall's Tau-B coefficient (Prob > |Tb|): †p < .10; *p < .05; **p < .01; ***p < .001.
RESULTS
Decision Quality, Confidence, and Commitment in ES
Hypothesis 1 holds that use of an ES is associated with a
higher-quality decision. As exhibit 1 indicates, there is a significant positive relationship between the availability of an ES and
choice of the top three qualified candidates (Tau-B = .23,
p < .01). Decision makers who worked with an ES did make
higher-quality decisions.
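A Tau-B statistic of this kind can be reproduced on data of this shape with scipy, whose kendalltau defaults to the tau-b variant (appropriate here because the binary treatment and the 0-3 quality count both produce ties). The data below are simulated stand-ins, not the study's results.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
# Simulated stand-ins: ES availability (0/1) and the number of
# correct top-three selections (0-3) for 101 subjects.
es = rng.integers(0, 2, size=101)
quality = np.clip(rng.poisson(1 + es), 0, 3)

# kendalltau computes tau-b by default, which adjusts for the ties
# that binary and count variables inevitably produce.
tau, p = kendalltau(es, quality)
print(f"Tau-B = {tau:.2f}, p = {p:.3f}")
```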
Hypothesis 2 suggests that an ES should affect decision
confidence and commitment. The hypothesis is that decision confidence and commitment suffer when a decision maker utilizes an
ES. As indicated by exhibit 2, there is a statistically significant
inverse (negative sign) relationship between availability of an ES
and confidence and commitment. When an ES is available, the
decision maker is less confident and committed to his decision.
Hypothesis 2 is supported. Decision makers assisted by an ES
displayed less confidence and commitment than those decision
makers who were not assisted by an ES.
Hypothesis 3 holds that decision confidence and commitment will
not be associated with decision quality. Indeed, decision quality does not
have a statistically significant Kendall's Tau-B correlation with
confidence or commitment (for the ES-assisted group, n = 65;
and see exhibit 1 for all subjects, n = 97).
The results suggest that ES can improve personnel decision
making but may actually reduce confidence and commitment.
Commitment and confidence, however, are not affected by decision quality. Organizations should not just supply a decision
maker with an ES and assume that confidence and commitment
will automatically increase along with the quality of the decision.
Exhibit 2
Logit Response Model for Decision Confidence and
Commitment by Use of ES

Model (dependent)        Beta    Prob     X2      N
Commitment              -1.23    .006    7.57    101
Confidence               -.77    .08     2.92    101

Note: ES "1" if available, "0" if not available; commitment and confidence both
"1" if confident/committed, "0" if not.
Before a decision maker will successfully act upon a high quality
decision, the findings suggest that the decision maker also must
have confidence and commitment to that decision. If decision
makers using an ES exhibit a negative relationship between
decision quality and decision confidence and commitment, what
factors can moderate or explain this negative effect?
Factors Affecting Decision Confidence and Commitment
Contrary to earlier research findings (Bozeman and
Shangraw 1989), decision difficulty was not a factor moderating
decision confidence and commitment. This was true of all of the
difficulties a decision maker might encounter: difficulty in
accessing information, making the decision, and in working with
the ES system. Whether a person reported difficulty with a
decision or not, there was no effect on decision confidence or
commitment. Even though decision quality was negatively correlated with difficulty (decision difficulty, exhibit 1), it did not
prove to be a moderating variable between ES and confidence
and commitment, the focus of this study. None of the logit
response models were statistically significant (p > .10 with
n = 65). Hypothesis 4 is not supported. These findings on decision difficulty substantiate the earlier findings of Christen and
Samet (1980), Gallupe and colleagues (1988), and Weiss (1982).
Hypothesis 5 suggests that when decision makers report high
influence of their personal information, they have more decision
confidence and commitment. Once decision makers use personal
information, it is likely they will have some personal justification
for the final decision. Confidence and commitment did vary
depending on whether persons were heavily influenced by personal information (see exhibit 3). When decision makers reported
low influence of personal information the use of an ES reduced
confidence and commitment, as was previously found in testing
hypothesis 2. The decision makers had low confidence if they
worked with a computer and higher confidence if they worked on
paper. When decision makers reported high influence of personal
information, however, the effect was not statistically significant.
This is evidence that the influence of personal information has a
moderating role in counteracting the ES effect of reducing confidence and commitment.
Exhibit 3
Logit Response Model for Decision Confidence and
Commitment by Use of ES Moderated by Influence of
Personal Information

Model (dependent)        Beta    Prob     X2      N
When subjects reported low influence of personal information
Commitment              -2.39    .001   10.12    47
Confidence              -1.95    .008    6.97    47
When subjects reported high influence of personal information
Commitment                .37    .53      .39    54
Confidence                .19    .74      .11    54

Note: ES "1" if available, "0" if not available; commitment and confidence both
"1" if confident/committed, "0" if not.

Having domain experience (here, experience with personnel
decisions such as hiring and firing) was also a significant
moderating factor of the ES effect upon decision confidence and commitment (see exhibit 4). When an individual had experience with
hiring or firing, the use of an ES no longer affected confidence
or commitment. If the decision maker, however, had no such
experience, then the effect of the ES was strong in decreasing
decision confidence and commitment. Hypothesis 6 is supported.
One interpretation is that when the decision makers had domain
experience or were influenced by personal information, this
provided a justification for the decision makers when they were
confronted with contradictory information, and thus moderated
the effect of the ES in reducing confidence and commitment.
Another interpretation is that these factors keep decision makers
involved in the decision process, and when they are confronted
with contradictory information they have confidence and commitment because they have worked with the information and may
even have a model to justify the conclusion. The ES effect of
reducing confidence and commitment also was moderated when
decision makers had domain experience.

Exhibit 4
Logit Response Model for Decision Confidence and
Commitment by Use of ES Moderated by Domain Experience

Model (dependent)        Beta    Prob     X2      N
When subjects reported low domain experience
Commitment              -1.80    .01     5.59    40
Confidence              -1.54    .04     4.05    39
When subjects reported high domain experience
Commitment               -.46    .52      .42    41
Confidence               -.27    .69      .15    41

Note: ES "1" if available, "0" if not available; commitment and confidence both
"1" if confident/committed, "0" if not.
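The moderation tests in exhibits 3 and 4 amount to refitting the ES logit separately within each subgroup. A sketch with simulated data (effect sizes invented, merely mirroring exhibit 4's pattern) follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Simulated subjects: the ES is assumed to depress commitment only
# when domain experience is absent, mirroring exhibit 4's pattern.
df = pd.DataFrame({
    "es": rng.integers(0, 2, size=101),
    "domain_experience": rng.integers(0, 2, size=101),
})
p_commit = 0.7 - 0.3 * df["es"] * (1 - df["domain_experience"])
df["commitment"] = rng.binomial(1, p_commit)

# Refit the logit within each subgroup, as in exhibits 3 and 4.
for level, sub in df.groupby("domain_experience"):
    X = sm.add_constant(sub["es"])
    fit = sm.Logit(sub["commitment"], X).fit(disp=0)
    print(level, round(fit.params["es"], 2), round(fit.llr_pvalue, 3))
```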
Finally, one of the attractive features of expansion utilities is
that they can illuminate the algorithm underlying the ES. If this is
the case, one would expect that the use of expansions would
enable the decision maker to better understand the decision process. This increased understanding would make the model more
transparent, involve the decision makers, and provide them with
a justification to support the choice suggested by their ES. However, the logit model testing the effect of the availability of an
expansion utility was not statistically significant in moderating
the ES effect on decision confidence or commitment (n = 24,
p > .10). Perhaps the results would change if the expansion
system provided information on the decision model in addition to
the information on the procedural aspects of the decision process.
SUMMARY AND DISCUSSION
Most people assume that the level of confidence and
commitment should be appropriate to the quality of the decision.
If one displays confidence and commitment to a decision this
should indicate that one has a quality decision. This assumption,
when it comes to ESs/DSSs, is questionable based on this study
and on previous DSS research.
Intuitively, it would seem that if an ES improves the quality
of decision making, confidence should increase and there should
be more commitment to one's decision. This study did find that
an ES improved decision quality. However, as in past studies that
focused on ill-structured decision tasks, this study found both the
absence of a positive relationship and also a negative relationship
between decision quality and decision confidence and commitment.
This result is troublesome. If ESs improve decision making
without a corresponding rise in confidence and commitment, the
theoretical advantages of an ES will remain unmet, since commitment and confidence in the decision are necessary. This is an especially important finding for government decision makers utilizing
an ES, given their typical low confidence, risk-averseness, and
the "gold fishbowl phenomenon" (Bozeman and Bretschneider
1986). In some cases, commitment and confidence may be more
important factors, given the difficulty of finding optimal solutions
in public policy and management decisions (Appleby 1953). The
ES becomes a black box. The rationale for the ES suggested
solution is no longer generated by the decision maker and so the
decision maker does not have a justification for the decision when
confronted with contradictory information, thus reducing confidence and commitment. While the ES relieves the decision maker
from the burden of computational and clerical detail, the ES also
can reduce the incentive of the decision maker to model the
information. No longer must the decision maker retain
information or the model of that information. The decision maker
merely punches in the numbers without actually thinking about
what the ES is doing. Expert systems are claimed to help nonexperts make expert decisions. The model the expert uses is in
the computer program. While some expert system writers claim
educational uses of this technology, the overriding belief has
been that the expert decision-making model can be invoked by
lesser-trained colleagues (Coursey and Shangraw 1989). Here, we
find that those with less experience in personnel may formulate
the same decision using the expert's model, but they are not
likely to be very confident with the results. When asked the
reason for a decision, the decision maker has no justification,
because the model is captured in the expert system. When decision makers are confronted with this contradictory information,
this decreases their ultimate confidence and commitment to the
final decision. Instead of appearing as reduced confidence and
commitment once a decision has been made, this phenomenon
may manifest itself as resistance to change, as in instances where
decision makers would show unwillingness to trust a decision
produced by a DSS/ES by refusing to use such a system.
We suggested a theoretical framework to organize previous
work, the results of this study, and new lines of inquiry in order
to explain computer-aided decision making. The central theme of
this credibility factor framework is that decision makers cannot
always step outside of themselves and judge the quality of their
decisions when they work with an ES. Therefore, one cannot
expect a direct relationship between quality and decision confidence and commitment. Rather, decision makers use indicators of
the quality of their decisions. In some cases, these credibility
factors have a logical relationship to the model or technique that
is utilized by the ES. Some of the hypothesized intuitive indicators tested in this study include difficulty; the availability of an
expansion utility; and involvement of the decision maker, which
includes influence of personal information and domain experience.
In previous studies, difficulty was associated with reduced
confidence. These findings were not confirmed. Difficulty was
not a credibility factor that moderated the ES effect upon confidence or commitment. Heavy influence of personal information
and domain experience, however, eliminated the ES effect of
reducing confidence and commitment. Using the theoretical
construct, influence of personal information and domain experience involves the decision maker in the decision process and
thereby counteracts the black box phenomenon. Influence of personal information and domain experience help the decision maker
to become a more involved, critical user of the expert system.
Interestingly, when a decision maker reported high influence
of personal information, the quality of the decision was not
affected; presumably this influence did not translate into bias.
For subjects who were assisted by the ES, the Tau-B correlation
between personal information influence and the number of
candidates selected correctly was -.09 (p = .41). The
interpretation is that the level of personal information influence
did not affect the propensity to pick the best candidates. When
using the expert system, the decision maker can draw upon vivid
personal experience without necessarily producing a biased
result.
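This check can be reproduced on any comparable data set. The
sketch below is illustrative only: the influence ratings and
selection counts are hypothetical, and scipy's kendalltau routine,
which computes the tie-corrected Tau-B, stands in for whatever
statistical package the original analysis used.

    # Illustrative only: hypothetical data standing in for the study's
    # self-reported influence ratings and counts of correct selections.
    from scipy.stats import kendalltau

    influence_ratings = [5, 2, 4, 1, 3, 5, 2, 4, 3, 1]   # hypothetical 1-5 scale
    correct_selections = [3, 4, 2, 4, 3, 2, 3, 3, 4, 4]  # hypothetical counts

    # kendalltau computes the tie-corrected Tau-B and its p-value
    tau_b, p_value = kendalltau(influence_ratings, correct_selections)
    print(f"Tau-B = {tau_b:.2f}, p = {p_value:.2f}")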
When a decision maker reported experience in personnel
decision making, that experience also tended to moderate the ES
effect of reducing confidence and commitment; it did not affect
decision quality. The benefit of experience in working with ESs
is not improved decision quality but rather the ability to adjust
the level of confidence and commitment to control for the effects
of the ES.
If domain experience is important to confidence or
commitment, the casual or infrequent user may not exhibit the
same level of confidence as the frequent user, because casual
users lack the domain experience needed to adjust their levels of
confidence and commitment. Two interesting implications emerge
from these results.
The first implication concerns the hope that an ES/DSS can
substitute for domain experience. Some managers hope for the
day when computer programs can substitute for some areas of
human decision making; this is certainly the aspiration in expert
system development. If domain experience or involvement is
important, then the promise of a well-structured ES aiding the
casual user may be threatened, at least as far as productivity
gains from duplicating an expert's decision-making prowess are
concerned. These experimental findings suggest that while ESs
can improve decision-making results, the ES does not improve
confidence and commitment where there is no domain
experience. Further research is needed on the implications for
ESs designed for infrequent users.

The second implication concerns the role of the user's
personal involvement with the decision process. One of the
claimed benefits of the ES, especially in the public sector, is that
it will enable decision makers to apply only objective criteria
uniformly to all cases. If decision makers report that personal
information influenced their decision process, this could be
interpreted as bias. From the results obtained here, however, the
influence of personal information did not affect decision quality
(bias), but it did increase the level of involvement, confidence,
and commitment in decision makers working with an ES.
The empirical results emphasize the importance of the user
taking control, or at least believing that he or she has influenced
the decision process, when computers are used to structure that
decision. ESs may not help improve decision making unless they
have mechanisms that continually involve the user in the
decision-making process. This belief stems from the typical
finding that users prefer written and verbal information over
computerized sources (e.g., Zeffane and Cheek 1995). Builders
of ESs should stress systems that encourage decision makers to
become involved in the decision process rather than allow them
to remain passive users; this would be more akin to standard
practice in DSS work. Currently most ES development
emphasizes experts, knowledge verification, and validation
issues. ES design could instead focus on some of the variables
that proved significant in adjusting confidence and commitment
to the level appropriate to decision quality, such as personal
information or domain experience: for instance, holding the user
accountable through assignment of an identification number, or
asking the decision maker to append written comments during an
ES session, as in the sketch below.
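To make this design direction concrete, consider the following
sketch. It is hypothetical, not a description of the experimental
system: a session object records an analyst identification number
and refuses to finalize a recommendation until the user appends a
written comment.

    # Hypothetical sketch of the involvement mechanisms suggested above;
    # names and structure are illustrative, not the study's system.
    from dataclasses import dataclass, field

    @dataclass
    class ESSession:
        analyst_id: str                      # holds the user accountable
        comments: list = field(default_factory=list)

        def recommend(self, candidate: str, score: float) -> None:
            # present the expert model's suggestion to the user
            print(f"ES recommends {candidate} (score {score:.2f})")

        def accept(self, candidate: str, justification: str) -> dict:
            # require a written comment before the decision is recorded
            if not justification.strip():
                raise ValueError("A written justification is required.")
            self.comments.append(justification)
            return {"analyst": self.analyst_id,
                    "choice": candidate,
                    "justification": justification}

    session = ESSession(analyst_id="A-1043")
    session.recommend("Applicant 7", 0.86)
    decision = session.accept("Applicant 7",
                              "The score agrees with my reading of the work history.")

Forcing even a one-line comment keeps the user an active participant
in the decision rather than a passive recipient of the model's output.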
On a theoretical level, Johnson-Laird (1988) has developed
the notion of mental models to explain the differences between
what strict logic would dictate and what decision makers actually
conclude. Namely, the logic found in logic textbooks is different
from the logic used by decision makers. After conducting many
experiments, Johnson-Laird concluded that decision making is not
carried out through the application of a mental propositional
calculus to a set of statements but rather through the examination
of the correspondence of a claim to the decision maker's mental
model of the problem domain. If there is an inconsistency
between the claim and the mental model—whether that mental
model is visual, experiential, or verbal—the claim is judged to be
illogical. Like the rational model utilized in the experimental ES,
many of the normative, prescriptive models utilized in ES are
rigorous analytical models, clarifying basic assumptions and
developing the logical implications of these assumptions. Yet if
we believe the Johnson-Laird findings, decision makers would
not judge ES decision models exclusively by testing their logic;
they would supplement those judgments with their mental models.
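The contrast can be made concrete with a toy sketch, offered only
as an illustration of Johnson-Laird's distinction and not as anything
from the study: one evaluator accepts any formally valid inference,
while the other accepts a claim only when it matches the decision
maker's stored picture of the domain.

    # Toy illustration of Johnson-Laird's distinction; entirely hypothetical.
    def logically_valid(p: bool, p_implies_q: bool) -> bool:
        # propositional calculus: from P and P -> Q, conclude Q
        return p and p_implies_q

    # the decision maker's mental model of the hiring domain
    mental_model = {
        "the best candidates have budget experience": True,
        "test scores outweigh work experience": False,
    }

    def model_consistent(claim: str) -> bool:
        # the claim is judged by its correspondence with the mental
        # model, not by whether it was formally derived
        return mental_model.get(claim, False)

    claim = "test scores outweigh work experience"
    print(logically_valid(True, True))   # True: the derivation is valid
    print(model_consistent(claim))       # False: the claim is judged "illogical"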
While we would hope for a convergence of mental model
and normative model solutions, it is not clear that the methods
also should converge. Provision should be made for the decision
maker to use personal information and experience to guide the
decision process as well as to use the solutions suggested by an
ES. In fact, ES designers may benefit from investigating how
decision makers cope with the complexity of some decisions by
using these credibility factors.
The central result of this study concerned the ES effect of
reducing decision confidence and commitment. One interpretation
of this finding was that the ES becomes a black box and that
decision makers become less involved with the decision process
when they are assisted by an ES, resulting in less confidence and
commitment. Future research utilizing "process-tracing" (Todd
and Benbasat 1987) should focus on this particular interpretation
by measuring the extent to which noninvolvement occurs, under
what circumstances it occurs, and how it leads to lowered
confidence and commitment. The findings of this study, together
with the results of previous studies, indicate that across a broad
range of ES/DSS and decision tasks there is no positive
relationship between decision quality and decision confidence and
commitment. Further research should focus on tasks, unlike
personnel selection, for which the computer is especially well
equipped (e.g., computing probabilities or forecasts). Studies of
less-structured problems and of ESs designed for different roles
in the decision process are also vital.
REFERENCES
Aldag, R.J., and Power, D.J.
1986 "An Empirical Assessment of Computer-Assisted Decision Analysis." Decision Sciences 17:573-88.

Allen, T.
1977 Managing the Flow of Technology. Cambridge, Mass.: MIT Press.

Alter, S.
1977 "A Taxonomy of Decision Support Systems." Sloan Management Review 19:39-56.

Appleby, P.H.
1953 "Government Is Different." In D. Waldo, ed. Ideas and Issues in Public Administration. New York: Greenwood.

Aronson, E.
1976 The Social Animal. San Francisco: W.H. Freeman.

Bazerman, M.
1990 Managerial Decision Making. New York: Wiley.

Benbasat, I., and Nault, B.
1990 "An Evaluation of Empirical Research in Managerial Support Systems." Decision Support Systems 6:203-26.

Bozeman, B.
1986 "The Credibility of Policy Analysis: Between Method and Use." Policy Studies Journal 14:519-39.

Bozeman, B., and Bretschneider, S.I.
1986 "Public Management Information Systems: Theory and Prescriptions." Public Administration Review 46:475-87.

Bozeman, B., and Cole, E.
1982 "Scientific and Technical Information in Public Management: The Role of 'Gatekeeping' and Channel Preference." Administration and Society 13:479-93.

Bozeman, B., and Landsbergen, D.
1989 "Truth and Credibility in Sincere Policy Analysis: Alternatives to the Production of Policy-Relevant Knowledge." Evaluation Review 13:355-79.

Bozeman, B., and Shangraw, R.F., Jr.
1989 "Computers and Commitment to a Public Management Decision." Knowledge 2:42-56.

Brockner, J.; Houser, R.; Birnbaum, G.; Lloyd, K.; Deitcher, J.; Nathanson, S.; and Rubin, J.
1986 "Escalation of Commitment to an Ineffective Course of Action: The Effect of Feedback Having Negative Implications for Self-Identity." Administrative Science Quarterly 31:109-26.

Cats-Baril, W.L., and Huber, G.
1987 "Decision Support Systems for Ill-Structured Problems: An Empirical Study." Decision Sciences 18:350-72.

Christen, F.G., and Samet, M.G.
1980 Empirical Evaluation of a Decision-Analytic Aid. Final Technical Report PFTR-1066-80-1. Woodland Hills, Calif.: Perceptronics.

Conlon, E.J., and Wolf, G.
1980 "The Moderating Effects of Strategy Visibility and Involvement on Allocation Behavior: An Extension of Staw's Escalation Paradigm." Organizational Behavior and Human Performance 26:172-92.

Coursey, D.
1992 "Information Credibility and Choosing Policy Alternatives: An Experimental Test of Cognitive-Response Theory." Journal of Public Administration Research and Theory 2:315-31.

Coursey, D., and Shangraw, R.
1989 "Expert System Technology for Managerial Applications." Public Productivity Review 12:237-62.

Davis, G., and Olson, M.
1985 Management Information Systems: Conceptual Foundations, Structure, and Development. New York: McGraw-Hill.

Efroymson, S., and Phillips, D.
1987 "What Is an Expert System Anyway?" Information Center 3:29-31.

Festinger, L.
1957 A Theory of Cognitive Dissonance. Stanford, Calif.: Stanford University Press.

Fletcher, P.; Bretschneider, S.; and Marchand, D.
1992 "Managing Information Technology: Transforming County Governments in the 1990s." School of Information Studies, Syracuse University, New York.

Gallupe, R.B.; DeSanctis, G.; and Dickson, G.W.
1988 "Computer-Based Support for Group Problem-Finding: An Experimental Investigation." MIS Quarterly 12:276-96.

Gill, T.
1995 "Early Expert Systems: Where Are They Now?" MIS Quarterly 19:51-68.

Goslar, M.D.; Green, G.I.; and Hughes, T.H.
1986 "Decision Support Systems: An Empirical Assessment for Decision-Making." Decision Sciences 17:79-91.

Goul, M.; Shane, B.; and Tonge, F.M.
1986 "Using a Knowledge-Based Decision Support System in Strategic Planning Decisions: An Empirical Study." Journal of Management Information Systems 2:70-84.

Hadden, S.
1986 "Intelligent Advisory Systems for Managing and Disseminating Information." Public Administration Review 46:572-83.

Jarvenpaa, S.L.; Dickson, G.W.; and DeSanctis, G.
1985 "Methodological Issues in Experimental IS Research: Experiences and Recommendations." MIS Quarterly 9:141-56.

Johnson-Laird, P.N.
1988 The Computer and the Mind. Cambridge, Mass.: Harvard University Press.

Joyner, R., and Tunstall, K.
1970 "Computer Augmented Organization Problem Solving." Management Science 17:212-25.

Kahneman, D., and Tversky, A.
1979 "Prospect Theory: An Analysis of Decisions Under Risk." Econometrica 47:263-91.
1982 "The Psychology of Preferences." Scientific American 246:160-74.

Landsbergen, D., and Bozeman, B.
1987 "Credibility Logic and Policy Analysis: Is There Rationality without Science?" Knowledge 8:625-48.

Meehl, P.E., and Rosen, A.
1955 "Antecedent Probability and the Efficiency of Psychometric Signs, Patterns, or Cutting Scores." Psychological Bulletin 52:194-216.

O'Reilly, C.
1982 "Variations in Decision Makers' Use of Information Sources: The Impact of Quality and Accessibility of Information." Academy of Management Journal 25:756-71.

Oskamp, S.
1965 "Overconfidence in Case-Study Judgments." Journal of Consulting Psychology 29:261-65.

Power, D.; Meyeraan, S.; and Aldag, R.
1994 "Impacts of Problem Structures and Computerized Decision Aids on Decision Attitudes and Behaviors." Information and Management 26:281-94.

Quindlen, T.
1992 "IRS Plans to Start Over on Buy to Take Place of Canceled Chexs." Government Computer News, August 17:10.

Rescher, N.
1988 Rationality: A Philosophical Inquiry into the Nature and the Rationale of Reason. New York: Oxford University Press.

Sage, A.P.
1981 "Behavioral and Organizational Considerations in the Design of Information Systems and Processes for Planning and Decision Support." IEEE Transactions on Systems, Man, and Cybernetics SMC-11:640-78.

Schroeder, R.G., and Benbasat, I.
1975 "An Experimental Evaluation of the Relationship of Uncertainty in the Environment to Information Used by Decision Makers." Decision Sciences 4:555-67.

Schwenk, C.R.
1986 "Information, Cognitive Biases, and Commitment to a Course of Action." Academy of Management Review 11:298-310.

Senn, J., and Dickson, G.W.
1974 "Information System Structure and Purchasing Decision Effectiveness." Journal of Purchasing and Materials 10:52-64.

Sharda, R.; Barr, S.H.; and McDonnell, J.C.
1988 "Decision Support System Effectiveness: A Review and an Empirical Test." Management Science 34:139-59.

Simon, H.A.
1977 The New Science of Management Decision Making. New York: Harper and Row.

Staw, B.M.
1976 "Knee-Deep in the Big Muddy: A Study of Escalating Commitment to a Chosen Course of Action." Organizational Behavior and Human Performance 16:27-44.
1981 "The Escalation of Commitment to a Course of Action." Academy of Management Review 6:577-87.
1984 "Counterforces to Change." In P. Goodman, ed. Change in Organizations. San Francisco: Jossey-Bass.

Staw, B.M., and Fox, J.
1977 "Escalation: The Determinants of Commitment to a Chosen Course of Action." Human Relations 30:431-50.

Staw, B.M., and Ross, J.
1978 "Commitment to a Policy Decision." Administrative Science Quarterly 23:40-64.

Steeb, R., and Johnston, S.C.
1981 "A Computer-Based Interactive System for Group Decision Making." IEEE Transactions on Systems, Man, and Cybernetics SMC-11:544-52.

Todd, R., and Benbasat, I.
1987 "Process Tracing Methods in Decision Support Systems Research: Exploring the Black Box." MIS Quarterly 11:493-512.

Turoff, M., and Hiltz, S.R.
1982 "Computer Support for Group Versus Individual Decisions." IEEE Transactions on Communications COM-30:82-90.

Tversky, A., and Kahneman, D.
1974 "Judgment Under Uncertainty: Heuristics and Biases." Science 185:1124-31.

Waterman, D.A.
1985 A Guide to Expert Systems. Reading, Mass.: Addison-Wesley.

Weiss, J.A.
1982 "Coping with Complexity: An Experimental Study of Public Policy." Journal of Policy Analysis and Management 2:66-87.

Whyte, G.
1989 "Groupthink Reconsidered." Academy of Management Review 14:40-56.

Wong, B., and Monaco, J.
1995 "Expert System Applications in Business: A Review and Analysis of the Literature." Information and Management 29:141-52.

Ye, L., and Johnson, P.
1995 "The Impact of Explanation Facilities on User Acceptance of Expert System Advice." MIS Quarterly 19:157-73.

Yoon, Y.; Guimaraes, T.; and O'Neal, Q.
1995 "Exploring the Factors Associated with Expert System Success." MIS Quarterly 19:83-107.

Zeffane, R., and Cheek, B.
1995 "The Use of Differential Information Channels in an Organizational Context." Management Research News 17:3/4:1.