
Risk Analysis, Vol. 33, No. 10, 2013
DOI: 10.1111/risa.12071
On the Treatment of Uncertainty and Variability in Making Decisions About Risk

Vicki M. Bier1,∗ and Shi-Woei Lin2

1 Department of Industrial and Systems Engineering, University of Wisconsin–Madison, Madison, WI, USA.
2 Department of Industrial Management, National Taiwan University of Science and Technology, Taipei, Taiwan.
∗ Address correspondence to Vicki M. Bier, 3270A Mechanical Engineering Bldg., 1513 University Ave., Madison, WI 53706, USA; tel: 608/262-2064; fax: 608/262-8454; [email protected].
Much attention has been paid to the treatment of dependence and to the characterization
of uncertainty and variability (including the issue of dependence among inputs) in performing risk assessments to avoid misleading results. However, with relatively little progress in
communicating about the effects and implications of dependence, the effort involved in performing relatively sophisticated risk analyses (e.g., two-dimensional Monte Carlo analyses
that separate variability from uncertainty) may be largely wasted, if the implications of those
analyses are not clearly understood by decisionmakers. This article emphasizes that epistemic
uncertainty can introduce dependence among related risks (e.g., risks to different individuals, or at different facilities), and illustrates the potential importance of such dependence in
the context of two important types of decisions—evaluations of risk acceptability for a single technology, and comparisons of the risks for two or more technologies. We also present
some preliminary ideas on how to communicate the effects of dependence to decisionmakers
in a clear and easily comprehensible manner, and suggest future research directions in this
area.
KEY WORDS: Dependence; uncertainty; variability
1. INTRODUCTION

Much attention has been paid to the treatment of dependence in performing risk assessments. For instance, causal dependencies (e.g., common-cause failures, cascade failures, and intersystem dependencies) have been taken into account in probabilistic risk assessment (PRA) beginning with the Reactor Safety Study,(1) although the techniques for doing so have grown more sophisticated over time. In addition, beginning in the early 1980s, attention began to be paid to the issue of probabilistic dependence between the failure rates(2) or seismic fragilities(3) of similar components, and the impact of such dependence on risk estimates. In the area of environmental risk assessment, Cullen and Frey(4) provide guidance about the treatment of uncertainty and variability (including the issue of dependence among inputs). By now, it has been clearly demonstrated that failure to take either causal or probabilistic dependence into account in PRA can lead to misleading results(5) —typically, underestimates of the true risk (although overestimates are also possible, depending on the structure of the risk model and the nature of the dependence).

One of the key reasons for distinguishing between uncertainty and variability is that they have different implications for dependence, which in turn has importance for decision making. However, the effects of dependence on decision making are still not always well understood, even by risk analysts, and especially by mathematically unsophisticated decisionmakers. As a result, the effort involved in performing relatively sophisticated risk analyses
(e.g., two-dimensional Monte Carlo analyses that
separate variability from uncertainty) may be largely
wasted, if the implications of those analyses are not
clearly understood by decisionmakers. The objectives of this article are (1) to illustrate the potential
importance of dependence in making decisions about
risks and (2) to present some preliminary ideas on
how to communicate the effects of dependence to
decisionmakers in a clear and easily comprehensible
manner, and suggest future research directions in
this area.
For the sake of simplicity, we will discuss only
two types of decisions here, both of which arise frequently in practice. The first involves evaluating the
acceptability of the risk posed by a single technology
that is in widespread use; for example, at multiple
facilities, or by multiple people. The second type of
decision involves comparisons of two or more alternative technologies; for example, two alternative designs, or a base case and various possible risk reduction strategies.
2. EVALUATIONS OF RISK ACCEPTABILITY FOR A SINGLE TECHNOLOGY

2.1. Uncertainty and Variability

A state-of-the-art risk assessment today typically involves some statement about the magnitude of the uncertainty that exists about the final result of the assessment. For example, in a risk assessment of a nuclear power plant, the results would typically include a range (i.e., a probability distribution) of possible core melt frequencies, rather than a single value. Similarly, in an assessment of the risks associated with a toxic or carcinogenic chemical, the results would typically include a range for the number of people affected, rather than a point estimate. However, such assessments sometimes do not specify whether the uncertainty is associated primarily with population variability(6) or, instead, is due to a general lack of knowledge about the technology being evaluated (although this problem is much less common than in the past).

In particular, Kaplan(6) distinguishes between “state-of-knowledge” uncertainty and “population variability” (sometimes referred to simply as “uncertainty” and “variability”). (Similarly, Paté-Cornell(7) refers to “state-of-knowledge” uncertainty and variability as epistemic uncertainty and aleatory uncertainty, respectively.) Using Kaplan’s choice of terminology, state-of-knowledge uncertainty is generally taken to include those uncertainties that are (at least in principle) reducible by further research. By contrast, variability reflects differences among the members of a heterogeneous population. Thus, variability cannot be reduced by research alone, but only by actually changing the circumstances for one or more members of that population (e.g., by implementing risk reduction measures at those facilities with the highest risk levels).

Note, however, that population variability can give rise to state-of-knowledge uncertainty about the risk for a particular member of the population; for example, if one is unsure how the risk at one facility compares to the average risk level for a population of similar facilities. For instance, there may be variability among the risks posed by different nuclear power plants, state-of-knowledge uncertainty about the average risk in a population of similar power plants, and also state-of-knowledge uncertainty about the risk at any particular power plant in that population.

2.2. Value of Information and Implications of Uncertainty and Variability for Decision Making
State-of-knowledge uncertainty (i.e., assessment
uncertainty) and population variability can have
different implications for decision making. This is
well explained by the National Research Council:(8)
“uncertainty forces decisionmakers to judge how
probable it is that risks will be overestimated or underestimated for every member of the exposed population, whereas variability forces them to cope with
the certainty that different individuals will be subjected to risks both above and below any reference
point one chooses” (emphasis in original). Morgan
and Henrion(9) and Cullen and Frey(4) both provide
useful discussions of the distinctions between state-of-knowledge uncertainty and population variability. For example, Morgan and Henrion(9) (pp. 62–63)
note:
A common mistake is failure to distinguish between
variability due to sampling from a frequency distribution and empirical uncertainty that arises from incomplete scientific or technical knowledge . . . the scientific
uncertainty can be reduced only by further research . . .
But this sort of uncertainty analysis is impossible unless
the two kinds of uncertainty are clearly distinguished.
Both the definition of state-of-knowledge uncertainty and its application in decision making are
context-dependent, and depend in particular on the
nature of the decision(s) to be made in a particular
situation (e.g., whether the decision will apply to an
entire population, or only to an individual within that
population). For example, consider a decision about
whether to approve a proposed new regulation that
would be applied to all plants in a particular population (i.e., “one size fits all”). In this context, state-of-knowledge uncertainty might be defined as uncertainty about the average risk level of plants in the population, which would be relevant in determining whether to accept the proposed regulation, reject it, or postpone the decision. In particular, if the state-of-knowledge uncertainty were large enough, it might
well be worthwhile to invest effort in reducing this
uncertainty before making a final decision.
By contrast, differences among plants in the regulated population would reflect population variability. The existence of substantial population variability would imply that the new regulation might yield
greater risk reduction at some plants than at others,
but would not affect estimates of the overall benefits of the proposed regulation. In addition, as noted
above, the effects of population variability could
not be reduced merely by further research, although
other types of changes (e.g., a decision to enforce the
new regulation only at high-risk plants) could reduce
the effects of variability.
The existence of population variability might
also create state-of-knowledge uncertainty about the
risk level at a particular plant. For example, a facility
manager might know the average risk level for plants
of similar design and vintage, but not how the risk
level at his or her plant compares to the industry average. This state-of-knowledge uncertainty would be
relevant, for example, in deciding whether to install
a particular safety improvement at the plant in question. The hypothetical facility manager in this case
might well want to gather more information about
the level of risk at his or her plant in particular before making a decision about the desirability of the
proposed safety improvement.
Whether it is worthwhile to gather additional information in the examples above would presumably
depend not only on the extent of state-of-knowledge
uncertainty, but also on the importance of the decision, the extent to which new information could reduce the current level of uncertainty, and the information’s cost. Moreover, some types of uncertainty
may be essentially irreducible by research today, but
could eventually become amenable to further research using technologies that have yet to be developed. Thus, today we typically consider uncertainty
about the failure rate of a population of pumps to be
reducible by conducting additional pump tests, but
we model the variable failure times of pumps with the
same failure rate as reflecting inherent “randomness”
(which is assumed not to be reducible by research).
However, better methods of nondestructive evaluation could eventually make it possible to substantially
reduce the uncertainty about the time of failure for a
particular pump. Such subtleties are not captured by
the simple dichotomous distinction between state-of-knowledge uncertainty and population variability.
Brown and Ulvila(10) help to clarify these issues
by distinguishing between “outcome uncertainty”
(“what might actually happen and with what probability,” which reflects both state-of-knowledge uncertainty and population variability) and “assessment
uncertainty” (i.e., state-of-knowledge uncertainty—
how much the results of the analysis might change
with additional information). The authors note that
the distinction between assessment uncertainty and
outcome uncertainty may not be relevant to decisionmakers who must reach a final decision immediately, but is relevant to decisionmakers who have the
option of collecting more information before deciding. In discussing assessment uncertainty, they also
distinguish between “unlimited [or perfect] information” (i.e., information that would allow one to ascertain the “true risk”) versus amounts of new information that might result from a realistic research
effort (“information or analysis that might actually
become available, say, by waiting a few years”). Of
course, the definition of what constitutes a “realistic”
research effort will again depend on the importance
of the problem, and on how long a decision can realistically be deferred.
Cox et al.(11) argue that “characterization of
uncertainties about probabilities often carries zero
value of information and accomplishes nothing to
improve risk-management decisions.” More specifically, they claim that if the final decision (or action) must be based on the information available
right now, then characterizing the uncertainty about
the probabilities of various outcomes is of little use.
However, we believe strongly that they overstate
their case. In particular, their argument does not hold in the important case where the outcome measure of interest (e.g., expected time until failure, or the probability of two consecutive failures) is nonlinear in the uncertain probability, because in that case the expected value of the outcome measure will depend on the entire distribution of the uncertain probability, not just on its expected value.(12)
(Of course, an appropriate characterization of
uncertainty could also help decisionmakers understand the strength of the guidance available to support their decisions, but this seems less important in
cases where the uncertainty normatively has no impact on the selection of the optimal decision.)
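To make the nonlinearity point concrete, the following minimal sketch (in Python, with invented numbers; the two-point distribution for the failure probability p is purely illustrative and not taken from the article or Ref. 12) compares two nonlinear outcome measures against the values obtained by using only the mean of p:

```python
# Illustrative only: a two-point distribution for an uncertain failure
# probability p (hypothetical values).
p_values = [0.1, 0.3]          # equally likely values of p
weights = [0.5, 0.5]

mean_p = sum(w * p for w, p in zip(weights, p_values))           # E[p] = 0.2

# Probability of two consecutive (conditionally independent) failures:
two_failures = sum(w * p**2 for w, p in zip(weights, p_values))  # E[p^2] = 0.05
naive_two_failures = mean_p**2                                   # (E[p])^2 = 0.04

# Expected time until failure (geometric distribution with mean 1/p):
mean_time = sum(w / p for w, p in zip(weights, p_values))        # E[1/p] ~ 6.67
naive_time = 1 / mean_p                                          # 1/E[p] = 5.0

print(f"E[p^2] = {two_failures:.3f} vs (E[p])^2 = {naive_two_failures:.3f}")
print(f"E[1/p] = {mean_time:.2f} vs 1/E[p] = {naive_time:.2f}")
```

Because both measures are nonlinear in p, the full distribution of p matters to the decision even if no further information can be collected before acting.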
Thus, distinctions such as those above (between
state-of-knowledge uncertainty and population
variability, or unlimited versus realistic amounts
of information) are perhaps best viewed not as
fundamental differences, but rather as heuristics to
help analysts approximate the results that would
be obtained from a complete value of information
analysis.(9,13,14) A value of information analysis takes
into account not only whether a particular uncertainty is in principle reducible by further research,
but also factors such as the cost of that research and
its likelihood of success in reducing the uncertainty.
Value of information analysis can also be supported
by decomposing the overall uncertainty into its dominant contributors, as is done, for example, in Ref. 15.
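As an illustration of the logic of such an analysis, the sketch below computes the expected value of perfect information (EVPI) for a stylized version of the facility manager's decision discussed above. All costs, probabilities, and the loss-halving effect of the improvement are hypothetical, chosen only to show the calculation:

```python
# Hypothetical decision: install a safety improvement (cost 2, halves losses)
# at a plant whose annual expected loss is uncertain: 10 ("high risk",
# probability 0.3) or 1 ("low risk", probability 0.7). Units are arbitrary.
scenarios = [(0.3, 10.0), (0.7, 1.0)]   # (probability, annual expected loss)
COST, REDUCTION = 2.0, 0.5              # improvement cost and loss multiplier

def expected_cost(install, scenarios):
    """Expected total cost of one action, averaged over the uncertainty."""
    return sum(p * (COST + REDUCTION * loss if install else loss)
               for p, loss in scenarios)

# Decision under current uncertainty: pick the cheaper action now.
best_now = min(expected_cost(True, scenarios), expected_cost(False, scenarios))

# With perfect information, the action can be tailored to each scenario.
best_with_info = sum(p * min(COST + REDUCTION * loss, loss)
                     for p, loss in scenarios)

evpi = best_now - best_with_info
print(f"Expected cost acting now:        {best_now:.2f}")   # 3.70
print(f"Expected cost with perfect info: {best_with_info:.2f}")  # 2.80
print(f"Expected value of perfect info:  {evpi:.2f}")       # 0.90
```

In this toy problem, research that costs less than 0.9 and is guaranteed to resolve the uncertainty would be worthwhile; imperfect or partially informative research would require the more refined treatment (cost, likelihood of success) described above.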
2.3. Implications of Dependence Induced
by Uncertainty
Bearing these caveats in mind, let us consider in
more detail the effects of different types of uncertainty on typical risk acceptability decisions. In particular, we will argue here that it can be important to
distinguish between population variability and state-of-knowledge uncertainty (which is assumed here to
affect all members of a population equally) even
when decisionmakers do not have the luxury of collecting more information before deciding. The reason for this is that unlike population variability, such
state-of-knowledge uncertainty gives rise to dependence between members of the exposed population,
which can have important effects on the range of possible outcomes.
To show the dependence between members of the exposed population, let us consider the risk $y_i$ of member $i$ drawn at random from the population. Using a simple additive model for illustrative purposes, $y_i$ could be represented as:

$$y_i = \mu + A + e_i, \qquad (1)$$

where the constant $\mu$ is the grand mean of the risk in question for the population as a whole, the random variable $A$ (with mean zero and variance $\sigma_A^2$) represents state-of-knowledge uncertainty, and $e_i$ is the difference between the risk experienced by individual $i$ and the (uncertain) population risk $\mu + A$ (where the $e_i$ are independent random variables with mean zero and variance $\sigma^2$). Note that even though the individual differences $e_i$ are independent, the common element $A$ representing state-of-knowledge uncertainty leads the risks faced by different individuals in the population to be dependent, with covariance given by:

$$\mathrm{Cov}(y_1, y_2) = \mathrm{Cov}(A + e_1, A + e_2) = \sigma_A^2. \qquad (2)$$
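A short Monte Carlo check (a sketch with arbitrary values for $\mu$, $\sigma_A$, and $\sigma$; nothing here is calibrated to a real risk problem) confirms that the shared term $A$ induces exactly this covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma_A, sigma_e = 1.0, 0.5, 0.2   # arbitrary illustrative values
n = 1_000_000                          # number of Monte Carlo samples

A = rng.normal(0.0, sigma_A, n)             # shared state-of-knowledge term
y1 = mu + A + rng.normal(0.0, sigma_e, n)   # risk to individual 1
y2 = mu + A + rng.normal(0.0, sigma_e, n)   # risk to individual 2

print("Cov(y1, y2) ~", np.cov(y1, y2)[0, 1])        # ~ sigma_A**2 = 0.25
print("Corr(y1, y2) ~", np.corrcoef(y1, y2)[0, 1])  # ~ 0.25/(0.25+0.04) ~ 0.86
```

The individual terms $e_1$ and $e_2$ are sampled independently, yet the two risks are strongly correlated, with correlation $\sigma_A^2 / (\sigma_A^2 + \sigma^2)$.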
To illustrate the fact that uncertainty and variability have different effects on the distribution of possible outcomes, first consider the case where the total outcome uncertainty $\mathrm{Var}(y_i)$ stems largely from state-of-knowledge uncertainty (i.e., where $\sigma^2$ is small). In this case, regardless of whether the uncertainty can realistically be reduced by further research, the risk levels borne by different members of the exposed population will be highly correlated, since the covariance $\sigma_A^2$ will be a large fraction of the variance $\sigma_A^2 + \sigma^2$. Moreover, this can be important in practice. For example, if a hypothetical PRA concludes that the risk of core melt is between $10^{-5}$ and $10^{-3}$ per year for a particular type of nuclear power plant, but the same (uncertain) risk applies to all
plant, but the same (uncertain) risk applies to all
similar plants, then a high risk at one plant will most
likely not be canceled out by a low risk elsewhere,
and the maximum possible societal consequences
could be quite severe. Thus, for example, the chance
of having multiple core melt accidents over a period
of a few years will be greater in the correlated case
than if the core melt frequencies at different plants
were independent. If society is risk averse in the total
number of accidents, then the technology in question
might be much more undesirable on a societal basis
than would be indicated simply by multiplying the
mean core melt frequency per plant-year times the
number of plants (a procedure that would be appropriate if the core melt frequencies at the different
plants were independent). (Note also that such risk
aversion is likely to hold in practice. For example,
if public opinion is more forgiving of a first accident
than of subsequent events, this would essentially
translate into risk aversion over the number of
accidents.)
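The effect on the chance of multiple accidents can be shown with a small simulation (a sketch using hypothetical numbers: 100 plants over 10 years, with a core melt frequency of either $10^{-5}$ or $10^{-3}$ per plant-year, equally likely, so the mean frequency is identical in both cases). It exploits the fact that a sum of independent Poisson counts is itself Poisson with the summed rate:

```python
import numpy as np

rng = np.random.default_rng(2)
n_plants, years, trials = 100, 10, 1_000_000
f_low, f_high = 1e-5, 1e-3      # equally likely core melt frequencies

# Correlated case: one draw of the uncertain frequency applies to every plant.
f_shared = rng.choice([f_low, f_high], size=trials)
correlated = rng.poisson(n_plants * years * f_shared)

# Independent case: each plant draws its own frequency (same mean per plant),
# so the industry-wide count is Poisson with the summed rate.
n_high = rng.binomial(n_plants, 0.5, size=trials)
total_rate = years * (n_high * f_high + (n_plants - n_high) * f_low)
independent = rng.poisson(total_rate)

print("Mean accidents, correlated:  ", correlated.mean())    # ~0.505
print("Mean accidents, independent: ", independent.mean())   # ~0.505
print("P(2+ accidents), correlated: ", np.mean(correlated >= 2))   # ~0.13
print("P(2+ accidents), independent:", np.mean(independent >= 2))  # ~0.09
```

The two cases have the same expected number of accidents, but the correlated case concentrates probability on multi-accident outcomes, which is exactly what matters to a decisionmaker who is risk averse in the total number of accidents.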
Similarly, the back-fit costs resulting from an accident will tend to be greater in the correlated case
than in the case of independence, since the increased
risk estimates resulting from the accident will apply to a larger number of facilities. To see this, consider the different impacts of an accident caused by
a generic industry-wide design flaw (assumed to be
correlated across all plants in the population) versus
an accident caused by a site-specific feature such as
soil subsidence or flooding. An accident caused by
a generic design flaw is likely to result in increased
risk estimates (and hence required back-fits or safety
improvements) at all other facilities using the same
design, leading to a large total societal cost. By contrast, if it can be shown that the cause of an accident
was truly site-specific (and hence unlikely to occur at
other plants), safety improvements may be needed at
only a single plant, for a much lower total cost. Thus,
if one wishes to ensure that the total back-fit costs for
a particular technology will not exceed some maximum sustainable level, it will be especially important
to prevent even a single accident if risks at different
facilities are strongly correlated because doing so will
prevent not only that one accident, but also the need
for costly back-fits at other facilities affected by similar risks.
Similarly, consider the consequences of the U.S.
housing bubble in 2008. Because different subprime
mortgage borrowers had highly correlated probabilities of default, the consequences of the collapse of the
bubble were much greater than just the expected financial losses per loan times the number of loans.(16)
Much the same type of reasoning applies to
the assessment of health effect risks; for example,
risks from exposure to toxic chemicals. Here, the
question is whether the assessed uncertainties represent primarily variability in susceptibility among
individuals in the exposed population or, instead,
are due primarily to a general lack of knowledge
about the overall level of risk posed by the substance
being evaluated. If the uncertainties are due largely
to a general lack of knowledge, then the risk levels
experienced by different members of the exposed
population will tend to be highly correlated, and
all individuals in the population can be expected
to experience similar levels of risk. As before, this
means that under the reasonable assumption of
risk aversion with respect to the number of health
effects, the substance in question will be much less
desirable in this case than would be suggested merely
by the expected total societal risk. This is because
of the large number of health effects that would
be experienced if the risk of the substance turned
out to lie toward the high end of its assessed range.
Note also, as pointed out by one reviewer, that some
highly uncertain low-probability, high-consequence
risks (such as the possible risks of electromagnetic
fields, or the risk of a “nuclear winter” after the use
of nuclear weapons) are often dismissed altogether,
rather than weighting their large potential consequences by a small probability that the hazard exists.
This runs the risk (admittedly perhaps a small one)
that the actions adopted by decisionmakers could
cause truly severe social consequences.
The above examples differ from a situation with
high population variability, in which high risks to
some individuals (e.g., people with respiratory problems) will tend to occur in tandem with lower risks
to less vulnerable individuals. In this latter case, the
magnitude of the total societal risk would be well
represented by the risk to an “average” individual,
multiplied by the size of the exposed population.
Variability between individuals might well create significant concerns about equity, but will tend to reduce the chances of unexpectedly large total societal
impacts (e.g., large numbers of health effects). Thus,
to pose an extreme example, society might be more
willing to accept use of a new chemical that would
cause health problems for 1% of the U.S. population (i.e., roughly 3,000,000 people) than a chemical
that had a 1% chance of causing health problems for
the entire population of 300,000,000 people (due to
risk aversion with respect to large numbers of health
effects(17,18) ). (However, equity concerns could counteract or even outweigh the common tendency to risk
aversion.)(19)
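For readers who want to see how risk aversion drives this preference, here is a minimal numerical sketch. The convex disutility function $n^{1.5}$ is an arbitrary illustrative choice (not one endorsed by the article); any convex disutility yields the same qualitative conclusion:

```python
# Two hypothetical chemicals with the same expected number of health effects:
# one harms 1% of the population for certain; the other harms everyone
# with probability 1%. Disutility is convex (risk averse in total effects).
POPULATION = 300_000_000

def disutility(n_affected):
    # Arbitrary convex disutility: marginal harm grows with the total toll.
    return n_affected ** 1.5

certain_harm = disutility(0.01 * POPULATION)                  # 3,000,000 for sure
gamble = 0.01 * disutility(POPULATION) + 0.99 * disutility(0)

print(f"Expected effects (both cases): {0.01 * POPULATION:,.0f}")
print(f"Expected disutility, certain 1% harmed:    {certain_harm:.3e}")  # ~5.2e9
print(f"Expected disutility, 1% chance all harmed: {gamble:.3e}")        # ~5.2e10
```

The expected numbers of health effects are identical, but under the convex disutility the low-probability, population-wide outcome is roughly ten times worse; a risk-neutral decisionmaker would be indifferent between the two.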
Of course, most risk assessments will yield results
that reflect a combination of population variability
and state-of-knowledge uncertainty. For instance, a
PRA of a nuclear power plant will typically include
some highly plant-specific features, but also some uncertainties that might pertain to all plants of similar
design. Similarly, health risk assessment must deal
with variability among individuals, and also with uncertainty about the overall risk levels posed by particular substances.
2.4. Two-Dimensional Monte Carlo Simulation for Characterizing Uncertainty and Variability
A recent report by the National Research Council(20) also points out the importance of distinguishing uncertainty from variability to support risk-management decision making. In particular, they recommend that risk assessments “should characterize
and communicate uncertainty and variability in all
key computational steps of risk assessment,” and
that “the level of detail used for uncertainty analysis and variability assessment should be an explicit
part of the problem formulation and planning and
scoping.”
Two-dimensional (or second-order) Monte Carlo simulation was developed to address the need for better characterization of uncertainty and variability.
Unlike other methods for uncertainty analysis (such
as one-dimensional Monte Carlo analysis,(21) probability bounds or probability boxes,(22) or even just
sensitivity analysis),(23) in two-dimensional Monte
Carlo simulation, the “outer” simulation is used to
obtain sample values of the epistemically uncertain
parameters, which are then used to specify the
distribution functions from which the values of the
variable quantities in the “inner” simulation are
sampled. As a result, the method is uniquely suited
to quantify the variability and uncertainty in risk
analyses. Cullen and Frey(4) provide straightforward
guidance about the use of two-dimensional Monte
Carlo simulation and its utility in characterizing
uncertainty and variability. Potential benefits and
sample applications of two-dimensional Monte Carlo
simulation are also reviewed by Zach and Bier.(24)
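As a concrete (and deliberately simplified) sketch of the two-loop structure, the fragment below nests an inner variability loop inside an outer uncertainty loop for a generic exposure model. The lognormal distributions and all parameter values are placeholders, not taken from any of the studies cited here:

```python
import numpy as np

rng = np.random.default_rng(3)
N_OUTER, N_INNER = 500, 2_000   # uncertainty samples x variability samples

results = np.empty((N_OUTER, N_INNER))
for i in range(N_OUTER):
    # Outer loop: sample the epistemically uncertain parameters, here the
    # (uncertain) median and spread of inter-individual exposure variability.
    median = rng.lognormal(mean=0.0, sigma=0.3)
    gsd = rng.uniform(1.5, 2.5)   # geometric standard deviation

    # Inner loop: sample variability across individuals, given those parameters.
    results[i] = rng.lognormal(np.log(median), np.log(gsd), N_INNER)

# Each row is a variability distribution conditional on one state of knowledge;
# spread across rows expresses uncertainty about each variability percentile.
p95_per_row = np.percentile(results, 95, axis=1)
print("95th-percentile exposure, median estimate:", np.median(p95_per_row))
print("90% uncertainty interval:", np.percentile(p95_per_row, [5, 95]))
```

Keeping the two loops separate is what allows statements of the form “we are 90% confident that the 95th-percentile individual exposure lies in this interval,” which a one-dimensional analysis cannot support.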
For example, in an assessment of mycotoxin
exposure for preschool children consuming apple
juice in Belgium,(25) variability (in particular, interindividual differences in juice consumption) was
found to dominate scientific uncertainty. Therefore,
the authors suggest that risk reduction strategies
specifically targeting the variability among individuals (e.g., consumption advisories) would be more effective than reductions in the allowable levels of contamination in juice.
By contrast, Cummins et al.(26) develop a model
to predict Escherichia coli O157:H7 contamination of
beef trimmings in the food supply chain, and find that
uncertainty has a bigger effect on the predictions of
their model than variability. Therefore, they suggest
that further experiments are needed to reduce uncertainty about factors such as microbial test sensitivity
and microbial transfer mechanisms in food processing before determining which risk reduction actions
would be justified.
However, in many real-world applications, only
a few sources of epistemic uncertainty (e.g., parameter uncertainty) are considered in two-dimensional
Monte Carlo simulations, with other sources of uncertainty (especially model uncertainty) often being
ignored. The results obtained from such analyses
might therefore underestimate the true uncertainty,
and convey misleading information regarding the accuracy of the output measures. For example, Linkov
and Burmistrov(27) investigate radioactive deposition
on fruit growing in the vicinity of nuclear facilities.
They compare six deposition models, and find as
much as seven orders of magnitude difference in predicted radioactive concentration. Even though the
range of model predictions was substantially reduced
after several rounds of discussion among the modelers, this study still confirms the need to address
model uncertainty in quantifying risk. Along these
lines, Dubus et al.(28) refer to the challenge of properly characterizing uncertainty as the “uncertainty
iceberg,” and caution against “the tendency to model
only those probabilistic aspects that we think we
know how to analyze.”
2.5. Communicating Risk Analysis Results
Even when methods such as two-dimensional
Monte Carlo analysis are effectively used to
quantify uncertainty and variability, this can still
create challenges in communicating risk analysis
results to decisionmakers. On the one hand, a one-dimensional uncertainty analysis that produces only
a single overall probability distribution may obscure
many important distinctions, such as those discussed
above. However, many decisionmakers are likely to
be perplexed by the results of two-dimensional analyses, and statements such as “sixty-five percent of the
total outcome uncertainty for this chemical is due to
population variability among the exposed individuals” may not help to clarify the situation.
Brown and Ulvila(10) suggest several possible
graphical representations for distinguishing between
outcome uncertainty and assessment uncertainty.
However, they do not provide empirical results about
the effectiveness of these presentation formats, so it
is difficult to determine whether the suggested formats are in fact an improvement over other methods
of displaying uncertainties. Thompson(29) points out
the infeasibility of using a fixed criterion such as the
99th percentile, emphasizes the importance of getting past “the legacy of the point estimate approach,”
and argues that risk communication can be improved
by better characterizing variability and uncertainty.
However, she concludes that “we have a long way to go in developing effective ways to present the results of PRAs and sensitivity analyses to risk managers and to the public.”
At present, perhaps the best approach is simply
to discuss with the decisionmaker any important
sources of nonlinearity in his or her utility function;
for example, risk aversion with respect to large
numbers of health effects, or an aversion to multiple
accidents. Risk assessment results can then be presented in terms of the most relevant attributes, rather
than relying on the decisionmaker to subjectively
translate the results from one assessment endpoint to
another (a process that may be prone to errors or biases). For example, if decisionmakers are risk averse
with respect to multiple fatalities, then a probability
distribution for the total number of fatalities that
could result from a particular hazardous substance is
likely to be more meaningful than a probability distribution for the level of risk to a randomly selected
individual. Similarly, value of information analysis
could be used to explicitly quantify the benefits of
collecting additional information in cases where that
option is being seriously considered, rather than
expecting decisionmakers to assess the desirability
of further research based solely on the breadth of
the distribution for the overall risk.
According to this view, the risk analyst should
aim to provide as much assistance as possible to
the decisionmaker, and reduce the amount of judgment needed in interpreting the results of the analysis. This contrasts with the usual practice in many
areas of risk analysis, where conventional figures of
merit (e.g., annual core melt frequency) are typically
used to describe the risks of hazardous technologies,
in some cases without much regard for the specific
needs of decisionmakers in the particular situation at
hand.
Fig. 1. Comparison of the risks of two alternative designs.
3. COMPARISONS OF THE RISKS FROM TWO OR MORE TECHNOLOGIES
Comparisons of the risks posed by two or more
options arise quite frequently in practice. In some
such comparisons, the technologies being compared
are so different that their risks are unlikely to be
correlated. For example, although the risks from nuclear power plants and coal-burning power plants
may both be quite uncertain, these uncertainties arise
from different sources, and are likely to be (at least
approximately) independent; for example, safety system reliabilities at nuclear power plants are unlikely
to be correlated with the health effects of sulfur dioxide emissions from coal plants. In this case, if risk can
be quantified using a single metric (such as fatalities),
then simply subtracting the expected risks of the two
technologies can adequately describe their difference
in risk, at least under the assumption that they are independent.
In other cases, however, the designs or technologies being compared may be quite similar, with only
incremental differences between them. Examples of
this include assessing the risk reduction achieved by
installing more reliable equipment, adding an extra
safety system, or eliminating one particular accident
scenario by correcting a design flaw. In this case,
simple risk comparisons such as the one shown in
Fig. 1 are subject to possible misinterpretation. For
instance, the comparison shown in Fig. 1 suggests
that Design B may not necessarily be better than Design A, since point B1 represents a higher risk level than point A0. In this case, simply subtracting the two
risk levels under the assumption of independence
would give a result similar to the broader of the two
distributions shown in Fig. 2. According to that distribution, there would appear to be a significant chance
that changing from Design A to Design B could actually lead to a risk increase (as represented by the left-hand tail of the curve extending below zero), rather than the anticipated decrease.

Fig. 2. Difference between the risk levels of Designs A and B.
However, we may actually know virtually for certain that Design B represents a reduction in risk.
For example, Design A may represent the risk posed
by a particular plant with a two-train auxiliary feedwater (AFW) system, while Design B represents the
risk from the same plant with a more reliable (and
more heavily redundant) three-train AFW system. In
this case, much of the uncertainty reflected in Fig. 1
is likely to stem from factors that are highly correlated between the two designs; for example, uncertainty about initiating event frequencies, about the
response of safety systems other than the AFW system, or about containment response to a core melt.
The risks of Designs A and B in this case would
clearly not be independent, and the narrower distribution in Fig. 2 might therefore be a better description for the difference in risk between the two technologies. Thus, while the expected value of A–B does
not depend on whether these two designs are independent, the distribution of A–B does. (This is analogous to the idea of paired comparisons in statistics,
where greater statistical power can be achieved if the
dependence between two sets of sample values is recognized, rather than treating them as independent.)
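A small simulation makes the point visible (a sketch: the factor-of-three risk reduction and the lognormal uncertainty are invented for illustration). The marginal distributions of the two designs are identical in both cases, but the distribution of the difference is not:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Shared epistemic uncertainty about the baseline risk (Design A), and a
# hypothetical guaranteed factor-of-three reduction for Design B.
risk_A = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=n)
risk_B_correlated = risk_A / 3.0            # same uncertainty draw: B = A/3

# Treating the designs as independent resamples the same marginal for B.
risk_B_independent = rng.lognormal(mean=np.log(1e-4 / 3.0), sigma=1.0, size=n)

diff_corr = risk_A - risk_B_correlated
diff_indep = risk_A - risk_B_independent

print("P(change increases risk), correlated: ", np.mean(diff_corr < 0))   # 0.0
print("P(change increases risk), independent:", np.mean(diff_indep < 0))  # ~0.22
print("Mean risk reduction, correlated:  ", diff_corr.mean())   # equal means
print("Mean risk reduction, independent: ", diff_indep.mean())
```

Under the (incorrect) independence assumption, the change appears to carry roughly a one-in-five chance of increasing risk, even though by construction Design B is always better; the mean risk reduction, by contrast, is the same either way.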
The two distributions shown in Fig. 1 would still
be valid, of course, but may give a misleading impression. For example, many decisionmakers may interpret Fig. 1 as indicating that the benefit associated
with Design B is highly uncertain. However, the situation depicted in this figure is actually consistent
with, say, a guaranteed factor-of-three improvement
in a highly uncertain initial risk level. This would be
the case, for instance, if high risk for Design A (point A1) always occurred in conjunction with high risk for Design B (point B1), and similarly for lower risk levels (e.g., A0 and B0).
To avoid such possible misinterpretations, risk analysts may wish to present not only “before-and-after” comparisons such as Fig. 1, but also distributions for the actual magnitude of the risk reduction
associated with a particular design change. This risk
reduction can be represented either by an arithmetic
difference or by a ratio of the two risk levels. In
this case, uncertainty analysis would need to be performed for the overall risk reduction itself, as well as
for the risk levels of the two options individually.
Similar arguments can be applied to risk comparisons in which a large number of risks or regulatory interventions are evaluated and compared. As
pointed out by Cox,(30) such comparisons often involve “correlated consequences, due to uncertainties
about common elements”; in other words, the presence of epistemic uncertainty can introduce dependence in the outputs of the assessments. As a result,
Cox points out that “methods for optimizing selection of a portfolio (subset) of risk-reducing opportunities can often achieve significantly greater risk reductions for resources spent than can priority-scoring
rules.” This provides one more example of the ways
in which better treatment of dependence can lead to
improved regulatory and risk management decisions.
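As a toy illustration of Cox’s point (invented numbers: three candidate interventions, two of which mitigate the same uncertain hazard and are therefore redundant), exhaustive portfolio search beats ranking projects by their individual scores:

```python
from itertools import combinations

BUDGET = 2                      # each intervention costs 1 (arbitrary units)

def portfolio_benefit(portfolio):
    """Expected risk reduction, accounting for redundancy: interventions
    'B' and 'C' both address the same uncertain hazard (probability 0.5,
    loss 10), so together they add nothing beyond one of them; 'A' removes
    a certain loss of 4. All values are hypothetical."""
    benefit = 0.0
    if "A" in portfolio:
        benefit += 4.0
    if "B" in portfolio or "C" in portfolio:
        benefit += 0.5 * 10.0   # expected value of removing the shared hazard
    return benefit

projects = ["A", "B", "C"]
solo_scores = {p: portfolio_benefit((p,)) for p in projects}

# Priority scoring: fund the top-scoring projects until the budget runs out.
ranked = sorted(projects, key=solo_scores.get, reverse=True)[:BUDGET]

# Portfolio optimization: search all feasible subsets for the best benefit.
best = max((c for r in range(BUDGET + 1) for c in combinations(projects, r)),
           key=portfolio_benefit)

print("Priority-scoring choice:", ranked, "benefit:", portfolio_benefit(ranked))
print("Optimal portfolio:     ", list(best), "benefit:", portfolio_benefit(best))
```

Here the scoring rule funds B and C (individual scores 5 and 5) and double-counts their correlated benefit, achieving a total reduction of only 5; the optimization recognizes that once the shared hazard is addressed, the remaining budget is better spent on A, for a total reduction of 9.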
4. CONCLUSIONS
Dependence can be important both in evaluations of risk acceptability for a single technology that
is in widespread use (e.g., at multiple facilities, or
by multiple people) and in comparisons of two or
more alternative technologies; for example, a base
case and one or more possible risk reduction strategies. Although the paradigm shift away from the use
of point estimates to the use of distributions has been
dramatic,(29) the role of dependence in making decisions about risk has received relatively little attention
to date, and is not always adequately understood, either by decisionmakers or even sometimes by risk analysts.
Some suggestions have been presented here for
how to communicate the impacts of dependence to
decisionmakers. However, these ideas are only preliminary in nature. Little research has been done on
effective methods of communicating risk analysis results to decisionmakers, even though the advent of
risk-informed decision making means that decisionmakers are increasingly being asked to take highly
technical risk analysis results into account in their
decisions.(31,32) Therefore, further research would be
desirable on topics such as the effectiveness of different risk communication strategies, and methods for
enhancing decisionmakers’ understanding of probabilistic risk methods.
ACKNOWLEDGMENTS
The material in this article draws heavily from an earlier conference paper by the first author, given at the 10th International Conference on Structural Mechanics in Reactor Technology.(33) This revised version of that paper owes a great deal to discussions over the years with Dale Hattis of Clark University and Roger Cooke of Resources for the Future. This research was supported in part by the U.S. Department of Homeland Security through the National Center for Risk and Economic Analysis of Terrorism Events (CREATE) under award number 2010-ST-061-RE0001, and by the National Science Council of Taiwan under grant number NSC101-2410-H-011-034-MY2. However, any opinions, findings, and conclusions or recommendations in this document are those of the authors and do not necessarily reflect views of the U.S. Department of Homeland Security, the University of Wisconsin–Madison, the University of Southern California, the National Taiwan University of Science and Technology, or CREATE.
REFERENCES

1. U.S. Nuclear Regulatory Commission. Reactor Safety Study. Washington, DC: U.S. Nuclear Regulatory Commission Report, WASH-1400, 1975.
2. Apostolakis G, Kaplan S. Pitfalls in risk calculation. Reliability Engineering, 1981; 2:135–145.
3. Kaplan S. A method for handling dependencies and partial dependencies of fragility curves in seismic risk analysis. In Proceedings of the Transactions of the 8th International Conference on Structural Mechanics in Reactor Technology, August 19–23. Brussels, Belgium, 1985.
4. Cullen AC, Frey HC. Probabilistic Techniques in Exposure Assessment: A Handbook for Dealing with Variability and Uncertainty in Models and Inputs. New York: Plenum Press, 1999.
5. Smith AE, Ryan BP, Evans JS. The effect of neglecting correlations when propagating uncertainty and estimating the population distribution of risk. Risk Analysis, 1992; 12(4):467–474.
6. Kaplan S. On a two-stage Bayesian procedure for determining failure rates from experiential data. IEEE Transactions on Power Apparatus and Systems, 1983; PAS-102(1):195–199.
7. Paté-Cornell E. Uncertainty in risk analysis: Six levels of treatment. Reliability Engineering and System Safety, 1996; 54:95–111.
8. National Research Council. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press, 1994.
9. Morgan MG, Henrion M. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press, 1990.
10. Brown R, Ulvila JW. Communicating uncertainty for regulatory decisions. Pp. 177–187 in Covello VT, Lave LB, Moghissi A, Uppuluri VRR (eds). Uncertainty in Risk Assessment, Risk Management, and Decision Making. New York: Plenum Press, 1987.
11. Cox Jr. LA, Brown GG, Pollock SM. When is uncertainty about uncertainty worth characterizing? Interfaces, 2008; 38(6):465–468.
12. Mosleh A, Bier V. Uncertainty about probability: A reconciliation with the subjectivist viewpoint. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 1996; 26(3):303–310.
13. Felli JC, Hazen GB. Sensitivity analysis and the expected value of perfect information. Medical Decision Making, 1998; 18:95–109.
14. Clemen RT, Reilly T. Making Hard Decisions with DecisionTools. Belmont, CA: Duxbury Press, 2000.
15. Cullen AC. The sensitivity of probabilistic risk assessment results to alternative model structures: A case study of municipal waste incineration. Journal of Air & Waste Management Association, 1995; 45:538–546.
16. Liebowitz S. Anatomy of a train wreck: Causes of the mortgage meltdown. Pp. 287–322 in Holcombe RG, Powell BW (eds). Housing America: Building out of a Crisis. Oakland, CA: Independent Institute, 2009.
17. Slovic P, Fischhoff B, Lichtenstein S. Rating the risks. Environment, 1979; 21(3):14–20, 36–39.
18. Slovic P. Perception of risk. Science, 1987; 236(4799):280–285.
19. Keeney RL. Equity and public risk. Operations Research, 1980; 28(3):527–534.
20. National Research Council. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press, 2008.
21. Vose D. Risk Analysis: A Quantitative Guide. West Sussex, UK: John Wiley, 2000.
22. Ferson S, Kreinovich V, Ginzburg L, Myers DS, Sentz K. Constructing Probability Boxes and Dempster–Shafer Structures. Albuquerque, NM: Sandia National Laboratories Report, SAND2002-4015, 2003.
23. Busschaert P, Geeraerd AH, Uyttendaele M, Van Impe JF. Sensitivity analysis of a two-dimensional quantitative microbiological risk assessment: Keeping variability and uncertainty separated. Risk Analysis, 2011; 31(8):1295–1307.
24. Zach L, Bier V. Uncertainty, risk assessment, and food safety. Pp. 1–13, Vol. 3, in Voeller JG (ed). Wiley Handbook of Science and Technology for Homeland Security. Hoboken, NJ: Wiley-Interscience, 2010.
25. Baert K, De Meulenaer B, Verdonck F, Huybrechts I, De Henauw S, Vanrolleghem PA, Debevere J, Devlieghere F. Variability and uncertainty assessment of patulin exposure for preschool children in Flanders. Food and Chemical Toxicology, 2007; 45:1745–1751.
26. Cummins E, Nally P, Butler F, Duffy G, O’Brien S. Development and validation of a probabilistic second-order exposure assessment model for Escherichia coli O157:H7 contamination of beef trimmings from Irish meat plants. Meat Science, 2008; 79:139–154.
27. Linkov I, Burmistrov D. Model uncertainty and choices made by modelers: Lessons learned from the International Atomic Energy Agency model intercomparisons. Risk Analysis, 2003; 23(6):1297–1308.
28. Dubus I, Brown C, Beulke S. Sources of uncertainty in pesticide fate modelling. Science of the Total Environment, 2003; 317:53–72.
29. Thompson KM. Variability and uncertainty meet risk management and risk communication. Risk Analysis, 2002; 22(3):647–654.
30. Cox Jr. LA. What’s wrong with hazard-ranking systems? An expository note. Risk Analysis, 2009; 29(7):940–948.
31. U.S. Nuclear Regulatory Commission. Use of Probabilistic Risk Assessment in Plant-Specific, Risk-Informed Decisionmaking: General Guidance (Standard Review Plan Chapter 19). Washington, DC: U.S. Nuclear Regulatory Commission, 1998.
32. International Nuclear Safety Group. A Framework for an Integrated Risk Informed Decision Making Process. Vienna: International Atomic Energy Agency Report, INSAG-25, 2011.
33. Bier VM. On the treatment of dependence in making decisions about risk. In Proceedings of the Transactions of the 10th International Conference on Structural Mechanics in Reactor Technology, August 14–18. Anaheim, CA, 1989.