THE CONCEPT OF UNCERTAINTY IN GEOTECHNICAL RELIABILITY

Gregory B. Baecher,¹ M.ASCE, and John T. Christian,² F.ASCE

¹ Professor and Chairman, Department of Civil and Environmental Engineering, University of Maryland, College Park, MD 20742
² Consulting engineer, 23 Fredana Road, Waban, Massachusetts 02168

KEY WORDS: geotechnical reliability, risk analysis, uncertainty modeling

Abstract

This paper considers the meaning of uncertainty in relation to risk analysis in civil engineering and discusses an increasingly common vocabulary for describing uncertainties in such analyses. The discussion touches on the concepts of necessity and chance, and proceeds to draw a distinction between uncertainties modeled as resulting from natural variability and those resulting from lack of knowledge: “uncertainties of the world” vs. “uncertainties of the mind.” The paper finishes by suggesting implications of the division between these two types of uncertainty for the results and use of risk analysis in water resources engineering.

Introduction

Over the past few years, the U.S. Army Corps of Engineers has significantly revised its methodology for flood damage studies to embrace modern risk analysis techniques. Among the advances reflected in the new Corps methodology is the explicit incorporation of parameter and model uncertainty within an analysis that had heretofore focused solely on stochastic or “random” variation in flood frequency, water height, and damages. A National Research Council review of the new procedures (National Research Council 2000) concluded,

    The new [risk analysis] techniques are a significant step forward and the Corps should be greatly commended for embracing contemporary, but complicated, techniques and for departing from a traditional approach that has been overtaken by modern scientific advances. […] There should be no turning back from this accomplishment.
The recognition of parameter and model uncertainty as distinct from randomness is important in modern risk analysis. To simplify the task of risk assessment, one makes assumptions about how to grapple with uncertainties. By far the most important of these assumptions is the separation of uncertainty into (1) natural variations over space and time and (2) lack of knowledge in the mind of the analyst. This separation is more an analytical convenience than a fact of the world.

The distinction between these two types of uncertainty has a profound impact on the results of a risk analysis, and on the meaning that one ascribes to those results. Yet, the questions raised by this fundamental distinction are by no means simple to answer. Most uncertainties are a mixture of things, so how does one practically differentiate natural variation from limited knowledge? Since the two types of uncertainty reflect conceptually different things, how does one quantify each? If probability theory is used as a measure of uncertainty, are different types of probability needed for different types of uncertainties? Can and should the two types of uncertainty be combined? If they can and should be combined, how does one do so? These issues are not limited to the analysis of flood damage; they are just as important to dam safety, seismic hazard, structural reliability, wind threat, and other risks of concern to the built environment.

Is the world random?

To begin to answer these questions, one must first ask, what does it mean for something to be uncertain, and what does it mean for something to happen by chance? Questions about the meaning of uncertainty, and the conflict between necessity and chance, have a long history.

Uncertainty

The word uncertainty is used by different people to mean different things.
Most people take uncertainty as a primitive term, that is, a term whose meaning is accepted but undefined. Indeed, it seems possible to write entire books on the subject of uncertainty without ever defining the word. In risk analyses for water resource projects, the term has sometimes been defined as “lack or absence of certainty” (National Research Council 1995; U.S. Army Corps of Engineers 1992a; U.S. Army Corps of Engineers 1992b), which is not at all helpful. Maass et al. (1962) say uncertainty means that the “consequences of a decision cannot be foretold with confidence,” which is slightly more definitive.

From a practical point of view, one might distinguish three facets to the notion of uncertainty. Uncertainty with respect to the world means that an outcome is unknown or not established and therefore in question. Uncertainty with respect to a belief means that a conclusion is not proven or is supported by questionable information. Uncertainty with respect to a course of action means that a plan is not determined or is undecided. We return to these facets in Figure 4.

The definition of uncertainty in the Water Resources Council’s Principles and Guidelines for project evaluation (U.S. Water Resources Council 1983) is that “uncertainty describes situations wherein lack of certainty is not describable by [numerical] probabilities.” This aligns with Knight (1921), who introduced a well-known distinction into the business literature between the notions of “risk” and “uncertainty.” In his view, risk refers to situations wherein one can assign probabilities to chance, but uncertainty refers to situations wherein one cannot. Keynes (1937) expresses the distinction as follows,

    By uncertain […] I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty […].
    The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence […]. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

This narrow definition is not widely used in science and engineering today, although it is still common in the popular literature on investing (Bernstein 1996). A derivative of this view has reemerged in the mechanical engineering design literature of decision-based design (Smith 2000), wherein risk is defined more or less according to Knight, but uncertainty is divided into uncertainty and ambiguity. Smith defines risk as referring to situations where chance can be described by known probability distributions with specified parameters; uncertainty to situations where chance can be described by probability distributions but either the forms or parameters of the distributions are unknown; and ambiguity to situations where “the functional form is completely unknown, and often […] the relevant input and output variables are unknown.” Smith’s prescription for risk is probabilistic modeling, for uncertainty more data, and for ambiguity more research.

Many economists and decision theorists reject Knight’s distinction. Current thinking is that, while a decision-maker may choose not to assign probabilities to vague uncertainties, he or she can do so, and actually does so implicitly in making practical decisions. Said another way, the issue is a problem of knowing rather than of existence. Keynes goes on to say,

    [under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know.
    Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.

That is, probabilities are simply “beliefs;” there are no “true” probabilities to be known or not known. In any practical decision the quantification of belief–and thus the quantification of probability–always results, if only implicitly.

Despite the decision-theoretic view of uncertainty as inevitably quantitative, there are issues for which we almost completely lack sureness or confidence, and about which it is nearly impossible to assess credible numerical probabilities. An NRC panel (1996) includes among such issues “the economic impact of global climate change many decades in the future, which is […] extremely challenging to quantify.” Similar challenges attend geological and human-use issues in the evaluation of nuclear waste repositories. Such considerations are recognized but cannot be measured, quantified, or expressed statistically. Nonetheless, in the current idiom of risk analysis, these, too, would be uncertainties.

Necessity vs. chance

Given the view of uncertainty above, what role is left for inherent randomness? Is there indeed randomness in the world that needs to be accommodated in risk analysis, or is the world deterministic and uncertainty simply a reflection of our ignorance? This argument dates to antiquity.

In the modern era, following Newton’s work of the 17th century, philosophers and scientists thought of the natural world as determined by physical laws. These physical laws and a set of initial conditions fully determined the future state of the world.
Laplace in 1814 begins his Philosophical Essay on Probabilities with, “[A]ll events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun.” This doctrine of necessity had profound implications. In a famous quotation from the opening page of the Essay, Laplace goes on to say,

    Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it–an intelligence sufficiently vast to submit these data to analysis–it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.

So, by the beginning of the 19th century the scientific community saw the world as completely determined by natural laws, in both the past and the future. This had implications that gripped intellectual debate: if the world was determined by natural laws, what room was left for human volition and the random unpredictability of nature? James Clerk Maxwell, a founder of statistical mechanics and a devout man, openly agonized over the implications of scientific determinism for moral free will. Scientists of the standing of Boltzmann and Boussinesq were caught up in the debate (Hacking 1990). Laplace became one of the greatest of the early developers of probability theory by reasoning that, although the world may be determined by an all-knowing intelligence and follow a doctrine of necessity, human intelligence is limited and therefore perceives randomness due to ignorance. Probability theory was needed, not to describe randomness, but to describe man’s limited knowledge about the functioning of a deterministic world.
At the turn of the 20th century, Peirce (1998) refuted Laplace’s position by exaggerating it: “[G]iven the state of the universe in the original nebula, and given the laws of mechanics, a sufficiently powerful mind could deduce from these data the precise form of every curlicue of every letter I am now writing.” Rejecting the doctrine of necessity, he concludes, “[T]he apparently universal laws that are the glory of the natural sciences are a by-product of the workings of chance.” Of course, the view that the workings of the physical world are random at the most basic level is a central belief of modern physics.

In recent years a new consideration has arisen on the nature of chance: chaos. The origins of chaos theory lie in the observation that for complicated non-linear systems small changes in initial conditions can lead to large changes in final conditions. Since the initial changes are too small to be feasibly modeled, final conditions become unpredictable and, for practical purposes, due only to chance (Gleick 1988).

Thus, in contemporary thought, two views of necessity and chance coexist. One follows Laplace and the doctrine of necessity. The other follows Peirce and the doctrine that at some small enough level of detail the world can only be explained by statistical laws. Certain portions of the high-energy physics community subscribe to the second view, but as a practical matter, most engineering risk analysts–certainly those dealing at the scale of civil infrastructure–likely subscribe to the Laplacian view: the world is deterministic, things occur necessarily according to natural law, and the only source of uncertainty is man’s limited knowledge about the state of the world. Randomness is only a modeling assumption.

Uncertainty in reliability assessment

We presume that the world is deterministic on a macro scale, and thus knowable.
Our information about the world, on the other hand, is limited. All uncertainty in risk analysis is due to limited knowledge. Nevertheless, most risk analyses have focused on random uncertainties. Floods and earthquakes have been treated as stochastic processes. How does one reconcile the philosophical point of view that the world is deterministic with the risk analysis presumption that the world is random?

The answer is that risk analysts find it mathematically effective to model the world as if some of its uncertainties were not due to limited knowledge but were actually random. That is, some uncertainties are modeled as random processes because it is convenient to do so. In the practical application of risk analysis there is a presumed combination of some uncertainties treated as random variables and some treated as due to limited knowledge. It is important to remind ourselves, however, that the distinction is a hypothetical construction of the modeling effort. The distinction is one that we impose on a problem; it is not an inherent distinction in the physical system.

Thus, for the purposes of risk analysis, uncertainty is usually attributed to two things: (1) a presumption of inherent randomness in certain natural events, and (2) incomplete knowledge about relationships, models, parameters, and unique events. Until recently, the first of these was given precedence in engineering risk analysis. The significant change now underway is that both types of uncertainty are being explicitly considered.

Natural variation

We treat uncertainties due to limited knowledge as if they were due to chance because we have too little understanding of their mechanisms or too few data to model them in a way that provides more predictive power. Randomness at the macro scale in a risk analysis is an assumption; it is not an inherent quality of the world.
In principle, one ought to be able to predict whether a tossed coin lands heads-up or heads-down, but it is more convenient to assume that coin tossing is a random process with a consistent frequency of “heads.” This random model may predict the outcomes of the tosses of a coin at least as well as we could by modeling the structure and aerodynamics of coin flipping.

Rainfall and runoff are usually modeled as random processes. Does this mean that rainfall and runoff are unpredictable attributes of nature? Not necessarily. Given advances in atmospheric science and hydrology, it is becoming common for weather models to be used in predicting rainfall, and thus runoff and flood heights. Such models have also been used to predict probable maximum floods for dam safety studies (Salmon 1998). When flood discharges are predicted by mechanistic modeling, they cease to be assumed to be random processes. Their uncertainties become those of model and parameter errors.

Human factors in risk analysis present some particularly interesting examples of how deterministic processes are treated as if they were random, simply for convenience. Consider a Wanderer who comes to a parting of the ways along the road to Rome and does not know which path to take. Not all roads lead to Rome (at least not in this example), so what is the probability that our Wanderer gets to Rome instead of, say, Genoa? This might be modeled as a random event–the Wanderer chooses one road, and it may or may not be the one that leads to Rome–although clearly there is nothing random about which road does lead to Rome. The uncertainty is not of the world but in the mind of the Wanderer, yet risk analyses might treat it as a random variable (Rescher 1995).
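The Wanderer’s situation can be mimicked in a few lines of code: the road layout is fixed and deterministic, but because the Wanderer does not know it, the analysis treats the choice as a uniform random variable. The three-road layout below is a hypothetical illustration, not taken from the example in the text:

```python
import random

# Deterministic "world": the road layout is fixed, but the Wanderer does
# not know it.  (Three hypothetical roads; only one leads to Rome.)
ROADS = {"left": "Genoa", "middle": "Rome", "right": "Pisa"}

def wanderer_reaches_rome(rng: random.Random) -> bool:
    """Model the Wanderer's epistemic ignorance as a uniform random choice."""
    return ROADS[rng.choice(list(ROADS))] == "Rome"

rng = random.Random(42)
trials = 100_000
successes = sum(wanderer_reaches_rome(rng) for _ in range(trials))
print(successes / trials)  # close to 1/3
```

The simulated frequency settles near one-third, even though nothing in the “world” of the model is random: the chance lives entirely in the Wanderer’s head.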
The evolution of the idea of randomness since early times (David 1962) has concerned natural processes about which we have too little understanding to allow predictions about individual realizations: the roll of dice, the patterns of the weather, when and where earthquakes occur. Such unpredictable occurrences have been called aleatory by Hacking (1975) and others (Cooke 1991; Daston 1988; Gigerenzer et al. 1989), after the Latin aleator, meaning “gambler” or “die caster.” This term is now widely used in risk analysis, especially in applications dealing with seismic hazards, nuclear safety, and severe storms.

In applying probability measures to such uncertainties, the meaning of the term probability is usually taken to be the frequency of occurrence in a long or infinite series of similar trials. In this sense, probability is interpreted for operational matters to be a property of the system (i.e., a property of nature) independent of anyone’s knowledge of it or evidence for it. We may or may not know what the value of this probability is, but the probability in question is a property for us to learn. It is innate; there is a “true” value of this probability. Two observers, given the same evidence, should converge to the same numerical value.

Limited knowledge

The other aspect of uncertainty concerns what we know: the truth of a proposition, the guilt of an accused, whether or not war will break out. Such unknown things have been called epistemic, after the Greek for “knowledge” (Hacking 1975). This term, too, is now widely used in risk analysis, to distinguish imperfect knowledge from randomness.

The term probability, when applied to imperfect knowledge, is usually taken to mean the degree of belief in the occurrence of an event or the truth of a proposition. We may or may not know what the value of the probability is, but the probability in question can be learned by self-interrogation.
There is, by definition, no “true” value of this probability. Probability is a mental state and therefore unique to the individual. Two observers, given the same evidence, may arrive at different probabilities and both be right.

How do epistemic uncertainties manifest themselves in risk analyses? It is convenient to consider two types of risk analyses that are common in water resources engineering. The first, suggested by Figure 1, uses a direct calculation of the derived distribution of some uncertain variable based on uncertainties in a set of input variables, and boundary and initial conditions. This is the risk analysis model used by the USACE in assessing flood hazard damage reduction (1996). In a direct-calculation approach, epistemic uncertainties enter the analysis as model and parameter uncertainties.

Model uncertainties reflect the inability of a model or design technique to represent a system’s true physical behavior precisely, the analyst’s inability to identify the best model, or a model that may be changing in time in poorly known ways (e.g., a flood-frequency curve changing because of a changing watershed). The models used to approximate naturally varying phenomena need to be fit to natural processes by observing how those processes work, by measuring important features, and by statistically estimating the parameters of the models.

Parameter uncertainties result from an inability to assess exactly the parametric values from test or calibration data, owing to limited numbers of observations and the statistical imprecision attendant thereto. These include data uncertainties deriving from (i) measurement errors, (ii) inconsistency of data, (iii) data handling and transcription errors, and (iv) poor representativeness of sampling schemes due to time and space limitations.
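A direct-calculation scheme of this kind is often implemented as a nested Monte Carlo loop: an inner loop samples the aleatory variability, and an outer loop samples the epistemic parameter uncertainty. The sketch below uses invented numbers (a levee crest at 13 m, an estimated mean annual peak stage of 10 m) purely for illustration; it is not the USACE model:

```python
import random
import statistics

rng = random.Random(0)

# Hypothetical numbers for illustration only (not the USACE model):
mu_hat, se_mu = 10.0, 0.5   # estimated mean annual peak stage (m), std. error
sigma_aleatory = 1.5        # year-to-year natural variability of stage (m)
levee_crest = 13.0          # stage at which overtopping damage begins (m)

def exceedance_freq(mu: float, n_years: int = 5_000) -> float:
    """Inner (aleatory) loop: simulated overtopping frequency for a fixed mu."""
    hits = sum(rng.gauss(mu, sigma_aleatory) > levee_crest for _ in range(n_years))
    return hits / n_years

# Outer (epistemic) loop: each draw of the uncertain parameter mu
# yields a different overtopping frequency.
probs = [exceedance_freq(rng.gauss(mu_hat, se_mu)) for _ in range(200)]

print(f"mean annual exceedance probability: {statistics.mean(probs):.4f}")
print(f"epistemic spread (std. dev.):       {statistics.stdev(probs):.4f}")
```

The inner loop yields a frequency (the aleatory component); the spread of that frequency across the outer loop displays the epistemic component, exactly the error band around a flood-frequency curve.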
The second type of risk analysis model, suggested by Figure 2, uses an event-driven logic in which initiating events are presumed and all possible chains of subsequent events (either logical or chronological) are laid out in a tree-like pattern. This is the sort of model used in most dam safety evaluations (Bowles et al. 1999). In this event-driven approach, epistemic uncertainties enter the analysis as uncertainties about unique events (e.g., an earthquake of a certain size and location).

Probabilities associated with unique phenomena reflect uncertainty about the truth of a scientific theory, a popularly held doctrine, or perhaps the occurrence of a specific historical event when inadequate or conflicting accounts are involved. These probabilities are of a special nature since, by definition, they cannot be defined with respect to repeatable experiments. The probabilities involved are subjective and typically vary from one person to another. Using probability as a measure of uncertainty about unique phenomena enlarges its domain of application to propositions that do not meet the requirement of repeatability. For example, the arrangement of playing cards in a single deal is unique and can be determined exactly by simply looking through the deck. However, at the time of the deal, bridge or poker players do not know the distribution and are forced to treat it probabilistically, modifying their evaluations as more information becomes available during the play of the hand.

Trading off aleatory and epistemic uncertainty

Dividing uncertainty between aleatory and epistemic components is an active choice by the modeler and not an innate property of nature. A simple example demonstrates this balancing.

Consider the variation of soil properties along the length of a levee. The left-hand side of Figure 3 shows hypothetical soil strength values within the foundation of a levee as a function of location (station). The strengths increase gradually toward the right, but erratically. In making a limiting equilibrium calculation of levee stability and the risk of levee failure, one needs to assess an average strength and then add to it some measure of variation about the average.

In the simplest case, one could use a fixed spatial average of soil strength and add to it a relatively large variance about the average. Alternately, one could fit a linear spatial trend to soil strength and add to it a reduced variance about the trend. Indeed, as the order of the trend approaches the number of data points, the added variance about the trend tends to zero. On the other hand, the statistical confidence with which one can estimate the trend parameters becomes ever smaller–that is, the statistical uncertainty in the trend parameters becomes ever larger–as the order of the trend curve increases. As the order of the curve approaches the number of data points, the number of remaining degrees of freedom approaches zero, and the parameter uncertainty increases without limit. This is shown schematically on the right-hand side of Figure 3. Conceptually, there may be some optimum point in the middle.

A taxonomy of uncertainty

The National Research Council (1996) describes different types of uncertainty in risk analysis using the following terminology. Aleatory uncertainty is sometimes called random variability, stochastic variability, objective uncertainty, or external uncertainty. The NRC committee (2000) called this type of uncertainty natural variability. Epistemic uncertainty is often called knowledge uncertainty, subjective uncertainty, or internal uncertainty. Epistemic uncertainties divide into three major sub-categories: model uncertainty, parameter uncertainty, and uncertainty about unique events.
Decision-model uncertainty describes our inability to understand decision objectives or, at least, how alternative projects or designs should be evaluated. This is a third type of uncertainty, somewhat different in character from the first two. It includes, for example, uncertainty in discount rates and in the appropriate length of planning horizons. In the current USACE manual the influence of decision-model uncertainty is for the most part not considered.

These concepts are by no means new. Table 1 lists a variety of word pairs that have been used to describe similar concepts of aleatory and epistemic uncertainty.

As an example of how these different types of uncertainty manifest in a risk analysis, Moser (1994) suggests the familiar case of flood frequency analysis. In the traditional approach to modeling flood hazard, a frequency curve of discharge exceedance displays a presumed natural variability, or aleatory uncertainty, of flood flows, while the error band about the curve displays epistemic uncertainty. The frequency curve reflects the irresolvable variation of nature, which we assign to chance. The error band on the curve reflects our lack of perfect knowledge about the statistically estimated parameters of the frequency curve, which we assign to ignorance. Collecting more data would improve our estimates of the parameters and thus reduce the error bands about the frequency curve, but no amount of data would change the underlying probability distribution represented by the exceedance curve.

It is crucial that the framework for risk analysis clearly distinguish between natural variability on the one hand and knowledge uncertainty on the other. The effects of these different sources of uncertainty on risk calculations can be large.
As a simple example, spatial variations in rockfill properties within an embankment, treated as aleatory uncertainty, will average out in calculations from one section of embankment to the next. High values at some places balance against low values at others. In contrast, parameter uncertainty in the average strength, for example, treated as epistemic, introduces a systematic effect into a calculation. If the mean strength is overestimated at one location, it is overestimated at every location, and the whole embankment is subject to failure. Thus, the question is joined: what does it mean for there to be a 10% probability of failure of an embankment? Does it mean that one-tenth of this embankment will fail with near certainty? Or does it mean that of ten embankments just like this one, on average one will fail in its entirety? The answer depends on whether the uncertainty is aleatory or epistemic.

Implications for risk analysis

Does the balancing between natural variation and knowledge uncertainty make any difference to (1) how calculations are performed in risk analysis or (2) the meaning of the output? The answer to each of these is yes. A third issue for consideration is whether the two types of uncertainty should be accounted for separately or combined into a single measure of risk.

How calculations are performed

We take for granted today that uncertainties should be denominated in the language of probability. From the practical view of risk analysis, frequency and belief are not necessarily opposing meanings for probability. One could argue that frequency and belief are just applications of the same theory to different realities. For example, we apply differential calculus to groundwater regimes and to magnetic fields. Frequency and belief are meanings of probability as applied to fundamentally distinct uncertainties. Both types of uncertainty exist in the same risk analysis.
Probability is simply a 16 logical framework over-arching the notions of frequency and belief and encompassing both. 17 Some aspects of flood risk can be treated as if they were random and thus describable by rel- 18 ative frequencies (e.g., flood frequency, spatial variations of soil properties). Other types of uncer- 19 tainty have to do with unique events whose occurrence is not certain. In this case, probability has 20 the meaning of strength of opinion. Such strength of opinion may not be uniquely identifiable with 21 observed responses in the past, but it may depend on qualitative experience, reasoning from first 22 principles, and intuition–that is, on tacit knowledge. 14/27 1 From the duality in probabilistic philosophy between frequency and belief, modern statistical 2 practice, too, has evolved in two principal schools. These might be called the “traditional” or fre- 3 quentist school and the “Bayesian” or degree-of-belief school. Much of engineering risk analysis has 4 been dominated by traditional approaches, although Bayesian approaches are becoming more com- 5 mon. Indeed, the USACE flood damage model began as a traditional statistical model and has 6 evolved to incorporate Bayesian methods (U.S. Army Corps of Engineers 1998). 7 The principal difference between traditional and Bayesian approaches is that the former take 8 probability to be a property of the world which has some specific although imprecisely known value, 9 whereas the latter take probability to be a degree of belief, which may be applied to a property of the 10 world. Probability in the traditional sense is a natural frequency over time or space; it has some fixed 11 value, and it makes no sense to speak of a probability distribution over this frequency. In contrast, 12 probability in the Bayesian sense is a belief about the world; thus, it does make sense to quantify de- 13 grees of belief about the value of any parameter, even a measure of natural variation. 
If risk analysis is to incorporate uncertainties due to limited knowledge, then it needs to use a Bayesian approach. If the goal is to convolve uncertainties due to natural variation with uncertainties due to limited knowledge, then the frequency part of the model also needs to use Bayesian methods. It makes no logical sense to mix traditional and Bayesian methods, even though this is common in practice. The only factor saving the day for a risk analysis that commits this fundamental error is that in some common cases (e.g., models involving Normal processes) the numerical results of traditional and Bayesian procedures are nearly the same–even though the meanings attached to these numerical results are distinctly different.

One commonly encounters this unsound mixture of traditional and Bayesian methods in regression models, such as stage-discharge curves. Water height data are fit against flows using traditional regression analysis to develop a predictive model. The uncertainty in predicting water height from a known discharge then has two main components: variation of the historical data about the regression line, and statistical uncertainty in the parameters of the line. Using traditional regression analysis, the confidence limits for the parameters of the line have to do with variations in repeated sampling from the presumed model, not with an inferred probability distribution over the parameters based on the data. Thus, it is clearly wrong to propagate an uncertain flow value through the model, and to integrate over the “uncertainty” in the model parameters calculated by this traditional procedure. Yet, this is regularly, and unfortunately, done.

The meaning of risk analysis output

An implication of making the trade-off between aleatory and epistemic uncertainty is that what is meant by a predictive probability may change.
Consider the "probability of excessive settlement" of a long levee. As stated above, this could mean the probability of overall settlement of the entire levee or the portion of the levee that will experience excessive settlement. Confusion over this issue is frequent in the literature, where the temporal or spatial fraction of adverse performance of a large structure is often used to verify a probabilistic prediction.

The meaning of the output depends on how the modeling assumptions are made; specifically, on how the total uncertainty is divided between aleatory and epistemic. To the extent that all the uncertainty is assumed aleatory, the probability refers to a temporal or spatial fraction. To the extent that all the uncertainty is assumed epistemic, the probability refers to a chance of complete failure. Almost always, the uncertainty is apportioned between aleatory and epistemic, so the probability itself is a mixture.

A second implication of the trade-off between aleatory and epistemic uncertainty is the variability of performance as a function of scale. To the extent that uncertainty is presumed to be aleatory, uncertainty averages over space and time. The variability of soil properties among large specimens will be less than among small specimens. The variability among in situ tests that mobilize large soil volumes will be less than the variability among in situ tests that mobilize small soil volumes. The converse is true of behaviors that rest on extreme properties. Seepage conditions and piping that depend on the most transmissive element of a formation become both more variable with scale and also more extreme. Rock slope failures that depend on the least favorably inclined joint become more variable and also more probable as the volume of rock mass considered becomes larger.
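The two scale effects just described can be seen in a small Monte Carlo sketch. This is our illustration with hypothetical numbers (a soil property with mean 100 and standard deviation 15, and specimens mobilizing 2 versus 50 independent elements): averaging-controlled behavior narrows as specimens grow, while extreme-controlled behavior, governed by the least favorable element, worsens.

```python
# Illustrative sketch (hypothetical values): scale effects on aleatory
# variability. Averaging-controlled behavior narrows with specimen size;
# extreme-controlled behavior (least favorable element) worsens with it.
import random
import statistics

random.seed(1)

def simulate(n_elements, trials=10000):
    """Std dev of the specimen average and mean of the specimen minimum,
    for specimens mobilizing n_elements independent elements."""
    avgs, mins = [], []
    for _ in range(trials):
        x = [random.gauss(100.0, 15.0) for _ in range(n_elements)]
        avgs.append(statistics.fmean(x))  # averaging-controlled behavior
        mins.append(min(x))               # extreme-controlled behavior
    return statistics.stdev(avgs), statistics.fmean(mins)

sd_small, min_small = simulate(2)    # small specimen
sd_large, min_large = simulate(50)   # large specimen

print(f"sd of average:  small {sd_small:.1f}, large {sd_large:.1f}")
print(f"mean minimum:   small {min_small:.1f}, large {min_large:.1f}")
```

The standard deviation of the average falls roughly as one over the square root of the number of elements mobilized, while the expected minimum drifts steadily downward as more elements are sampled.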
A third implication of the trade-off between aleatory and epistemic uncertainty is expressed by the expected value of sample information (EVSI). EVSI is a decision theory concept referring to how much one should be willing to pay for the results of gathering (sampling) more information (Pratt et al. 1995). To the extent that uncertainties are moved to the aleatory column, a floor is created beneath which uncertainty cannot be reduced. Increasing amounts of information can be gathered, but this only makes the estimate of population parameters more precise. It does nothing to change their values. In contrast, gathering more information about epistemic uncertainties reduces those uncertainties.

Should aleatory and epistemic uncertainties be combined?

What does it mean to have a degree of belief about a frequency, or, said in a different way, what does it mean to have a probability of a probability? When a model is chosen, certain uncertainties are taken to be aleatory. The parameters for the models of these aleatory uncertainties have to do with frequencies. But one does not know the values of these frequency parameters with precision, for they themselves are estimated from data. Such estimates can be made using traditional statistical techniques, and they result in unbiased estimators and confidence limits. These are not probabilistic descriptions of the uncertainties in the parameters but conditional statements of how such estimators function in repeated sampling from the same population. On the other hand, estimates can be made with Bayesian techniques, in which case the resulting probability distributions apply directly over the space of parameters. The latter approach is obviously necessary if parameter (and model) uncertainties are to be mathematically integrated out to yield an aggregate risk.
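The floor created by aleatory uncertainty can be expressed in one line. In this sketch (ours, with hypothetical numbers, assuming a Normal process with known aleatory standard deviation) the predictive standard deviation combines the fixed aleatory spread with an epistemic term sigma/sqrt(n) for the uncertain mean; sampling shrinks only the epistemic term.

```python
# Illustrative sketch (hypothetical numbers): sampling reduces epistemic
# uncertainty about the mean, but the aleatory floor SIGMA remains.
from math import sqrt

SIGMA = 20.0  # assumed aleatory (natural-variability) standard deviation

def predictive_sd(n):
    """Predictive sd after n observations: aleatory spread combined with
    epistemic uncertainty in the estimated mean."""
    epistemic = SIGMA / sqrt(n)            # shrinks as data accumulate
    return sqrt(SIGMA**2 + epistemic**2)   # never falls below SIGMA

for n in (4, 25, 100, 10000):
    print(f"n = {n:6d}: predictive sd = {predictive_sd(n):.3f}")
```

No sample size drives the predictive standard deviation below 20; the EVSI of further sampling therefore approaches zero once the epistemic term is small compared with the aleatory floor.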
The issue of "probabilities of probabilities," that is, of specifying probability distributions over model parameters which are themselves probabilities (frequencies), has been the subject of extensive debate (De Finetti 1990; Mosleh and Bier 1996; Von Mises et al. 1939). Mosleh and Bier, extending Kaplan and Garrick's (1981) argument for a "probability of frequencies" framework, propose a hierarchy of conditional probabilities based on underlying conditions (i.e., hypotheses or propositions). This approach seems to fit the emerging practice of risk analysis for water resources projects, by clearly and separately accounting for aleatory and epistemic uncertainties. There is a natural extension of this approach to include probability assessments directly over alternate models (Hoeting et al. in press), but a number of subtle issues arise in the model uncertainty arena which complicate this practice.

An approach now gathering momentum in closely related fields of risk analysis (seismic safety, nuclear power plant reliability, and high-level waste disposal) is to use conditional statements of risk, in which a family of probability distributions is generated over aleatory uncertainties. Each member of the family of distributions is conditioned on a non-exceedance probability for the epistemic components of uncertainty (Pate-Cornell 1999; Pate-Cornell 1996). In this way, the assessment of risk avoids the difficulties of convolving different types of uncertainty. Figure 5 suggests a hypothetical outcome of such conditional analyses. Each complementary cumulative distribution is associated with a particular value of some epistemic uncertainty. While this approach provides an interesting way of portraying complex risk analysis results, its usefulness is limited to cases in which a small number of epistemic uncertainties dominate the analysis.
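The conditional presentation suggested by Figure 5 can be sketched in a few lines. The model here is ours and entirely hypothetical (an outcome that is Normal with aleatory spread 10, and an epistemically uncertain mean whose 0.05, 0.50, and 0.95 fractiles are taken as 80, 100, and 120): for each epistemic fractile, the complementary cumulative distribution of the outcome is computed over aleatory uncertainty alone.

```python
# Illustrative sketch (hypothetical model): a family of CCDFs over
# aleatory uncertainty, each conditioned on an epistemic fractile,
# in the style of Figure 5.
from math import erf, sqrt

SIGMA_ALEATORY = 10.0                  # assumed natural-variability spread
FRACTILES = (0.05, 0.50, 0.95)         # epistemic non-exceedance levels
MU_AT_FRACTILE = (80.0, 100.0, 120.0)  # hypothetical belief about the mean

def ccdf(x, mu, sigma):
    """P(outcome > x) for a Normal(mu, sigma) aleatory model."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

threshold = 110.0  # outcome value of interest
for q, mu in zip(FRACTILES, MU_AT_FRACTILE):
    p = ccdf(threshold, mu, SIGMA_ALEATORY)
    print(f"epistemic fractile {q:.2f}: P(outcome > {threshold:g}) = {p:.3f}")
```

The decision maker can then read an exceedance probability at a chosen level of epistemic confidence, rather than a single number in which the two kinds of uncertainty have been convolved away.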
Conclusion

Risk analysis for water resource projects, and civil and environmental engineering more generally, involves uncertainties about processes in the natural world and about the performance of man-made structures in response to natural processes. Presuming a Laplacian view (the world obeys natural laws and acts in deterministic ways) leads inevitably to a conclusion that all the uncertainties dealt with in risk analysis are caused by limited knowledge: there is no such thing as randomness. Nonetheless, one assumes hypothetically that some of these uncertainties are actually due to natural variation of an inherently random character over time or space. This presumption, without which the enterprise of risk analysis would be made much more difficult, has strong implications of which practitioners of risk analysis need to be aware.

Acknowledgements

The authors wish to thank their many colleagues from the NRC Committee on Flood Damage Reduction, the US Army Corps of Engineers, the Canadian Electricity Association working group on dam safety, the Bureau of Reclamation, and other agencies, whose ideas have given character and essence to this discussion: David Bowles, Darryl Davis, Karl Dise, Efi Foufoula-Georgiou, David Goldman, Desmond Hartford, Ralph Keeney, Lester Lave, Harry Lins, Daniel Loucks, David Maidment, Martin McCann, David Moser, Jery Stedinger, Ben Chie Yen, and Andy Zielinski.

References

Bernstein, P. L. (1996). Against the gods: the remarkable story of risk, John Wiley & Sons, New York.

Bowles, D. S., Anderson, L. R., Evelyn, J. B., Glover, T. F., and Van Dorpe, D. M. "Alamo dam demonstration risk assessment." Proceedings of the Australian National Committee on Large Dams (ANCOLD) Annual Meeting, Jindabyne, New South Wales.

Carnap, R. (1936). "Testability and meaning." Philosophy of Science, 3, 420.

Cooke, R. M. (1991).
Experts in uncertainty: opinion and subjective probability in science, Oxford University Press, New York.

Daston, L. (1988). Classical probability in the Enlightenment, Princeton University Press, Princeton, NJ.

David, F. N. (1962). Games, gods and gambling: the origins and history of probability and statistical ideas from the earliest times to the Newtonian era, Hafner Publishing Co., New York.

De Finetti, B. (1990). Theory of probability: a critical introductory treatment, Interscience Publishers, Chichester and New York.

Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L., Beatty, J., and Kruger, L. (1989). The empire of chance: how probability changed science and everyday life, Cambridge University Press, Cambridge, England, and New York.

Gleick, J. (1988). Chaos: making a new science, Penguin, New York.

Hacking, I. (1975). The emergence of probability, Cambridge University Press, Cambridge.

Hacking, I. (1990). The taming of chance, Cambridge University Press, Cambridge, England, and New York.

Hoeting, J. A., Madigan, D., Raftery, A., and Volinsky, C. T. (in press). "Bayesian model averaging." Statistical Science.

Kaplan, S., and Garrick, B. J. (1981). "On the quantitative definition of risk." Risk Analysis, 1(1), 11-27.

Keynes, J. M. (1937). "The general theory." Quarterly Journal of Economics, LI(February), 209-233.

Knight, F. H. (1921). Risk, uncertainty and profit, Houghton Mifflin Company, Boston and New York.

Maass, A. (1962). Design of water-resource systems: new techniques for relating economic objectives, engineering analysis, and governmental planning, Harvard University Press, Cambridge.

Maidment, D. R. (1993). Handbook of hydrology, McGraw-Hill, New York.

Moser, D. A. (1994). "Risk analysis framework for evaluation of hydrologic/hydraulics and economics in flood damage reduction studies: Course notes." U.S. Army Corps of Engineers, Hydrologic Engineering Center, Davis, CA.
Mosleh, A., and Bier, V. M. (1996). "Uncertainty about probability: A reconciliation with the subjectivist viewpoint." IEEE Transactions on Systems, Man, and Cybernetics--Part A: Systems and Humans, 26(3), 1083-1090.

National Research Council. (1995). Flood risk management and the American River basin: An evaluation, National Academies Press, Washington, DC.

National Research Council. (1996). Understanding risk: Informing decisions in a democratic society, National Academies Press, Washington, DC.

National Research Council. (2000). Risk analysis and uncertainty in flood damage reduction studies, National Academies Press, Washington, DC.

Pate-Cornell, M. E. (1999). "Conditional uncertainty analysis and implications for decision making: The case of WIPP." Risk Analysis, 19(5), 995-1002.

Pate-Cornell, M. E. (1996). "Uncertainties in risk analysis: six levels of treatment." Reliability Engineering and System Safety, 54, 95-111.

Peirce, C. S., Cohen, M. R., and Dewey, J. (1998). Chance, love, and logic: philosophical essays, University of Nebraska Press, Lincoln.

Pratt, J. W., Raiffa, H., and Schlaifer, R. (1995). Introduction to statistical decision theory, MIT Press, Cambridge, MA.

Rescher, N. (1995). Luck: the brilliant randomness of everyday life, Farrar, Straus and Giroux, New York.

Salmon, G. (1998). "Dam safety risk analysis at British Columbia Hydro." Comments before the USACE Waterways Experiment Station Workshop on Dam Safety.

Smith, R. P. (2000). "Risk, uncertainty and ambiguity in engineering design decision making." The Open Workshop on Decision-Based Design: Origin, Status, Promise, and Future, NSF, State University of New York at Buffalo.

Stedinger, J. R., Heath, D. C., and Thompson, K. (1996). "Risk assessment for dam safety evaluation: Hydrologic risk." 96-R-13, U.S. Army Corps of Engineers, Institute for Water Resources, Alexandria, VA.

U.S.
Army Corps of Engineers. (1992a). "Guidelines for risk and uncertainty analysis in water resources planning, Volume 1: Principles with technical appendices." 92-R-1, Institute for Water Resources, Fort Belvoir, VA.

U.S. Army Corps of Engineers. (1992b). "Guidelines for risk and uncertainty analysis in water resources planning, Volume 2: Example cases." 92-R-2, Institute for Water Resources, Fort Belvoir, VA.

U.S. Army Corps of Engineers. (1996). "Risk-based analysis for flood damage reduction studies." EM 1110-2-1619, US Army Corps of Engineers, Washington, DC.

U.S. Army Corps of Engineers. (1998). "HEC-FDA flood damage reduction analysis: Users manual, Version 1." CPD-72, US Army Corps of Engineers, Hydrologic Engineering Center, Davis, CA.

U.S. Water Resources Council. (1983). "Economic and environmental principles and guidelines for water and related land resources implementation studies." Executive Office of the President, Washington, DC.

Von Mises, R., Neyman, J., Sholl, D., and Rabinowitch, E. I. (1939). Probability, statistics and truth, The Macmillan Company, New York.

Figure 1. Approaches to modeling in risk analysis: Direct-calculation (derived-distribution) risk model.

Figure 2. Approaches to modeling in risk analysis: Event-driven (event-tree) risk model.

Figure 3. Alternate models for the spatial variation of soil properties.

Figure 4. Categories of uncertainty entering risk analysis.

Figure 5.
Hypothetical outcomes of a risk analysis, expressed as complementary cumulative probability distributions (CCDF) over aleatory uncertainty, with cumulative epistemic uncertainty expressed parametrically, ranging from 0.05 through 0.95.

Aleatory uncertainty             Epistemic uncertainty      Citation
Natural variability              Knowledge uncertainty      National Research Council (2000)
Random or stochastic variation   Functional uncertainty     Stedinger (1996)
Objective uncertainty            Subjective uncertainty     Maidment (1993)
External uncertainty             Internal uncertainty       Maidment (1993)
Statistical probability          Inductive probability      Carnap (1936)
Chance [Fr]                      Probabilité [Fr]           Poisson, Cournot (Hacking 1975)

Table 1. Alternate terms describing the dual meaning of uncertainty.