
Dealing with Uncertainties? Combining System
Dynamics with Multiple Criteria Decision
Analysis or with Exploratory Modelling
Erik Pruyt
Delft University of Technology
Faculty of Technology, Policy and Management
Policy Analysis Section
E-mail address: [email protected]
August 22, 2007
Abstract
System Dynamics is often used to explore issues characterised by uncertainties. This paper first discusses the different types of uncertainties that system dynamicists need to deal with and the tools they already use to do so. From this discussion it is concluded that stand-alone System Dynamics is often not sufficient for dealing with uncertainties. Two avenues for improving the capacity of System Dynamics to deal with uncertainties are then discussed, in both cases by matching System Dynamics with other method(ologie)s: first with Multi-Attribute Multiple Criteria Decision Analysis, and then with Exploratory Modelling.
Keywords: Uncertainty, Robustness, Multi-Attribute Multiple Criteria Decision Analysis,
MCDA, Exploratory Modelling
1 Introduction
It is often stated that our world is becoming ever more complex and uncertain1 . It is therefore no
wonder that System Dynamics is used to deal with ever more complex and uncertain issues. Many
System Dynamics studies acknowledge the existence of uncertainties, but seem to explore them
only to a very limited extent. An important question is therefore whether System Dynamics is an
appropriate method for dealing with these uncertainties.
In order to deal sufficiently with uncertainties in practice, it is important to realise that many
different sources and types of uncertainties exist, and that a thorough understanding of these
different sources and types of uncertainties is required to cope with them.
A very common way to deal with uncertainties in the field of System Dynamics is to explore
them to some extent (by means of sensitivity analyses), to reduce the reducible uncertainties and
accept the existence of irreducible ones, and to take these irreducible uncertainties into account
when interpreting the results (e.g. quantitative trajectories interpreted as qualitative behaviour)
and designing robust policies and systems. Many other System Dynamics studies simply try to
reduce uncertainties, although not all uncertainties are –as will be discussed in section 2– reducible.
Sometimes, however, uncertainties are completely ignored by System Dynamics and other studies.
Ignoring uncertainties is –generally speaking– one of the most
1 Uncertainty is defined here as ’the entire set of beliefs or doubts that stems from our limited knowledge of the
past and the present ([especially] uncertainty due to lack of knowledge) and our inability to predict future events,
outcomes and consequences ([especially] uncertainty due to variability)’ (van Asselt 2000, p88).
common ways to deal with uncertainties. But running away from uncertainties is not a sensible
strategy in a world that becomes ever more uncertain. A fourth way to deal with uncertainties
–not yet explored by system dynamicists but feasible as will be illustrated in this paper– is to fully
embrace, explore, evaluate and exploit (deep) uncertainties about underlying (model) structures,
parameters used, and how to value the desirability of alternative outcomes.
In section 2, we will first look at how System Dynamics in stand-alone mode could deal
with different sources and types of uncertainty (see subsection 2.1), and with related aspects such as
robustness, resilience and flexibility. From this analysis, it will be concluded that System Dynamics
makes it possible to deal with several of these aspects, but not with all of them, and that System
Dynamics often does not sufficiently stress the importance of uncertainties and of the strategies
to deal with them, for example when evaluating policies.
Section 3 illustrates the combination of System Dynamics with other methods to deal more
inclusively with specific types of uncertainties. Subsection 3.1 first discusses the combination of
System Dynamics and Multi-Attribute Multiple Criteria Decision Analysis to explore and
evaluate the potential influence of uncertainties, risks, and strategies to increase the robustness,
resilience and flexibility of models and policies. Subsection 3.2 then discusses the combination
of System Dynamics and Exploratory Modelling to increase the capacity of System Dynamics to
deal with deep uncertainty and to choose policies on the basis of their relative robustness. Section 4,
finally, consists of conclusions and suggestions for future research and software development.
2 Stand-alone System Dynamics to Deal with Uncertainty, Risk, Robustness, Resilience and Flexibility?
2.1 System Dynamics and Uncertainty
Complex issues are often characterised by many types and sources of uncertainties. Several frameworks have been developed to classify these types and sources of uncertainty. The classification
of van Asselt (2000) will be used here to discuss which uncertainties matter in the case of System Dynamics and how system dynamicists deal with uncertainties. This section starts with a
brief look at the two typologies of this classification. While discussing these typologies, we will
immediately reflect on their relevance in the case of System Dynamics modelling and decision aid.
2.1.1 System Dynamics and Inherent Sources of Uncertainty
M.B.A. van Asselt (2000, p84-87) developed a generic taxonomy of inherent sources (or origins)
of uncertainty in any empirical decision-support case.
1. She called the ontological dimension of uncertainty variability. Variability makes the world
inherently uncertain and unpredictable. It is composed of
(a) the inherent randomness of nature. It is inherent and irreducible, and is something
system dynamicists should always keep in mind. It is often not explicitly integrated
in System Dynamics models, because these models are supposed to be transparent
simplifications of reality that generate understanding about the link between structure
and behaviour instead of being predictive devices.
(b) value diversity. Diversity of values drives behaviour and can sometimes be revealed,
which makes it interesting to integrate it in different System Dynamics models or in a
single model (the latter mostly happens in traditional System Dynamics).
(c) uncertainty pertaining to human behaviour, which is considered very important by most
system dynamicists. To cope with it, they only venture into highly aggregated models of
human behaviour, not models of individual behaviour. But even then, mainstream
system dynamicists
’believe that human unpredictability is too dominant a factor in social systems
to allow anything more than qualitative behavioral forecasts, even for aggregate systems where much unpredictability can be averaged out. Therefore they
find it hard to understand the great effort econometricians go through to obtain better and better estimations or to quote their findings to six or seven
significant digits. Especially when many exogenous variables must be predetermined, the whole econometric exercise looks to a system dynamicist like a
transformation of one set of uncertain and unscientific guesses into a second
set of equally uncertain guesses, presented with deceptive, scientific-looking
precision’ (Meadows and Robinson 1985, p78).
(d) societal randomness and technological randomness. Social, economic, cultural and technological dynamics are thought to evolve from the system and are therefore mostly dealt
with endogenously or by means of scenarios. It should however be kept in mind that
they can only be dealt with to some extent by means of endogenous relations. Most systems are not closed and surprising external events or structural changes always remain
possible.
2. M.B.A. van Asselt (2000) labelled the epistemological dimension of uncertainty lack of knowledge, which she breaks down into unreliability and structural uncertainty.
(a) Unreliability comes from inexactness (’we roughly know’), lack of observations or measurements (’we could have known’), or practical immeasurability (’we know what we do
not know’). System dynamicists try to cope with unreliability by using soft variables
and structural assumptions, and by performing sensitivity analyses of inexact input
variables. There is however a tradeoff concerning the inclusion of uncertain factors or
variables in models: the more uncertainty included in the model, the closer a model
might get to real-world uncertainty, but the less useful it becomes –even ’plausible
nonsense’ (Nuthmann 1994) to some. But again –even more important than including uncertain variables– mainstream system dynamicists are concerned with ’behaviour
modes, dominance of modes and dominance transfer, not with precise numerical values’
(Coyle 1998, p356).
(b) Structural uncertainty comes from irreducible ignorance (’we cannot know’), indeterminacy (’we will never know’), reducible ignorance (’we do not know what we do not
know’) and conflicting evidence (’we do not know what we know’). Depending on
the basic assumptions –and thus paradigms (see (Pruyt 2006b))– adhered to, system
dynamicists could try to
• reconcile conflicting evidence in a single model,
• construct and maintain more than one model to preserve the conflicting evidence,
• preserve the evidence and model closest to the real world,
• preserve the evidence and model closest to the value systems or beliefs of parties
involved, or
• try to understand where the conflicting evidence comes from and learn from it.
During the System Dynamics modelling process, previously unknown uncertainties due to
a lack of information and ignorance may be revealed. If
deemed necessary, and if the contingencies allow it, in-depth exploration may lead to
the reduction of reducible ignorance and lessen the lack of information. It might even
lead to a different attitude towards decision-making. Otherwise the lack of information
and ignorance need to –at the very least– be kept in mind, just as the existence of
irreducible uncertainties.
2.1.2 System Dynamics and Context-Dependent Types of Uncertainty
The second typology deals with context-dependent types of uncertainty in modelling from the
point of view of modellers and decision-makers (see figure 1).
Figure 1: Inherent sources of uncertainty and types of uncertainty in modelling as in (van Asselt
2000, p91)
The uncertainties discussed in this typology are also important for system dynamicists, especially those that manifest themselves to modellers through:
1. technical uncertainties which are uncertainties about model quantities like parameters, input data and initial states. They ’arise from the quality or appropriateness of the data
used to describe the system, from aggregation (temporal and spatial) and simplifications as
well as from lack of data and approximation’ (van Asselt 2000, p88). Technical uncertainties are extremely important especially when System Dynamics modellers make full use of
approximations, soft variables, aggregations and simplifications.
2. methodological uncertainties which are uncertainties about model structure and functional
relationships. Methodological uncertainties are extremely important in the case of System
Dynamics modelling, which in the end is about retroduction/abduction of structures and
relationships that generate behaviour.
3. epistemological uncertainties which are uncertainties about model completeness/adequacy.
Epistemological uncertainties are also important for System Dynamics. However, System
Dynamics is a problem-oriented approach: system dynamicists do not aim to build ’complete
models’, but models which are adequate for understanding and dissolving issues –only those
causal links important for understanding and dissolving issues are therefore included in
System Dynamics models which means that System Dynamics models are never complete.
4. model operation uncertainties like bugs and accumulation of uncertainties propagated through
models.
Because of all these uncertainties related to the real-world (inherent randomness of nature,
human behaviour,. . . ), limits of models (methodological, epistemological and model operation
uncertainties) and data (technical uncertainties), system dynamicists favour robust models, robust
policies and robust systems2 .
2.1.3 System Dynamics Techniques and Tools to Deal with Uncertainties
Often used System Dynamics approaches to deal with uncertainties include: univariate and multivariate sensitivity analysis3 (but almost never comprehensive4 ), validation tests / confidence
building, and most importantly, the specific System Dynamics attitude towards uncertainty and
the qualitative interpretation of the results – system dynamicists seem to recognise that all theories, models, maps and all other analytical tools lead in the end to assisted speculation, not to
certainty: it is ’[n]ever a complete analysis and there is always still a need for further speculation
beyond the insights reached by their use’ (Wolstenholme 1999, p424). Therefore, they put less emphasis on numerical uncertainty and the reduction of uncertainty, and instead, focus on behaviour
mode sensitivity, policy sensitivity and robustness, on preparing decision-makers for uncertainties
and risks through enhancing the understanding of the system behaviour and on building consensus and commitment (to the resulting decision) between the main stakeholders so as to reduce
uncertainties regarding the actual implementation.
Less often used System Dynamics approaches to deal with uncertainties include: comprehensive
sensitivity analysis (of all uncertain parameters, relations and boundaries), extensive formal scenario analysis, event-/cross-impact matrices, directed Automated Nonlinear Tests, exploration of
different (cultural) perspectives5 , ’qualitative uncertainty discovering’6 , hedging oriented methods,
real options7 , fuzzy logic8 , Multi-Attribute Multiple Criteria Decision Analysis, and Exploratory
Modelling.
In spite of these available methods and techniques as well as today’s computing power, there
is probably not enough attention paid to the issue of uncertainty in System Dynamics. One of the
reasons might be that uncertainty adds another level of complexity, on top of dynamic complexity
addressed by System Dynamics. The many different sorts of uncertainty make the situation even
more complex. Another reason might be that there is only a limited set of tools to deal with
uncertainty and risk available in most of the System Dynamics software packages (e.g. sensitivity
or risk analysis), but not the more complicated tools to perform comprehensive sensitivity analyses
or to deal with, for example, deep uncertainty (see subsection 3.2). The thorough exploration of
(deep) uncertainty is also a daunting (almost impossible) task. And finally, computing power was
until very recently insufficient to support extensive exploration of uncertainties while at the same
time keeping the analysis and conclusions transparent. Today however, this seems to be possible.
2 See for example (Richardson and Pugh III 1981), (Lane 2000, p17), (Groessler, Miller, and Winch 2004, p81).
3 To test for numerical sensitivity (different assumptions lead to different numerical results), but especially to test
for behaviour mode sensitivity (different assumptions lead to different patterns of behaviour) and policy sensitivity
(different assumptions lead to different policy recommendations). Although sensitivity testing is performed at
many moments throughout the System Dynamics modelling process (Tank-Nielsen 1980, p203), it is often limited to
testing of parameter sensitivity (not alternative structural assumptions (formulations) or choices of model boundary
(Sterman 1991)).
4 Sterman (2000, p884) argues that ’Comprehensive sensitivity analysis is [. . . ] impossible even when restricted
to parametric sensitivity. Since most models are significantly nonlinear, the impact of combinations of assumptions
may not be the sum of the impacts of the assumptions in isolation. Comprehensive sensitivity analyses would
require testing all combinations of assumptions over their plausible range of uncertainty.’
5 See for example the PRIMA approach in (van Asselt 2000). However, System Dynamics theory ’rarely touches
on practical means of helping participants generate and articulate a richly divergent set of significantly different
views which might then inspire different issues upon which a model building study may centre’ (Lane and Oliva
1998, p224).
6 Qualitatively tracing out the possible structures and dynamics helps to identify uncertainties and risks related
to different structural representations, structural options (Mayo, Callaghan, and Dalton 2001, p269), and leverage
points (Mayo, Callaghan, and Dalton 2001, p269). The process of discovering risks and uncertainties could be
greatly enhanced by means of qualitative System Dynamics.
7 See for example (Ford and Sobek 2003).
8 Tessem and Davidsen (1994), Kunsch and Springael (2006) and others propose the application of fuzzy numbers
as an alternative to probabilistic methods for dealing with uncertainty, vagueness and qualitative values in system
dynamics models, more precisely to deal with vague and imprecise technical parameters, initial conditions and
lookup tables.
3 Combining System Dynamics with Other Method(ologie)s to Deal with Uncertainties
’Uncertainty is inherent in the very nature of making policies for the future and in the
dynamic behavior of the systems being affected. The acceptance of this fact opens up
possibilities for the successful development and use of multi-disciplinary, multi-method
approaches, based on the integration of quantitative and qualitative research, which
recognize and handle the full spectrum of uncertainties’ (Walker and Marchau 2003).
Two such multi-method approaches that combine System Dynamics with other method(ologie)s
to deal with uncertainties, are discussed in this section. Both approaches are illustrated using
adapted versions of the Wonderland Model, a very simple model of demographic, economic and
environmental interactions.
3.1 System Dynamics and Multi-Attribute Multiple Criteria Decision Analysis
One possible way to deal with uncertainties (and related aspects such as robustness, resilience and
flexibility) is to combine System Dynamics and discrete –or Multi-Attribute– Multiple Criteria
Decision Analysis.
3.1.1 Multi-Attribute Multiple Criteria Decision Analysis
Multiple Criteria Decision Analysis (MCDA) is the name adopted by the Operational Research field
concerned with modelling and analysing multi-dimensional issues in order to find 'most appropriate' solutions instead of optimal ones9 .
The domain of MCDA proposes many (primarily) prescriptive approaches to consistently aid
’decision makers’ (individuals or groups) in the decision-making process to take multiple criteria
explicitly into account when exploring decisions that matter. MCDA is not just an algorithm, but
rather an iterative process of identifying, structuring, modelling and exploring. MCDA methods
aid the integration of (objective) measurement/comparison of alternatives on multiple dimensions
and the (subjective) judgment of the relative value of these different dimensions. There are many
different Multiple Criteria Decision Analysis methods and even different Multiple Criteria Decision
Analysis schools, but all share the commonality of structuring decision problems in terms of an
explicitly identified set of criteria.
Multi-Attribute Multiple Criteria Decision Analysis (MA MCDA) methods are used to describe/choose/rank/classify countable sets of alternative policies on multiple criteria, with/without
compensation between criteria, with/without qualitative criteria (e.g. for modes of behaviour,
amplitudes, degrees of lock-in, resilience, recovery times,. . . ). Some reasons why Multi-Attribute
MCDA methods are advocated here instead of continuous –or Multi-Objective– MCDA (MO
MCDA) as implemented in existing System Dynamics software packages, are that:
1. these multi-objective criteria are fully compensatory, which is often undesirable, even unethical;
2. the Multi-Objective weights are too exact;
3. Multi-Objective Multiple Criteria Decision Analysis measures only the present and past
instead of measuring the present, past and future;
4. qualitative aspects –such as modes of behaviour, uncertainties, . . . – cannot be taken into
account in Multi-Objective Multiple Criteria Decision Analysis;
5. System Dynamics is not about sweeping huge parameter spaces in search of optimal policies,
but rather about simulating well thought-out policies.
9 Readers interested in MCDA are referred, among others, to Figueira, Greco, and Ehrgott (2005), Belton and Stewart
(2002), Roy and McCord (1996), Vincke, Gassner, and Roy (1992) and Keeney and Raiffa (1973).
An important analysis necessary in most MCDA processes is the extensive testing of the
robustness or sensitivity of the outcomes –whether the outcomes change when weights, values
or methods are (slightly) changed. This also means that the final outcomes (choice, ranking,
design or classification) of MCDA analyses are not the only end products. What is often even
more important is the critical exploration, reflection and questioning, the increased insight, the
consensus or compromise built, et cetera.
Multiple Criteria Decision Analysis methods are very useful for System Dynamics to describe,
evaluate and choose between policies simulated by means of system dynamics models, taking
multiple (quantitative and qualitative) criteria on multiple time scales from multiple perspectives
into account (see (Pruyt 2006a)).
3.1.2 [SD + MA MCDA] to Deal with Uncertainties
The [SD + MA MCDA] multimethod to deal with uncertainties could then consist of the following
phases:
1. Classic System Dynamics phase: construction of System Dynamics models and simulation
of policies on multiple scenarios (including univariate and multivariate sensitivity analyses
and other techniques to deal with uncertainties in modelling – see the following paragraphs)
for multiple perspectives (if these multiple perspectives cannot easily be reconciled),
2. Multi-Attribute MCDA evaluation table phase: MCDA description of simulated policies on
multiple criteria –including uncertainties as crisp / interval / probability / fuzzy / qualitative
evaluations– and (qualitative) criteria, both –if desired– on multiple time scales and from
different perspectives,
3. Multi-Attribute MCDA selection/elimination phase: Simple veto/robustness thresholds could
then be defined to eliminate unacceptable/inappropriate policies, or specific MCDA methods
could be used, to select acceptable/robust/appropriate policies.
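The selection/elimination phase of step 3 can be sketched very simply. The sketch below is purely illustrative: the policy names, criteria and threshold values are hypothetical, not taken from the paper, and a real application would use one of the MCDA methods referenced above rather than a bare threshold test.

```python
# Illustrative sketch of phase 3: eliminate policies that violate veto thresholds.
# All policy names, criteria and numbers are hypothetical examples.

# Evaluations of simulated policies on multiple criteria (from the SD phase).
policies = {
    "base case": {"output": 2.1, "natural capital": 0.35, "population": 4.2},
    "policy 1":  {"output": 1.9, "natural capital": 0.80, "population": 4.5},
    "policy 2":  {"output": 2.0, "natural capital": 0.55, "population": 4.4},
}

# Veto thresholds: a policy is unacceptable if any criterion falls below these.
vetoes = {"output": 1.5, "natural capital": 0.50}

def acceptable(evaluation, vetoes):
    """A policy survives only if it meets every veto threshold."""
    return all(evaluation[c] >= t for c, t in vetoes.items())

surviving = [p for p, ev in policies.items() if acceptable(ev, vetoes)]
print(surviving)  # the base case fails the natural-capital veto
```

The surviving set can then be passed to a full MCDA ranking or choice procedure.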
Hence, System Dynamics is used to simulate dynamics over time, something Multiple Criteria
Decision Analysis is not appropriate for, and MA MCDA is used to evaluate the (often conflicting)
multi-dimensional information, something System Dynamics is in turn not appropriate for. And
more or less the same tools used in stand-alone System Dynamics to include uncertainties could
be used here as well. Uncertainties related to data, models, scenarios and preferences could be
taken into account in the combined System Dynamics and Multiple Criteria Decision Analyses
and the overall policy sensitivity could be tested by means of the following techniques:
• Sensitivity analyses can be used, during the quantitative System Dynamics phase, to assess
the quantitative impacts of uncertain parameters, initial values, different structural formulations and boundaries, which results in outputs in the shape of intervals, distributions,
fuzzy or qualitative evaluations (like ’very sensitive’) or new criteria. These can be dealt
with in the subsequent Multiple Criteria Decision Analysis as (i) additional criteria to deal
separately with uncertainty, or as (ii) interval, stochastic, fuzzy or qualitative evaluations in
the normal content criteria in Multiple Criteria Decision Analysis methods that are able to
deal with these types of data. The smaller the intervals are, or the less the results vary in
the case of additional criteria, the less sensitive the models and policy recommendations are.
Hence, the additional criteria could be of the minimising type or veto thresholds. In case of
evaluations of the interval, stochastic, fuzzy or qualitative type within the evaluations (without creating additional criteria), the treatment partly depends upon the particular Multiple
Criteria Decision Analysis method chosen, but also upon choices which depend on the issue,
the decision-maker(s) and the particular criteria. In the case of interval evaluations, the
choice could, for example, be made to take the full intervals, or only part of the intervals
(like only the 50% confidence interval or the interval worse than the base case value), or only
the worst values into account.
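One of the treatments mentioned above, taking only the worst values of interval evaluations into account, can be sketched as follows. The policies, criteria and intervals are hypothetical illustrations; both criteria are assumed to be of the "more is better" type.

```python
# Sketch of treating interval evaluations conservatively: for each criterion,
# retain only the worst value of the interval produced by the sensitivity runs.
# Policy names, criteria and numbers are hypothetical.

# Intervals (lo, hi) per criterion, from multivariate sensitivity analyses.
interval_evaluations = {
    "policy 1": {"output": (1.7, 2.1), "natural capital": (0.70, 0.90)},
    "policy 2": {"output": (1.6, 2.4), "natural capital": (0.40, 0.70)},
}

def worst_case(intervals):
    """Both criteria are 'more is better', so the worst value is the lower bound."""
    return {c: lo for c, (lo, hi) in intervals.items()}

for policy, intervals in interval_evaluations.items():
    print(policy, worst_case(intervals))
```

Taking the 50% confidence interval, or only the part of the interval worse than the base case, would simply replace the `worst_case` reduction.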
• Formal scenario analysis and structure analysis can be used, during the System Dynamics
phase, to obtain and represent significantly different futures which result in different sets
of evaluations for the same aspects. These can again be dealt with in the subsequent Multiple Criteria Decision Analysis by means of additional criteria to deal with different sets
of evaluations with different inter-criteria information (structures) or –without additional
criteria– by means of methods that are able to deal with interval evaluations, fuzzy number
evaluations or qualitative assessments instead of point evaluations. Again, the smaller the
intervals are or the less the results vary in the case of additional criteria, the less sensitive
the models and policy recommendations are.
• During the System Dynamics phase, qualitative uncertainty discovering by means of qualitative what-if explorations, qualitative assessment of uncertainties and potential risks and
out-of-the-model-and-box speculation result in qualitative or fuzzy assessments of uncertainties and potential risks. These could again be taken into account in the subsequent Multiple
Criteria Decision Analysis phase by means of fuzzy or qualitative evaluations or additional
criteria.
• In the Multiple Criteria Decision Analysis, different inter-criteria information (structures)
could also be tested –by means of different weight sets, weight intervals, fuzzy weights, intercriteria orders, et cetera– to assess the influence of different preference profiles or inter-criteria
information structures.
• The robustness of the policy recommendations can only be checked by taking into account
the whole decision-aiding multi-methodology –thus both the System Dynamics and Multiple
Criteria Decision Analysis phases which together lead to these policy recommendations. The
a-posteriori question could then be asked as to what changes would be needed to modify
the policy recommendations arrived at using the System Dynamics and Multiple Criteria
Decision Analysis models. The more change required to switch policy recommendations,
the higher the robustness of the policy recommendation. In the domain of Multiple Criteria
Decision Analysis, the term robustness is also used in this sense and extensions of some
methods exist to explore this robustness.
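A minimal sketch of such an a-posteriori robustness check on the MCDA side is given below: perturb the weight set and observe whether the top-ranked policy changes. A simple weighted sum is used purely for illustration (most MCDA methods are more elaborate), and all scores and weights are hypothetical.

```python
# Sketch of an a-posteriori robustness check on the MCDA phase: perturb the
# weight set and see whether the recommended (top-ranked) policy changes.
# A plain weighted sum stands in for a full MCDA method; numbers are made up.

scores = {  # normalised evaluations per criterion
    "policy 1": {"output": 0.6, "natural capital": 0.9},
    "policy 2": {"output": 0.8, "natural capital": 0.5},
}

def best_policy(weights):
    value = lambda p: sum(weights[c] * s for c, s in scores[p].items())
    return max(scores, key=value)

base_weights = {"output": 0.5, "natural capital": 0.5}
print(best_policy(base_weights))

# Perturb the weights by +/- 0.1 and record the winners: more than one
# distinct winner would signal a non-robust recommendation.
winners = set()
for dw in (-0.1, 0.0, 0.1):
    w = {"output": 0.5 + dw, "natural capital": 0.5 - dw}
    winners.add(best_policy(w))
print(winners)
```

The size of the perturbation needed to switch recommendations is then a direct measure of robustness in the MCDA sense described above.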
This means that uncertainty related to the data, models, scenarios and preferences could be
assessed and evaluated in the combined System Dynamics and Multiple Criteria Decision Analyses
and that the overall policy sensitivity could be tested. If the policy recommendations do not change
much, or if the strategies/structures remain good, in spite of varying data, model formulations,
scenarios and preferences, then the models and policy recommendations are robust (models and
policy recommendations are not sensitive to varying data and model structures).
3.1.3 Example of the Combination of System Dynamics and Multi-Attribute Multiple Criteria Decision Analysis to Deal with Uncertainties
The [SD + MA MCDA] approach to deal with several uncertainties will now be illustrated using
a slightly adapted version of the Wonderland model as published in (Sanderson 1995), explored in
(Milik, Prskawetz, Feichtinger, and Sanderson 1996) and replicated in Vensim by Tom Fiddaman10 .
The main reason why this particular model has been used here to illustrate the combination of
System Dynamics and Multi-Attribute Multiple Criteria Decision Analysis is that another version
of the model was used by (Lempert, Popper, and Bankes 2003) to illustrate the combination of
System Dynamics and Exploratory Modelling which will be referred to and discussed in subsection
3.2.
The Wonderland model is a small and highly aggregated System Dynamics model with many
exogenous variables (coefficients). It is most certainly not unproblematic and it could easily be
criticised both from a System Dynamics point of view and from applied (demographic, economic
and environmental) points of view. However, it suits the purpose of this paper, illustrating how
10 See Tom Fiddaman's web pages: http://www.metasd.com/models/Library/Environment/Wonderland/.
Figure 2: Slightly adapted Wonderland model (Sanderson 1995) (Milik et al. 1996)
one could deal with uncertainties. Figure 2 shows the three sectors (population, environment and
economy) of the adapted model. And figure 3 shows three different dynamics/scenarios that could
be generated –the dream scenario, the escape scenario and the nightmare scenario– when changing
just 2 variables11 .
Two small changes have been made to the model:
1. A policy lever –the exogenous variable additional investment in clean technology–
and several new links (in orange) have been added.
The additional investment in clean technology directly influences two variables:
(a) the Pollution Intensity Change Rate = Pollution per Unit Output *
(Pollution Change Rate + additional investment in clean technology)
and thus indirectly the Pollution per Unit Output stock which decreases faster in
case of additional investments in clean technology.
Three policies are simulated: a base case policy without any additional investments, an
early investment policy (policy 1) with additional investments of -0.01 –the additional
investment is expressed directly in terms of the Pollution Intensity Change Rate
which is why this value is negative– between time periods 0 and 20, and a later investment policy (policy 2) with additional investments of -0.01 between time periods 30
and 50.
(b) and directly and indirectly the Output Growth Rate =
Growth Rate - (Growth Rate + Contraction Rate)
* (1 - Effective Natural Capital)^Contraction Nonlin
+ additional investment in clean technology / Pollution per Unit Output
11 Dream: abatement coefficient = 2 & Pollution change rate = -0.03 ; Escape: abatement coefficient = 100 &
Pollution change rate = -0.01 ; Nightmare: abatement coefficient = 2 & Pollution change rate = -0.01.
Figure 3: Three different behaviours with the slightly adapted Wonderland model: sustained
growth in the dream scenario, collapse in the nightmare scenario and boom and partial bust
followed by recovery in the escape scenario
which decreases with additional investments in clean technology (which are expressed negatively), all the more when the Pollution per Unit Output decreases from 1 to 0. This
corresponds to increasing marginal investment costs for ever cleaner technology.
2. Two different structural forms / perspectives are modelled: an environmentalist’s perspective
and an industrialist’s perspective. In the environmentalist’s perspective, the Cleansing Flow
is the non-linear function as in the original model:
Cleansing Rate / Regen Coeff * Effective Natural Capital^Cleansing Coeff
And in the industrialist’s view, the Cleansing Flow remains constant –no matter what
happens– and equals the
constant cleansing flow for industrialists = 10 PU/year.
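The two modified rate equations above can be transcribed into code as follows. This is only a sketch of the equations as listed, not the actual Vensim implementation; Effective Natural Capital is supplied as an input rather than computed from the rest of the model.

```python
# Sketch of the two rate equations modified by the policy lever, transcribed
# from the listing above. The example values match the paper's initial values
# and the early-investment policy; effective natural capital is a placeholder.

def pollution_intensity_change_rate(pollution_per_unit_output,
                                    pollution_change_rate,
                                    additional_investment):
    # The lever (a negative number) makes Pollution per Unit Output fall faster.
    return pollution_per_unit_output * (pollution_change_rate + additional_investment)

def output_growth_rate(growth_rate, contraction_rate, contraction_nonlin,
                       effective_natural_capital,
                       additional_investment, pollution_per_unit_output):
    # The lever lowers growth; the cost term grows as Pollution per Unit Output
    # falls towards 0 (increasing marginal investment costs).
    return (growth_rate
            - (growth_rate + contraction_rate)
              * (1 - effective_natural_capital) ** contraction_nonlin
            + additional_investment / pollution_per_unit_output)

# Early-investment policy (lever = -0.01) at the initial pollution intensity:
rate = pollution_intensity_change_rate(1.0, -0.01, -0.01)
print(rate)  # -0.02: pollution intensity falls twice as fast as in the base case
```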
All parameters in the multivariate sensitivity analyses (Latin Hypercube, 2000 times, seed
1234) follow random uniform distributions ±10% around the initial values (except for the Abatement Coeff and the Pollution Change Rate):

Birth Coeff 1         = random uniform (0.0360, 0.0440)   ; initial value = 0.040
Birth Coeff 2         = random uniform (1.2375, 1.5125)   ; initial value = 1.375
Birth Income Effect   = random uniform (0.1440, 0.1760)   ; initial value = 0.160
Cleansing Coeff       = random uniform (1.8000, 2.2000)   ; initial value = 2.000
Cleansing Rate        = random uniform (0.9000, 1.1000)   ; initial value = 1.000
Contraction Nonlin    = random uniform (1.8000, 2.2000)   ; initial value = 2.000
Contraction Rate      = random uniform (0.0900, 0.1100)   ; initial value = 0.100
Control Effect        = random uniform (0.0180, 0.0220)   ; initial value = 0.020
Control Nonlin        = random uniform (1.8000, 2.2000)   ; initial value = 2.000
Control Scale         = random uniform (0.4500, 0.5500)   ; initial value = 0.500
Death Coeff 1         = random uniform (0.0090, 0.0110)   ; initial value = 0.010
Death Coeff 2         = random uniform (2.2500, 2.7500)   ; initial value = 2.500
Death Income Effect   = random uniform (0.1620, 0.1980)   ; initial value = 0.180
Envir Death Nonlin    = random uniform (13.5000, 16.5000) ; initial value = 15.000
Envir Death Scale     = random uniform (3.6000, 4.4000)   ; initial value = 4.000
Growth Rate           = random uniform (0.0180, 0.0220)   ; initial value = 0.020
Normal Regen Rate     = random uniform (0.6750, 0.8250)   ; initial value = 0.750
Regen Coeff           = random uniform (0.0900, 0.1100)   ; initial value = 0.100
Pollution Change Rate = random uniform (-0.0300, -0.0100)
Abatement Coeff       = random uniform (1.0000, 3.0000)
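The sampling scheme described above (Latin Hypercube, 2000 runs, seed 1234, uniform ±10% bands) can be sketched as follows; this is a generic pure-Python illustration, not the sampling code of any particular System Dynamics package:

```python
import random

def pm10(value):
    # Uniform ±10% band around an initial value.
    return (0.9 * value, 1.1 * value)

def latin_hypercube(bounds, n_samples, seed=1234):
    # For each parameter, split its range into n_samples equal strata,
    # draw one uniform value per stratum, and shuffle the strata
    # independently per parameter to decorrelate the runs.
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in bounds.items():
        width = (hi - lo) / n_samples
        column = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(column)
        columns[name] = column
    return [{name: columns[name][i] for name in bounds} for i in range(n_samples)]
```

For example, `latin_hypercube({"Growth Rate": pm10(0.020), "Cleansing Rate": pm10(1.000)}, 2000)` yields 2000 parameter dictionaries in which every stratum of every range is sampled exactly once, unlike plain Monte Carlo sampling.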
The choice of the uncertainty bounds in this example is arbitrary since the model is extremely
aggregated (many variables do not have direct real-world counterparts). Other parameter ranges
have also been simulated12, which yields different numerical results and also minor changes in
the aggregated modes of behaviour. Nevertheless, these uncertainty bounds need to be relatively
narrow (especially when compared to those used in Exploratory Modelling –see subsection 3.2),
because the multivariate output intervals are taken along in an aggregated way in the MA Multiple
Criteria Decision Analysis and would be practically unusable if they were too broad. The broader
they are, the more conflicting information they contain, and the more difficult it will be to reach
conclusions using MA MCDA methods. Some of these multivariate outputs are shown in figure
4. The left-hand side of the figure shows the evolution of the Population and Natural Capital
from the industrialist point of view, whereas the right-hand side shows the same variables from
the environmentalist point of view.
Figure 5 shows a possible MA MCDA evaluation table –very similar to a score card– which
contains more information than could be dealt with without formal multi-dimensional evaluation
tools. Evaluations are shown for two different perspectives (labelled the environmentalist perspective and the industrialist perspective), three different moments in time (times 50, 150 and
450), and 5 different variables (Output Per Capita, Gross World Product, Natural Capital, Pollution Per Output, and Population). The modes of behaviour of these variables at these moments in
time are also represented qualitatively13. The quantitative evaluations of these variables for these
perspectives at these different moments in time contain the 50% uncertainty intervals.
The variables Output Per Capita and Gross World Product are assumed to be important for
industrialists, and the variables Natural Capital, Pollution Per Output and Population are assumed
to be important for environmentalists.
The labels ’0’, ’1’ and ’2’ refer to the simulated policies as explained above. The cells highlighted
in green show the preferred (and almost equally preferred) policies for the different perspectives:
the industrialist prefers policy 0 (doing nothing), whereas the environmentalist prefers policy 1
or 2. If acceptability thresholds are used, then policy 1 might be an acceptable policy for both
industrialists and environmentalists.
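The reading of such an evaluation table can be mimicked with a toy weighted evaluation; the numbers, weights and threshold below are hypothetical and only illustrate the mechanics of perspective-dependent preferences and acceptability thresholds, not the paper's actual MA MCDA computation:

```python
# Hypothetical scores per criterion for policies 0, 1 and 2 (higher is better).
scores = {
    "Gross World Product": [0.9, 0.7, 0.5],
    "Natural Capital":     [0.2, 0.8, 0.9],
}
# Each perspective weights the criteria differently.
weights = {
    "industrialist":    {"Gross World Product": 1.0, "Natural Capital": 0.0},
    "environmentalist": {"Gross World Product": 0.0, "Natural Capital": 1.0},
}

def preferred_policy(perspective, threshold=None):
    # A policy scoring below the acceptability threshold on any weighted
    # criterion is excluded; among the rest, the highest weighted sum wins.
    w = weights[perspective]
    best, best_total = None, float("-inf")
    for p in range(3):
        if threshold is not None and any(
                w[c] > 0 and scores[c][p] < threshold for c in scores):
            continue
        total = sum(w[c] * scores[c][p] for c in scores)
        if total > best_total:
            best, best_total = p, total
    return best
```

With these illustrative numbers the acceptable sets at threshold 0.6 are {0, 1} for the industrialist and {1, 2} for the environmentalist, so only policy 1 is acceptable to both perspectives; this is the kind of compromise reading described above.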
This analysis could of course be extended with more (qualitative) criteria, perspectives (different models and/or preference profiles), base runs, more refined analyses of multivariate sensitivity
analysis results, relative or absolute comparisons and evaluations, different Multiple Criteria Decision Analysis methods, et cetera. However, the goal of this paper is simply to illustrate the use
of a multi-dimensional evaluation tool to explicitly evaluate uncertainties in System Dynamics.
12 For example: Abatement Coeff = random uniform (1.0000, 100.0000) and Contraction Rate = random uniform
(0.0900, 0.1100).
13 The first group of symbols represents the behaviour of the 50% upper bound, and the second group of symbols
represents the behaviour of the 50% lower bound. The groups of symbols could be: + + + ; - - - ; 0 + 0 ; + 0 - ;
0 - - ; - + 0 ; + 0 - 0 + ; + 0 + and so on, where + + + stands for exponential increase, + 0 - for growth followed
by a maximum peak and decline, and so on. Visual representations or other notations might be clearer to readers
not used to these notations.
Figure 4: The multivariate dynamics of the Population (topmost) and Natural Capital (bottommost) for the industrialist and the environmentalist perspectives
3.1.4 [SD+MA MCDA] – Some Advantages and Disadvantages
Combining System Dynamics and Multi-Attribute Multiple Criteria Decision Analysis has the following advantages:
• Both quantitative criteria and qualitative criteria can be evaluated. In the example, only
the mode of behaviour –of all possible qualitative criteria– was taken into account. However,
other qualitative criteria –such as the degree of lock-in of the policies– could have been
evaluated too14 .
• Policies can be evaluated on different criteria and different time scales, without full discounting and/or aggregation and thus full compensation.
• Different points of view on the same issue can be taken into account and evaluated simultaneously, by means of different models, different preference profiles and acceptability thresholds.
• Compromise solutions could also be searched for instead of consensual solutions as in traditional System Dynamics.
14 See (Pruyt 2006a) for another example.
Figure 5: A Multi-Attribute Multiple Criteria Decision Analysis evaluation table summarising
the values of the 50% uncertainty bounds and the behaviour of 5 criteria for 2 perspectives at 3
different moments in time
Combining System Dynamics and MA Multiple Criteria Decision Analysis for dealing with
uncertainties has the following disadvantages:
• Policies are compared over the entire uncertainty interval (uncertainty bounds) instead
of per individual plausible future, as in Exploratory Modelling (see subsection 3.2).
• The uncertainty ranges that could be explored are therefore necessarily rather narrow, at
least compared to the ranges that could be explored using Exploratory Modelling.
Figure 6: Iterative and interactive man-machine cooperation – adapted from figure 3.1. of (Lempert et al. 2003, p64)
3.2 System Dynamics and Exploratory Modelling in Search of Robustness

3.2.1 Exploratory Modelling
The second multi-method venue for dealing with uncertainties illustrated here is the combination
of System Dynamics and Exploratory Modelling15 . Exploratory Modelling is focussed specifically
on deep uncertainties to explore the potential for surprise. Deep uncertainty is defined as situations
’where analysts do not know, or the parties to a decision cannot agree on
• the appropriate conceptual models that describe the relationships among the key driving
forces that will shape the long-term future [e.g. different drivers and underlying structures
than today],
• the probability distributions used to represent uncertainty about key variables and parameters in the mathematical representations of these conceptual models, and/or
• how to value the desirability of alternative outcomes’ (Lempert, Popper, and Bankes 2003).
Figure 6 illustrates the necessary iterative and interactive man-computer cooperation required
in Exploratory Modelling: important evaluations (e.g. when to stop the search), decisions (e.g.
which policies to model and test for robustness, which uncertainty bounds to explore) and choices
(e.g. which multi-objective functions to use) need to be made and remade by human beings to
direct the automated computer search. First, computer models (called scenario generators) are
constructed and used to generate a huge number of scenarios –each combining a set
of uncertain parameters/relationships/structures representing how the world works with a policy
to influence that world– which are grouped with similar scenarios into scenario ensembles. Note
that System Dynamics models used as scenario generators are used in a totally different way than
in exemplary System Dynamics modelling: the assumptions built into the SD model structure,
relationships and parameters are opened up as much as possible to generate a multiplicity of
plausible long-term futures. It should also be clear that Exploratory Modelling does not necessarily
require –but could make use of– System Dynamics models as scenario generators.
Then, exploratory modelling software is used to visualise these scenario ensembles and to
search this space for robust policies (see arrows (a) and (b) in figure 6) using several multiobjective functions (to represent and test the robustness from different perspectives) and decision
rules (e.g. relative minimum regret) that need to be chosen by the decision makers. If a policy
is found that seems to be robust given the initial set of policies and future states of the world,
15 For more on Exploratory Modelling, see (Bankes 1993), (Lempert and Schlesinger 2000) (Lempert, Groves, Popper, and Bankes 2006) and especially RAND publication MR1626 (Lempert, Popper, and Bankes 2003) (available
following this link to the RAND web site) on which this subsection is mainly based.
then the computer is used once more to test whether the policy is really robust by searching
for plausible futures –combinations of uncertain parameters/relationships/structures representing
how the world works– in which the policy would perform so badly that the hypothesis that the
policy is robust could be falsified (arrow (d)). If none of the initially tested policies is robust, new
–more refined– policies need to be defined (arrow (c)) and tested until a robust policy is found
of which the robustness hypothesis cannot be falsified. The performance of robust policies could
also –as an additional check– be tested against surprising events (arrow (d)).
The semi-automated search could be visualised in different ways, for example by means of
robustness maps (see figures 9 and 10) or else by means of classification trees as in (Agusdinata
and Dittmar 2007).
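The relative-minimum-regret decision rule mentioned above can be sketched generically: the regret of a policy in a given future is the gap between its performance and the best performance any candidate policy achieves in that same future, and the (relatively) robust policy minimises its worst-case regret across all futures. An illustrative sketch, not Lempert et al.'s actual implementation:

```python
def max_regrets(performance):
    # performance[p][f]: payoff of policy p in future f (higher is better).
    n_futures = len(performance[0])
    best = [max(policy[f] for policy in performance) for f in range(n_futures)]
    return [max(best[f] - policy[f] for f in range(n_futures))
            for policy in performance]

def most_robust(performance):
    # Index of the policy with the smallest maximum regret.
    regrets = max_regrets(performance)
    return regrets.index(min(regrets))
```

A policy that is mediocre everywhere can beat policies that excel in some futures but fail badly in others: for `[[10, 0], [6, 6], [0, 10]]` the middle policy wins with a maximum regret of 4, while both specialised policies have a maximum regret of 10.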
3.2.2 [System Dynamics + Exploratory Modelling] to Deal with Uncertainties
Until recently, desktop computing power was not sufficient for Exploratory Modelling, but nowadays it suffices, and additional back-end 'press-the-button' System Dynamics software packages could be developed to facilitate the systematic search for robust policies. Such a
software package might then allow users to:
1. construct different System Dynamics models;
2. simultaneously load the (different) System Dynamics model(s);
3. define different forms of uncertainty: parameter, structural, event uncertainty, et cetera (see
section 2);
4. define different key moments in time (or discounting rules) and preference profiles in terms
of veto thresholds or other decision rules (e.g. minimum regret for relative robustness);
5. define and test different policies;
6. (visually) explore the acceptability or robustness of the different policies at these key moments in time for the different preference profiles.
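The six steps above could be tied together in a driver loop of roughly the following shape; every name and signature here is a hypothetical sketch of what such a module might expose, not an existing package:

```python
def explore(models, sample_future, policies, measures, n_runs=100):
    # Simulate every (model, sampled future, policy) combination and record
    # all measure values, so robustness can afterwards be evaluated per
    # preference profile (steps 2-6 of the list above).
    results = []
    for m, model in enumerate(models):
        for run in range(n_runs):
            future = sample_future(run)            # one plausible future
            for p, policy in enumerate(policies):  # one candidate policy
                outcome = model(future, policy)
                results.append((m, run, p, [f(outcome) for f in measures]))
    return results
```

A trivial stand-in model shows the bookkeeping: `explore([lambda f, p: f + p], lambda run: run, [0, 1], [lambda x: x], n_runs=3)` returns six records, one per future-policy pair.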
3.2.3 Example of the Combination of System Dynamics and Exploratory Modelling to Deal with Deep Uncertainties
The combination of System Dynamics and Exploratory Modelling to deal with deep uncertainties
will now be illustrated very briefly by means of an example published in (Lempert, Popper,
and Bankes 2003) on which this section is almost entirely based. It should be stressed that this
subsection is not original in that it summarises the work of others. Interested readers are therefore
explicitly referred to chapter 5 and appendices A and B of (Lempert, Popper, and Bankes
2003) for a fully elaborated account. In this RAND report, the authors (ab)use a slightly further
adapted version of the Wonderland model to illustrate Exploratory Modelling.
Figure 7 supports their argument that uncertainty ranges are most often defined too narrowly:
it shows the landscape of plausible futures in terms of only two uncertain parameters, the decoupling rate and the economic growth rate. Very similar pairs of values for these two variables
already result in three completely different scenarios –the Conventional World Scenario, the Great
Transition Scenario and the Barbarization Scenario. Comparing these simulated pairs of values
with historical pairs of values shows that the uncertainty ranges need to be defined much more widely,
which might result in even more divergent long-term futures.
Lempert, Popper, and Bankes (2003) modified the Wonderland model as described in (Herbert
and Leeves 1998)16 ’to improve the original model’s ability to serve as the scenario generator’: (i)
they split the one Wonderland world into a developed and a developing world, coupled by their
16 See http://journal-ci.csse.monash.edu.au//ci/vol06/herbert/herbert.html
or http://www.complexity.org.au/ci/vol06/herbert/herbert.html.
Figure 7: Landscape of plausible futures – Source: (Lempert et al. 2003, p92) RANDMR1626-5.1
(Reprinted with permission from the RAND Corporation, Santa Monica, CA).
economic growth rates, decoupling rates, and the state of their carrying capacities, and they added
(ii) a delay for policy interventions to take effect, (iii) four multi-objective measures that value the
output of alternative scenarios from different perspectives, and (iv) a representation of the way in
which future generations may respond to concerns about sustainability.
The uncertain parameters used in the scenario generator are displayed in figure 8. The combined simulation of these uncertain parameter values (which constitute plausible futures) and –in
this case– 16 alternative policies generates a huge number of scenarios.
Figure 9 shows the regret of a particular policy (the Slight Speed-Up Policy) compared to
the 15 alternative policies across a range of plausible futures, using two different multi-objective
measures (perspectives). It shows that the policy is –relative to the other policies– robust from
one multi-objective perspective (see the left-hand graph, representing the robustness in terms
of the North Quasi-HDI measure), but not from another multi-objective perspective (see the right-hand graph, representing the robustness in terms of the World Green measure). A policy needs to
be robust using different measures; otherwise alternative policies need to be designed, simulated
and compared, until a policy is found that seems to be robust.
In the example elaborated by Lempert, Popper, and Bankes (2003), none of the 16 simple
policies is robust. Hence they design and test new –more refined, that is to say, adaptive and
flexible– policies. Since one of these new policies seems to be robust, the computer and optimisation
techniques are used to find breaking scenarios to test whether the robustness hypothesis can be
falsified.
Finally, it is possible to test the robust policy (or policies) found for robustness given specific
surprises. Figure 10 shows the performance of a robust adaptive and flexible policy –labelled the
Safety Valve policy– for a surprise-free case and for three surprises occurring in 2030 (a technology
surprise, a population surprise and a values surprise). This additional analysis led the authors to
conclude that this policy remains rather robust in spite of the fact that, in the case of the
technological surprise, there are regions where this policy's relative performance degrades compared
to other alternative policies.
Figure 8: Uncertain parameters in the Wonderland Scenario Generator – Source: (Lempert et al.
2003, p160, table A.1)
Figure 9: Robustness performance of a policy using two measures – Source: (Lempert et al. 2003,
p97) RANDMR1626-5.4 and (Lempert et al. 2003, p99) RANDMR1626-5.5 (Reprinted with
permission from the RAND Corporation, Santa Monica, CA).
Figure 10: Performance of a strategy over a range of surprising futures – (Lempert et al. 2003,
p123) RANDMR1626-5.18 (Reprinted with permission from the RAND Corporation, Santa Monica, CA).
3.2.4 System Dynamics + Exploratory Modelling – Some Advantages and Disadvantages
Combining System Dynamics and Exploratory Modelling has the following advantages:
• It leads to the systematic search for robust policies.
• Very broad uncertainty bounds and different structural uncertainties can be explored.
• The policies are compared pairwise and in a disaggregated way in each of the different futures,
as opposed to the combination of System Dynamics and Multi-Attribute Multiple Criteria
Decision Analysis, which leads to the aggregated comparison of policies over all simulated
futures.
• Several different models –and thus different perspectives on the system and uncertainties
regarding the real-world system– can be simulated and thus taken into account.
• Different multi-objective measures –hence different preference profiles and thus different
perspectives on what matters– can be taken into account.
• Robust (e.g. relative minimal regret) compromises can be searched for when multiple stakeholders with very different perspectives and preference profiles are involved, instead of forcing
consensual solutions as in traditional System Dynamics.
Combining System Dynamics and Exploratory Modelling has the following disadvantages:
• The discounted multi-objective measures are fully compensational over the different criteria
and over the simulation time, which might lead to undesirable trade-offs and/or ethical –for
example intergenerational– problems.
• Qualitative criteria –for example those related to the flexibility or the modes of behaviour–
cannot be taken into account.
• It adds another layer of complexity, takes more time to set up, and requires quite some
computing time and computing power.
• Currently there are no user-friendly software packages that could be combined with System
Dynamics software to perform these analyses in an easy, intelligible and transparent way.
4 Conclusions / Future Research
One of the main messages of this paper is that stand-alone System Dynamics is not sufficient to
deal with dynamically complex issues characterised by important or many uncertainties, but that
the combination of System Dynamics with other method(ologie)s might be appropriate to explore
the issues and the impact of the uncertainties.
System Dynamicists deal with uncertainties in very different ways. The most common System
Dynamics way to deal with uncertainties –the attitude that uncertainties are omnipresent and that
qualitative interpretations are therefore all that matters– is very specific to System Dynamics,
and is often misunderstood by outsiders of the field. Except for some univariate and multivariate
sensitivity analyses –which are very often, without proper evaluation, too poor an exploration of
uncertainty– it looks as though uncertainties are not really explored or explicitly taken into account
in SD, which means that the SD approach might seem too deterministic and therefore inappropriate
for dealing with issues characterised by ever more important and abundant uncertainties.
However, this is not necessarily the case: uncertainties can be explored by System Dynamics
if complementary methods are used and full use is made of present-day computing power. The
combination of System Dynamics with two of these complementary methods has been briefly illustrated in this paper: namely Multi-Attribute Multiple Criteria Decision Analysis and Exploratory
Modelling.
But… this paper is merely an illustration. Desirable near-term actions would therefore be:
• to apply these multimethods to several dynamically complex real-world issues that are particularly characterised by important or many uncertainties.
• and to develop 3 ’press-the-button’ SD-software-modules:
– a comprehensive automated (univariate & multivariate) sensitivity testing module;
– a Multi-Attribute Multiple Criteria Decision Analysis module
∗ to visualise simulation results (including uncertainty bounds and qualitative properties) of different policies on multiple criteria and moments in time, and
∗ to guide users through the selection/application/interpretation of appropriate Multi-Attribute Multiple Criteria Decision Analysis methods.
– an Exploratory Modelling module to
∗ simultaneously load different System Dynamics models
∗ define different uncertainties (parameters, events, et cetera), key moments in time,
robustness thresholds and policies, and
∗ iteratively simulate and (visually) explore/evaluate the robustness of policies for
different uncertainties.
These proposed modules generate very different insights. Policies are evaluated and compared
simultaneously –in the [SD + MA MCDA] multimethod– on all plausible futures (combinations
of uncertainties) without falling back on fully compensational multi-objective functions and decision rules. Moreover, this multimethod is appropriate for dealing with different quantitative
and qualitative criteria, different time scales (without the necessity of discounting), and different
perspectives. However, this multimethod is only appropriate for dealing with relatively (compared to the combination of System Dynamics and Exploratory Modelling) narrow uncertainty
bounds. And it does not provide –unless these scenarios are explicitly simulated– the possibility
to automatically compare individual trajectories of different policies for specific futures.
In the [SD + Exploratory Modelling] multimethod, policies are evaluated and compared for
each of the futures individually using fully compensational multi-objective functions and specific
decision rules (e.g. relative minimal regret). This multimethod is also appropriate for dealing with
different perspectives, with very broad uncertainty bounds and structural uncertainty. However,
it is not appropriate for dealing with qualitative criteria and different time scales unless they
are cardinalised and discounted. This multimethod automatically compares –contrary to the
System Dynamics + Multiple Criteria Decision Analysis multimethod– the discounted values of
the individual trajectories of alternative policies. However, it does not provide a comparison
and view on these individual trajectories –unless separately simulated–, only a discounted and
aggregated overview of the relative robustness of these policies, because there are simply too many
uncertainties.
This paper merely tried to illustrate the possibilities and differences of both multimethod
approaches to deal with uncertainties. One of the main conclusions is that both multi-method
approaches offer the possibility to explicitly explore and evaluate the impact of uncertainties on
different policies. This is good news, especially when confronted with ever more complex issues
characterised by ever more and more important uncertainties.
References
Agusdinata, D. and L. Dittmar (2007, April). System-of-systems perspective and exploratory modeling to support the design of adaptive policy for reducing carbon emission. In Proceedings of the 2007 IEEE SMC International Conference on System of Systems Engineering, pp. 1–8.
Bankes, S. (1993, May-June). Exploratory modeling for policy analysis. Operations Research 41(3), 435–449.
Belton, V. and T. Stewart (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston, MA: Kluwer Academic Press.
Coyle, G. (1998). The practice of system dynamics: milestones, lessons and ideas from 30 years experience. System Dynamics Review 14(4), 343–365.
Figueira, J., S. Greco, and M. Ehrgott (Eds.) (2005). Multiple Criteria Decision Analysis: State of the Art Surveys. International Series in Operations Research and Management Science. New York: Springer. 1045 p.
Ford, D. and D. Sobek (2003). Modeling real options to switch among alternatives in product development. In Proceedings of the 2003 International Conference of the System Dynamics Society.
Groessler, A., M. Miller, and G. Winch (2004). Perspectives on rationality in system dynamics – a workshop report and open research questions. System Dynamics Review 20(1), 75–87.
Herbert, R. and G. Leeves (1998). Troubles in wonderland. Complexity International 6.
Keeney, R. L. and H. Raiffa (1973). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge: Cambridge University Press.
Kunsch, P. and J. Springael (2006). Simulation with system dynamics and fuzzy reasoning of a tax policy to reduce CO2 emissions in the residential sector. European Journal of Operational Research. doi:10.1016/j.ejor.2006.05.048.
Lane, D. (2000). Diagramming conventions in system dynamics. Journal of the Operational Research Society 51(2), 241–245.
Lane, D. and R. Oliva (1998, May). The greater whole: towards a synthesis of system dynamics and soft systems methodology. European Journal of Operational Research 107(1), 214–235.
Lempert, R., D. Groves, S. Popper, and S. Bankes (2006, April). A general, analytic method for generating robust strategies and narrative scenarios. Management Science 52(4), 514–528.
Lempert, R., S. Popper, and S. Bankes (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. RAND report MR-1626, The RAND Pardee Center, Santa Monica, CA.
Lempert, R. and M. Schlesinger (2000, June). Robust strategies for abating climate change. Climatic Change 45(3–4), 387–401.
Mayo, D., M. Callaghan, and W. Dalton (2001). Aiming for restructuring success at London Underground. System Dynamics Review 17(3), 261–289.
Meadows, D. and J. Robinson (1985). The Electronic Oracle: Computer Models and Social Decisions. Chichester: John Wiley & Sons.
Milik, A., A. Prskawetz, G. Feichtinger, and W. Sanderson (1996). Slow-fast dynamics in wonderland. Environmental Modeling and Assessment 1, 3–17.
Nuthmann, C. (1994). Using human judgment in system dynamics models of social systems. System Dynamics Review 10(1), 1–27.
Pruyt, E. (2006a). System dynamics and decision-making in the context of dynamically complex multi-dimensional societal issues. In Proceedings of the 2006 Conference of the System Dynamics Society, Nijmegen. System Dynamics Society.
Pruyt, E. (2006b). What is system dynamics? A paradigmatic inquiry. In Proceedings of the 2006 Conference of the System Dynamics Society, Nijmegen. System Dynamics Society.
Richardson, G. and A. Pugh III (1981). Introduction to System Dynamics Modeling. Portland: Productivity Press. Previously published by MIT Press.
Roy, B. and M. McCord (1996). Multicriteria Methodology for Decision Aiding. Nonconvex Optimization and Its Applications. Boston: Kluwer Academic Publishers.
Sanderson, W. (1995). Predictability, complexity, and catastrophe in a collapsible model of population, development, and environmental interactions. Mathematical Population Studies 5(3), 259–279.
Sterman, J. (1991). A skeptic's guide to computer models. In Managing a Nation: The Microcomputer Software Catalog, pp. 209–229. Boulder, CO: Westview Press.
Sterman, J. (2000). Business Dynamics: Systems Thinking and Modeling for a Complex World. Boston: Irwin/McGraw-Hill. 982 p.
Tank-Nielsen, C. (1980). Sensitivity analysis in system dynamics. In Elements of the System Dynamics Method, pp. 185–204. MIT Press/Wright-Allen Series in System Dynamics. Cambridge, MA: The MIT Press.
Tessem, B. and P. Davidsen (1994). Fuzzy system dynamics – an approach to vague and qualitative variables in simulation. System Dynamics Review 10(1), 49–62.
van Asselt, M. (2000). Perspectives on Uncertainty and Risk: The PRIMA Approach to Decision Support. Dordrecht: Kluwer Academic.
Vincke, P., M. Gassner, and B. Roy (1992). Multicriteria Decision-Aid. Chichester: John Wiley and Sons.
Walker, W. and V. Marchau (2003). Dealing with uncertainty in policy analysis, and policymaking. Integrated Assessment 4(1), 1–4.
Wolstenholme, E. (1999). Qualitative vs quantitative modelling: the evolving balance. Journal of the Operational Research Society 50(4), 422–428.