Varying the explanatory span: scientific explanation
in computer simulations
Juan M. Durán
Abstract This article aims to develop an account of scientific explanation
for the results of computer simulations. Two questions are answered: what is
the explanatory relation for computer simulations? and what kind of epistemic
gain should be expected? For several reasons tailored to the benefits and needs
of computer simulations, these questions are better answered within the unificationist theory. I submit, however, that some modifications must be made
in order to frame a full account of computer simulations. I also argue that our
understanding of the results of a simulation goes beyond the reductive unificationist version. Several philosophical issues related to my view are discussed.
Keywords Computer simulations · scientific explanation · unification ·
scientific models
1 Introduction
It is often claimed that computer simulations provide genuine instances of
scientific understanding. Recent philosophical debates have focused on the reliability of simulations as novel methods in scientific practice. General notions
like ‘explanation,’ ‘prediction,’ ‘confirmation’ and the like are widely accepted
and used as the epistemic groundwork in discussions of simulations. However,
these notions are normally not analyzed in detail, but rather embedded into
other philosophical contexts and discussions. Most prominently, they are found
either in discussions on the ‘experimental’ side of computer simulation ([20],
[17], [18]), or in discussions about its ‘modeling’ side ([13] and [8]).
Part of the discontent shown by Roman Frigg and Julian Reiss [6] about the
possibility of a philosophy of computer simulations can be attributed, I believe,
to this general way of approaching simulations. According to these authors,
computer simulations can be analyzed in terms of more familiar and mature
philosophies, such as the philosophy of experimentation and the philosophy
of models. Now, despite the many efforts made by philosophers to show the
philosophical importance of computer simulations, few have engaged directly
in the ‘fine print’.1 In this respect, I propose to discuss the notion of scientific
explanation in the context of computer simulations in an attempt to ground
their epistemic power.
Specifically, this article looks squarely at the logic of explanation in computer simulations. I show how to explain the results of computer simulations
and in what sense they increase our understanding. Since I will be dealing only
with the results of computer simulations that represent real-world phenomena,
to my mind, the explanation of results also has the purpose of explaining the
phenomena that are being simulated, and thus increasing our understanding
of such phenomena. To accomplish this, I expand the explanatory base of the
unificationist approach by including computer simulations, proposing certain
appropriate modifications to the unificationist in the process. I then focus on
answering two core questions, namely: ‘what is the explanatory relation for
computer simulations?’ and ‘what kind of epistemic gain should we expect?’.
Along with these questions, several philosophical issues relating to scientific
explanation and computer simulations will emerge and be discussed.
The discussion is organized as follows. Section 2 discusses terminology.
The purpose of the section is to ground notions such as computer simulation,
instantiation, and simulated phenomenon, which are central for my approach.
Section 2.1 elaborates an example of a computer simulation of an orbiting satellite under tidal stress. Admittedly, the example is rather simple. One could
ask why we should focus on such a case when more interesting, large-scale
simulations are driving scientific practice forward. To this, I give two answers:
First, the example suffices for the purpose of exhibiting the explanatory input
of computer simulations, which is the goal of this article. As I argue, the example conveys the specific characteristics of simulations in a way that makes the
analysis of scientific explanation a manageable task. For instance, the representation of the target system is more straightforward than in, say, a multi-scale
simulation. It also helps to focus explanation on equation-based simulations
as opposed to cellular automata and agent-based simulations. Second, since
this article is meant to lay down an incipient logic of explanation for computer
simulations, it must keep the number of philosophical issues to a minimum.
The example used, however, is a good representative of current simulation
practice.
Section 3 addresses scientific explanation in the context of computer simulations. It begins by arguing that the unificationist account of explanation,
as opposed to non-nomothetic accounts, is the most suitable theoretical
framework for computer simulations. To my mind, the very nature of computer
simulations as constructs of our body of beliefs leads to the unificationist position. Section 3.1 presents the unificationist account as elaborated by Kitcher
1 Exceptions to this are Margaret Morrison [21], who discusses computer simulations as
measuring devices, Anouk Barberousse et al. [1], [2] and Paul Humphreys [12], who discuss
the notion of data, and Claus Beisbart [3] who shows in what respects computer simulations
are arguments.
([14], [15]), followed by an illustration of how to accommodate the example in
section 2.1. This section is important for two reasons. First, because it shows
how an explanation of the results of a computer simulation is performed and,
in addition, reveals what is unique and characteristic about computer simulations that make the whole enterprise of explaining results different from other
forms of explanation. Second, because it poses a real challenge to the unificationist by forcing modifications to the schematic sentences in order to fully
accommodate computer simulations. Section 3.1.2 then contains a discussion
of the kind of understanding obtained by explaining the results of a computer
simulation. Although my position is sober, I give reasons why we must go
beyond the standard viewpoint of the unificationist.
Finally, section 4 presents a general discussion of explanation for computer
simulations. It is argued that contemporary views on scientific explanation for
computer simulations fail in their attempts to explain results.
2 Background terminology
From a general viewpoint, computer simulations encompass a simulation model
implemented on a physical, stepwise machine. The class of a computer simulation depends on how this simulation model is designed and coded. An agent-based simulation, for instance, consists of multiple programmed autonomous
units referred to as ‘agents,’ the interactions among them and with their environment, and a set of heuristics for decision making. Another class of computer
simulations are cellular automata. Standard examples of these are abstract
mathematical systems where space and time are considered to be discrete,
consisting of a regular grid of cells, each of which can be in any number of
states at any given time. Typically all the cells are governed by the same rule,
which describes how the state of a cell at that given time is determined by the
states of itself and its neighbors at the preceding moment. An equation-based
simulation makes use of a mathematical model by transforming it. This means
that the model must go through a series of modifications, distortions, abstractions, idealizations, computational syntax, programming languages, and the
like for implementation on the physical computer. The result is a simulation
model, a computer-based and tractable model.
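The rule-based updating of cellular automata just described can be sketched in a few lines. The example below is not from the article: it implements an elementary one-dimensional automaton under Wolfram's Rule 30, purely to illustrate how each cell's next state is determined by itself and its neighbors at the preceding moment.

```python
# Illustrative one-dimensional cellular automaton (not from the article):
# each cell is 0 or 1, and its next state depends on itself and its two
# neighbours at the preceding time step.

def step(cells, rule):
    """Apply an elementary CA rule (Wolfram numbering) to a row of cells,
    with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value in 0..7
        out.append((rule >> neighbourhood) & 1)              # look up the rule bit
    return out

# A single live cell evolved under Rule 30 for three steps.
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row, 30)
```

The same `step` function covers any of the 256 elementary rules, which is precisely the sense in which "all the cells are governed by the same rule."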
Thus understood, this idealized image of computer simulations faces obvious difficulties. We could easily find examples where the representational
capacity of an equation-based simulation is not tailored to a model; or where
an agent-based simulation neither predicts nor assesses the overall behavior of
emergent systems. The universe of computer simulations is large and it is not
the purpose of this article to substantiate a notion of scientific explanation for
all classes. I therefore restrict the class of computer simulations of interest to
equation-based simulations that represent a given empirical target system. An
equation-based simulation is here understood as one that implements a model
underpinning real-world phenomena. I use the notion of ‘representation’ in a
rather loose way, avoiding any commitment to theories of representation.
Another core term for this article is the instantiation of parameter values.
A computer simulation, as I am framing the notion, has the capacity to single
out a host of what I now term, and will clarify later, ‘simulated phenomena’
simply by changing parameter values, such as initial and boundary conditions,
initial subroutines, simulation time, etc. For instance, the simulation model in
section 2.1 implements a model of classical Newtonian mechanics for a two-body interaction system. In order to single out the simulation that leads to the
spikes of Figure 2, it is enough to instantiate the parameter values as shown on
page 7. An instantiation is then understood as filling out the free parameters
of the simulation model with specific values (e.g., the mass of the first body
to 2 × 10^27 kg, the mass of the second body to 3 × 10^22 kg, the initial time step to
10 seconds, and so forth). Part of the activity of simulating, then, consists in
carefully choosing these parameter values in such a way that the phenomenon
of interest can be accurately simulated.
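The notion of instantiation can be rendered as a small sketch. The parameter names below mirror Woolfson and Pert's list in section 2.1, but the function and its checks are hypothetical, not part of any actual simulation package.

```python
# A minimal sketch of 'instantiation': filling the free parameters of a
# simulation model with specific values. Names follow the parameter list
# in section 2.1; instantiate() itself is a hypothetical helper.

FREE_PARAMETERS = {"mass_planet_kg", "mass_satellite_kg",
                   "initial_time_step_s", "total_time_s",
                   "initial_eccentricity"}

def instantiate(**values):
    """Check that every free parameter receives a value, then return the
    fully instantiated parameter set that singles out one simulation."""
    missing = FREE_PARAMETERS - values.keys()
    if missing:
        raise ValueError(f"uninstantiated parameters: {missing}")
    return dict(values)

# This instantiation singles out the simulation that produces the spikes
# of Figure 2.
params = instantiate(mass_planet_kg=2e27, mass_satellite_kg=3e22,
                     initial_time_step_s=10, total_time_s=125000,
                     initial_eccentricity=0.6)
```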
Finally, the notion of simulated phenomenon is taken as an umbrella term
that covers all the results of an equation-based simulation that represents real-world empirical phenomena, including abstractions, idealizations, and approximations. I am aware of the philosophical difficulties that this claim engenders.
But, again, I will take it as unproblematic that the practice of simulation modeling is a well established discipline, and that we can genuinely represent in a
reliable, accurate, and robust way the intended real-world phenomenon. For
instance, Figure 2 describes the orbital eccentricity of the satellite and planet
as singled out by the parameter values on page 7.
One final issue regarding the notion of simulated phenomenon should be
raised. In principle, this notion is restricted to all the results of a simulation
that represent its empirical counterpart. Making use of the example on page 1,
the simulation of a satellite with a mass of 3 × 10^22 kg, a planet with a
mass of 2 × 10^27 kg, and so on, can all be found in the empirical world. One could
ask, however, why it is not extended to all simulated phenomena within the
target system. Undoubtedly, it would be desirable to be able to explain all the
results of a given simulation. Problems arise, however, the moment we realize
that the set of parameter values can be instantiated in such diverse ways that
we could single out empirically-unfeasible and nomologically-impossible results.
To illustrate this, consider again the simulation of a satellite under tidal stress.
Suppose that all variables and constants in the simulation range are in the
domain of the naturals. Now suppose that we set the gravitational constant to
G = 2 m^3 kg^-1 s^-2. To the best of our current scientific knowledge, no physical
system exists with such a gravitational constant. The result of the simulation,
whatever that might be, is nomologically impossible as it violates a physical
constant. The obvious solution is to impose as good programming practice
the setting of constants and principles of nature to their right values, along
with the right description of laws and scientific units of measurement. Such
a solution is, to my mind, only a palliative, as it depends on previous knowledge
of such constants, principles, laws, and units of measurement. Unless more
is provided, there are no indications of how to proceed in cases where such
knowledge is lacking. A similar point could be raised in the case of a simulation
that uses the mass of a sphere of gold as a variable; one could then set that
variable to 1,000,000 kilograms. Although this is not nomologically impossible, it is
empirically unfeasible or at least far-fetched that such a sphere exists. The
overall worry, then, is that computer simulations allow the creation of an
indiscriminate number of scenarios, some of which we might know how to
explain (e.g., because they do not violate the laws of nature), some of which
we do not know how to explain (e.g., because we cannot anticipate them), and
some which make no empirical sense. In this article, I am interested only in
simulations that represent empirical phenomena. A broader approach should
include simulations without an adequate representation of the target system.
I believe that such an account is attainable by the same means as I offer here,
provided that we also accommodate a suitable notion of understanding.
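The 'good programming practice' palliative discussed above might be rendered as a validation step of the following kind. The function, its bounds, and the parameter names are illustrative assumptions, and the check presupposes exactly the prior knowledge of constants and units that the text says it depends on.

```python
# Illustrative rendering of the 'good practice' palliative: reject
# instantiations that set constants of nature to non-physical values, or
# variables to empirically far-fetched ones. All bounds are illustrative.

G_SI = 6.674e-11  # measured gravitational constant, m^3 kg^-1 s^-2

def validate(params):
    """Return a list of complaints about nomologically or empirically
    suspect parameter values; an empty list means the check passed."""
    complaints = []
    # Nomological check: G must take its measured value.
    if abs(params.get("G", G_SI) - G_SI) > 1e-13:
        complaints.append("G deviates from its measured value")
    # Empirical-feasibility check: a million-kilogram gold sphere is
    # far-fetched, though not impossible.
    if params.get("mass_gold_sphere_kg", 0) > 1e5:
        complaints.append("gold sphere implausibly massive")
    return complaints
```

The sketch makes the text's worry concrete: the validator can only flag what we already know to be wrong, and offers no guidance where such knowledge is lacking.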
Summing up, computer simulations consist of implementing a simulation
model (i.e., a computationally transformed scientific model) that represents an
empirical target system. In order to single out the simulation of a real world
phenomenon, one needs to instantiate the set of parameters with appropriate
values. The outcome is what I called the simulated phenomenon, understood
as the set of results of the computer simulation that accurately represents a
real-world phenomenon.
As a final remark, let me state something that need not be argued.
Any computing process contains a number of small errors and distortions
that are typically transferred into the results. Truncation errors, for instance,
occur by approximating an infinite number, sum, etc. with a finite one. Since
computer simulations deal with the infinite as a very large finite number,
truncation errors are unavoidable. These artifacts do occur in computational
practice, and are typically compensated through a number of mathematical
and computational contrivances. I assume that, if such artifacts occur, then
they are negligible at the software as well as hardware level. Moreover, I assume
that the real-world phenomenon is not misrepresented by the results in any
relevant sense. I am aware of the extent of these assumptions, as well as their
philosophical importance; unfortunately, here is not the place to address them.
In section 3 I briefly address this issue and show how they could be included
as part of the explanatory schemata. Next I elaborate a simple example that
incorporates these elements in a comprehensible way.
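Truncation error can be seen in miniature by cutting off an infinite series, as in this illustrative computation of e from its Taylor series; the example is mine, not drawn from the simulation discussed below.

```python
# Truncation error in miniature: approximating the infinite series
# e = sum over k of 1/k! with a finite number of terms, as any computing
# process must.
import math

def exp1_truncated(n_terms):
    """Partial sum of the series for e over the first n_terms terms."""
    return sum(1.0 / math.factorial(k) for k in range(n_terms))

error_5 = abs(math.e - exp1_truncated(5))    # noticeable truncation error
error_15 = abs(math.e - exp1_truncated(15))  # negligible at double precision
```

Taking more terms shrinks the error, which is the sense in which such artifacts can be "compensated through a number of mathematical and computational contrivances" while never disappearing entirely.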
2.1 A simple example of a computer simulation
The example used in this section is of a satellite orbiting around a planet.
Although a simple simulation, it has considerable explanatory value. To see
this, note that the following is assumed: (a) that we have a model that represents the target system of interest; (b) that we know that the simulated
phenomenon is empirically possible; (c) that there are no significant errors
introduced, either during the transformation of the model into the simulation
model, nor during the period of computing the simulation model; and (d) that
we have cognitive access to the simulation model. I take points (a) to (c) to
be unproblematic, as they can be obtained by good modeling and programming practice. Point (d), though, might be objected to, as simulation models
are not a transparent and monolithic unity; a number of “black boxes” are
also used for specific purposes. Typically, these are libraries and modules that
complement the functionalities needed by the simulation model. Now, how
“black” a library could be depends on many factors, including whether it is
open distribution or proprietary. But we should always expect documentation
that provides a specification of the subroutines, the members in a structure
or class, and so forth. For instance, deviates.h is a library written in C for generating random deviates drawn from different probability distributions. That
library contains Int dev(), a subroutine that returns a binomial deviate [24,
376]. Regardless of how dev() is programmed, we know its principal functionality by looking at the specification (i.e., a subroutine that takes no argument
and returns an Integer). Admittedly, more complex equation-based simulations
(e.g., multi-scale, stochastic, distributed, etc.) might require different considerations. A preliminary mode of inclusion is advanced in section 3.1.1. In any
case, the simulation of the satellite orbiting around a planet is a good case for
a successful explanation. Let me describe the mathematical model used in this
simulation.
Following Woolfson and Pert [27, 17], I consider an orbiting satellite under
tidal stress which stretches along the direction of the radius vector. This model
presupposes, in addition, that the orbit is non-circular, and therefore, that the
stress is variable, making the satellite expand and contract along the radius
vector in a periodic fashion. Since the satellite is not perfectly elastic, the
mechanical energy is converted into heat, which is radiated away. The overall
effect is, however, that whereas there is mechanical energy lost, the system
as a whole conserves angular momentum. As a result, the spikes in Figure
2 are observed, and an explanation is in order. The following conditions and
equations are included in the model:
For a planet of mass M and a satellite of mass m (≪ M), in an orbit of semi-major
axis a and eccentricity e, the total energy is

E = −GMm/2a   (1)

and the angular momentum is

H = {GMa(1 − e^2)}^{1/2} m   (2)
The model we shall use to simulate this situation is shown in Figure 1. The
planet is represented by a point mass, P , and the satellite by a distribution of three
masses, each m/3, at positions S1 , S2 and S3 , forming an equilateral triangle when
free of stress. The masses are connected, as shown, by springs, each of unstressed
length l and the same spring constant, k. Thus a spring constantly stretched to a
length l0 will exert an inward force
F = k(l0 − l)   (3)
Now, we also introduce a dissipative element in our system by making the force
dependent on the rate of expansion or contraction of the spring, giving the following
force law:
Fig. 1 The satellite is represented by three masses, each m/3, connected by springs each
of the same unstrained length, l. [27, 19]
F = k(l0 − l) − c dl0/dt   (4)

where the force acts inwards at the two ends. It is the second term in Equation 4
which gives the simulation of the hysteresis losses in the satellite [27, 18-19].
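Equations 3 and 4 can be written out directly as a function. The symbol names follow the text, with l0 the current stretched length and l the unstressed length; the numerical values in the usage lines are merely illustrative.

```python
# The force law of Equations 3 and 4, written out directly. k is the
# spring constant, l the unstressed length, l0 the current stretched
# length, and c the dissipation coefficient.

def spring_force(k, l, l0, dl0_dt=0.0, c=0.0):
    """Inward force exerted at the two ends of the spring; the c-term
    models the hysteresis losses described in the text (Equation 4)."""
    return k * (l0 - l) - c * dl0_dt

# Without dissipation (Equation 3) the force depends only on the stretch;
# the dissipative term (Equation 4) reduces it while the spring expands.
f_static = spring_force(k=2.0, l=1.0, l0=1.5)
f_dissip = spring_force(k=2.0, l=1.0, l0=1.5, dl0_dt=0.1, c=4.0)
```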
Thus understood, the simulation model makes use of classical Newtonian
mechanics for describing all two-body systems under tidal stress, whether satellites or otherwise. However, since the target phenomena are a concrete satellite and planet, a concrete tidal stress and energy, etc., all specific characteristics and
constraints must be singled out by setting the values of the set of parameters. For instance, Woolfson and Pert use the following parameter values
[27, 20]:
number of bodies = 4
mass of planet = 2 × 10^27 kg
mass of satellite = 3 × 10^22 kg
initial time step = 10 s
total simulation time = 125000 s
body chosen as origin = 1
tolerance = 100 m
initial distance of satellite = 1 × 10^8 m
unstretched length of spring = 1 × 10^6 m
initial eccentricity = 0.6
As mentioned in the previous section, a mathematical model is transformed
into a suitable simulation model for implementation on the physical computer.
Thus, equation 1 is described by the statements TOTM = CM(1) + CM(2) +
CM(3) + CM(4); EN = -G*TOTM/R + 0.5*V**2, whereas the force equations
represented in 3 and 4 are described by the subroutine ACC. There is no further
interest in detailing the loops, conditionals, subroutines, and so on that
conceptually (and epistemically) separate the simulation model of the satellite
from its mathematical representation. For a full description of the simulation
model, see [28].
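For readers who prefer executable notation, the Fortran-style statements above might be rendered in Python roughly as follows. The variable meanings (CM as the array of body masses, R and V as the current separation and speed) are my reading of the quoted fragment, not Woolfson and Pert's code.

```python
# A Python rendering of the Fortran-style statements quoted above, which
# implement equation 1 in the simulation model. Purely illustrative.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def total_energy(CM, R, V):
    """TOTM = CM(1)+CM(2)+CM(3)+CM(4); EN = -G*TOTM/R + 0.5*V**2.
    CM holds the four body masses (planet plus the satellite's three
    point masses); R and V are the current separation and speed."""
    TOTM = sum(CM)  # total mass of the four bodies
    return -G * TOTM / R + 0.5 * V ** 2

# With the parameter values listed above, the planet-satellite system is
# bound, so the energy comes out negative.
EN = total_energy(CM=[2e27, 1e22, 1e22, 1e22], R=1e8, V=1e3)
```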
As for the parameters, they configure the simulation to represent a specific
satellite, orbiting around a specific planet, producing a specific tidal stress,
Fig. 2 The orbital eccentricity as a function of time. [27, 20]
and so forth. Thus understood, the simulation has considerable explanatory
value. To see this, let us make the explanatory reasoning explicit. As an initial
condition, the position of the satellite is at its furthest distance from the
planet, hence the spikes only occur when they are at their closest. When this
happens, the satellite is stretched by the tidal force exerted by the planet.
Correspondingly, inertia makes the satellite tidal bulge lag behind the radius
vector. The lag and lead in the tidal bulge of the satellite give spin angular
momentum on approach, and subtract it on recession. When receding from
the near point, the tidal bulge is ahead of the radius vector and the effect
is therefore reversed. The spikes therefore occur because there is an exchange
between spin and orbital angular momentum around closest approach (see [27,
21]).
At this point, someone could object that a similar explanation could be
obtained by using the mathematical model (or Newtonian mechanics). Such an
objection rests on the assumption that computer simulations are mathematical
models directly implemented on a physical machine. Stephan Hartmann [9] is
a prominent advocate of this position, and the early Humphreys [10] also subscribed to it. I believe, as do many others (e.g., [26], [11], [22]), that computer
simulations are scientific models, in a general sense, but of a different kind.
To highlight this distinction, I earlier used the term simulation model for the
model implemented on the physical machine. Now, in what precise respects a
mathematical model differs from a simulation model is beyond the scope of
this paper. However, an initial defense of this distinction can be mounted if we
can show that the explanation of the spikes by computer simulation is more
successful than by a mathematical model.
Claiming that a mathematical model can explain the results of a computer
simulation might be true for simple cases, where a small set of equations is
all that is implemented as the simulation model, and little or no algorithmic
machinery is added. However, for most cases the mathematical model by itself
has limited explanatory input. In our example, the simulation model includes
information relevant for the shape of the results, which is crucial for their
explanation. Such information includes the discretization method responsible
for computability and errors (i.e., a Runge-Kutta algorithm with automatic
step control), a set of subroutines that code several aspects of the behavior of
the forces in N-bodies (e.g., the ACC subroutine), and model reconstructions
that balance representational accuracy and computability (e.g., subroutines for
spring and dissipative forces, such as the STORE subroutine), among others.
Thus understood, the previous explanation of the spikes does not depend on
mathematical machinery, but rather on algorithmic machinery as shown in
detail in section 3.1.1.
Take for instance the round-off errors produced by the Runge-Kutta algorithm, partly responsible for the orbital eccentricity trending steadily downwards, as shown in Figure 2. We would be unable to explain this effect with the
mathematical model alone, because it is the NBODY subroutine, responsible
for implementing the Runge-Kutta method, that accounts for orbital eccentricity trending steadily downwards. This is the situation to which Woolfson
and Pert expose themselves, as their explanation remains heavily dependent
upon the mathematical machinery of the model [27, 21]. These facts provide
a justification for taking the simulation model to be a complete representative of both the target system and the computation that represents the target
system, and therefore as the most relevant unit with explanatory force. With
these ideas in mind, let us turn to explaining why the spikes in the simulation
occur by making use of the simulation model.
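To fix intuitions about the algorithmic machinery at stake, here is a generic fixed-step fourth-order Runge-Kutta step. Woolfson and Pert's NBODY adds automatic step control and is not reproduced here; the sketch only illustrates the kind of discretization responsible for the numerical artifacts discussed above.

```python
# A generic fourth-order Runge-Kutta step, illustrating the kind of
# discretization the NBODY routine performs. This is not Woolfson and
# Pert's code: their version adds automatic step control, omitted here.

def rk4_step(f, t, y, h):
    """Advance dy/dt = f(t, y) from t to t + h; the local truncation
    error of a single step is O(h^5)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Exponential decay dy/dt = -y: the discrete solution tracks the exact
# one closely, but every step leaves a small truncation residue, which
# is the kind of effect the mathematical model alone cannot explain.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```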
3 Explaining simulated phenomena
Making use of traditional terminology, I identify the explanans as the simulation model and the explananda as the simulated phenomena. It is standard in
the philosophical literature to take the explanans to consist of well-confirmed
scientific hypotheses, laws, theories, and, in more recent literature, of scientific
models. In this respect, simulation models are not entirely alien to this frame
of reference, as there is no conceptual problem in conceiving of them as some
kind of special scientific model (i.e., one with its own methodology, epistemology, semantics, etc.). The explanans, then, encompasses the simulation model
simply because it best accounts for the simulated phenomena. As argued before, a good discretization must balance the loss of information intrinsic to the
process of turning continuous functions into discrete subroutines with generating a reasonable search space. The consequence of such a discretization step is
that the final shape of the simulated phenomena will inevitably be affected. A
similar argument is raised for truncation and round-off errors, representational
‘tricks,’ and calculation of subroutines, among other unique features that only the simulation model can account for. As for the explanandum, it must be
identified with the simulated phenomena, as it is their behavior we expect to
explain and understand. Take for instance the explanation of why the spikes in
Figure 2 occur. What we expect is to explain the simulated spikes as a means
to explain the real-world spikes. In other words, we want our explanation of
the former to be epistemically equivalent to an explanation of the latter, and
in this vein to claim understanding of both systems.
I now give two reasons why unificationism provides the most suitable conceptual framework for computer simulations. First, unificationism is a nomothetic theory of explanation, and as such the relation between explanans and
explanandum depends entirely on our body of beliefs. Computer simulations
are well suited to being accommodated within this framework, as they are
coded using our current scientific knowledge. This contrasts with other theories of explanation, particularly ontic theories where the explanatory relation
depends on an objective external relation (i.e., causal relations, some sort of
representation of causal relations, or structures). As will be argued in section 4, the causal-mechanistic accounts of explanation, for instance, face some
problems with their notion of ‘causal relations’ in the context of computer
simulations. Explaining simulated phenomena, to my mind, is best realized
when quantified over our knowledge rather than over external relations.
Second, unification consists of using the same patterns of derivation for
reducing a multiplicity of phenomena that we have to accept as independent,
that is, phenomena for which we have no explanation, but for which one is
nonetheless anticipated. This is a core concept of the unificationist approach
and it must be echoed by computer simulations as well. Now, simulations can
produce a multiplicity of results of different kinds by setting the parameters to
different values.2 Similarly, variables and subroutines can take different forms:
a mass can be a charge, and instead of the force law the simulation could use
the inverse-square law. Moreover, the same computer simulation could highlight different aspects of the same system. For instance, we might know why
the satellite orbits around the planet, but have no idea why the spikes occur.
The point here is that the occurrence of one simulated phenomenon has no
bearing on the likelihood that the next simulated phenomenon will be known
and therefore understood. Such epistemic independence is analogous to taking an empirical phenomenon as independent of other empirical phenomena,
despite the fact that they can be explained, predicted, and observed by the
same theory.3 In other terms, the results of computer simulations are not only
analytically novel, but also empirically unexpected.
Let it be noted that producing a multiplicity of simulated phenomena is
not an ad hoc characterization of computer simulations, but rather an inherent
feature. If these points are correct, then it is desirable for a theory of explanation to account for, and capitalize on, these features. Unificationism is, in this
respect, the most suitable theory currently available, as explanation is possible
because we can account for a multiplicity of phenomena using a few schematic
patterns.

2 This is not to say that computer simulations are unificatory systems. For such a
claim, I believe, we also need to specify in what respects they unify. Rendering a host of
simulated phenomena (some of which are clearly unknown) is a core feature of computer
simulations that squares well with the unificationist. A future task is to show in what respects
there is unification in computer simulations, including models that are not straightforwardly
unificatory.
3 This sense of independence should not be confused with the idea that simulated phenomena are conceptually related to the same simulation model.
3.1 The unificationist framework
Explanation, for the unificationist, begins with the set of accepted scientific
beliefs, K. In the sciences, K can be interpreted as classical mechanics in
physics, the evolutionary theory in biology, or the atomic theory in chemistry,
to mention just a few examples. Finding the set K of accepted beliefs for
computer simulations is in no way different from other areas of science, as
the simulation model also relies on our current scientific knowledge. Examples
from molecular biology can be drawn, as we simulate the effects of alanine
scanning and ligand modifications based on molecular dynamics. Simulations
in nuclear physics would include the Boltzmann-Uehling-Uhlenbeck model,
some theorems from statistical mechanics such as Liouville’s theorem, and
general Hamiltonian equations of classical mechanics. And, of course, Woolfson
and Pert’s example relies on a set of differential equations as described by
classical mechanics.
The real challenge for the unificationist is to specify what counts as the explanatory store over K, E(K). That is, what is the set of acceptable argument
patterns that have explanatory force? According to Kitcher, E(K) encompasses
three main elements, namely, the schematic sentences (i.e., expressions obtained by replacing some of the nonlogical expressions in a sentence by dummy
letters), a set of filling instructions (i.e., the set of directions for replacing those
dummy letters), and a classification (i.e., the set of sentences that provide directions for which terms are to be regarded as premises, what is inferred from
what, and so forth). The general argument pattern (or argument pattern for
short) is “a triple consisting of a schematic argument, a set of sets of filling
instructions, one for each term of the schematic argument, and a classification for the schematic argument” [15, 432]. Additionally, there is a comments
section that Kitcher uses with the sole purpose of adding non-explanatory
information, such as minor details on the limits of an argument pattern, or
possible corrections for it. To me, the comments section plays a more relevant
role for the explanation of results of computer simulations. Briefly, I take it to
be a repository for all the remaining information that (might) have explanatory force, but that cannot be constructed as a schematic sentence. I have
more to say about this in section 3.1.1.
To explain, then, consists of deriving descriptions of a multiplicity of phenomena using as few and as stringent argument patterns as possible. According
to the unificationist, argument patterns are descriptions that single out natural kinds, objective causal dependencies, objective natural necessities, and
similar concepts found in scientific textbooks. The situation is quite similar
in computer simulations. The matter begins with accepting that our body of
knowledge K is the simulation model. As we know from the methodology of
computer simulations, a simulation model describes the properties, entities,
relations, and general behavior of the simulation with respect to an empirical
target system. This includes the model’s subroutines, equations, and variables,
as well as design decisions, values, etc.
Thus understood, some terms used in the simulation model are also found
in the standard scientific vocabulary (e.g., subroutine, equation, variable).
However, some other terms have no counterpart in the traditional scientific
parlance, and therefore do not refer in the unificationist’s preferred way. This poses the first challenge: modifying the unificatory basis to include information that, although relevant for explaining the results, might not be derivable. Examples of these are preconditions and postconditions,
truncation and round-off errors, parallelization, etc. Let us call them external
quantifications, as a way to refer to external assessment for the overall reliability of results of a simulation. For instance, schematic sentence 6 specifies
the subroutine NBODY as including the Runge-Kutta subroutine with a local truncation error on the order of O(h^(p+1)), and a total accumulated error on the order of nCh^(p+1) = C(x̄ − x_0)h^p. For the most part, these external quantifications cannot (and are not meant to) be derived, and therefore they cannot be included as part of the schematic arguments. For instance, the total accumulated error is a function of the cumulative error caused by many iterations, and therefore it works as a constraint on the results. This means that they
lack a specified interpretation for the schematic sentences, filling instructions,
and classification. However, accounting for such external quantifications is central for explaining the spikes appearing in the orbit of the satellite. Because of this, I propose to extend the notion of argument pattern in such a
way that it includes external quantifications as part of the explanans as well.
In the following, I claim that this can be done by expanding the basis of the
comments section.
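Before turning to the example, the error scaling invoked above can be spelled out: a local per-step error of order O(h^(p+1)), accumulated over n = (x̄ − x_0)/h steps, yields a global error of order nCh^(p+1) = C(x̄ − x_0)h^p. The following sketch is my own toy illustration of this relationship, not part of Woolfson and Pert’s simulation: it checks the h^4 global scaling of the classical fourth-order Runge-Kutta method (p = 4) on a test equation.

```python
# Toy illustration (not from the simulation under discussion): for a
# p-th order method the global error scales as h^p. For classical RK4,
# p = 4, so halving h should shrink the error by roughly 2**4 = 16.
import math

def rk4_integrate(f, y0, x0, x1, h):
    """Integrate dy/dx = f(x, y) from x0 to x1 with fixed step h."""
    x, y = x0, y0
    n = int(round((x1 - x0) / h))
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

f = lambda x, y: y           # dy/dx = y, with exact solution e**x
exact = math.e               # y(1) for y(0) = 1
err_h = abs(rk4_integrate(f, 1.0, 0.0, 1.0, 0.1) - exact)
err_h2 = abs(rk4_integrate(f, 1.0, 0.0, 1.0, 0.05) - exact)
print(err_h / err_h2)        # close to 16, confirming O(h**4) global error
```

The point of the sketch is precisely the one made above: the global error is not derivable step by step within an argument pattern, but its order of magnitude is reliable information about the results.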
3.1.1 Example of an explanation
Allow me now to reconstruct the explanatory store for the simulation of an
orbiting satellite under tidal stress. Following the unificationist, a possible
explanatory schema for explaining the spikes shown in Figure 2 is:4
4 A reconstruction of computer simulations as arguments is given by [3]. Unlike him, I am
not claiming that all computer simulations are arguments, but rather that some aspects of
the simulation model can be reconstructed for explanatory purposes.
Schematic Sentences:5
1. There are two objects, I and J, one of mass CM(I) and another with a mass of CM(J) (≪ CM(I))
2. There is an orbit of semi-major axis A and eccentricity E
3. The object of mass J is distributed into three masses, each J/3, at positions POS(1), POS(2), and POS(3), forming an equilateral triangle free of stress.
4. The relative velocities of the bodies are VEL(1), VEL(2), and VEL(3)
5. . . .
6. Subroutine NBODY includes the Runge-Kutta subroutine with automatic step control.
7. Subroutine STORE stores intermediate coordinates and velocity
components as the computation progresses.
(a) For each position {POS(1, 2, 3)} of the satellite, the expected orbital distance is given by the equation:
R = SQRT(POS(1)**2 + POS(2)**2 + POS(3)**2)
(b) For the mass of the satellite {CM(1, 2, 3)}, and the mass of the planet CM(4), the expected intrinsic energy is given by the equations:
V2 obtained from the square of the relative velocities of the bodies,
TOTM = CM(1) + CM(2) + CM(3) + CM(4), and
EN = −G * TOTM/R + 0.5 * V2
(c) For each position {POS(1, 2, 3)} and velocities {VEL(1, 2, 3)} of the satellite, the expected intrinsic angular momentum is given by:
D1 = POS(2) * VEL(3) − POS(3) * VEL(2),
D2 = POS(3) * VEL(1) − POS(1) * VEL(3),
D3 = POS(1) * VEL(2) − POS(2) * VEL(1), and
H2 = D1**2 + D2**2 + D3**2
(d) . . .
8. Subroutine ACC calculates the acceleration of each body due to its
interactions with all other bodies
(a) For each I = 2..NB−1, NB being the number of bodies, and for each J = I+1..NB, and for each K = 1..3, the equation DIF(K) = XT(I, K) − XT(J, K) calculates the spring and relative forces, while the expected length of the spring is given by the equation
ELP = SQRT(DIF(1)**2 + DIF(2)**2 + DIF(3)**2)
(b) For each I = 1..NB−1, L = J+1..NB, and K = 1..3, the interaction for each pair of bodies is given by (only the gravitational force is considered here): R(K) = XT(J, K) − XT(L, K) and RRR = (R(1)**2 + R(2)**2 + R(3)**2)**1.5
5 A full description of the variables, data types, and subroutines can be found in [28].
(c) . . .
E. The spikes formed are due to an exchange between spin and orbital angular momentum around closest approach.
Filling Instructions:
The gravitational constant is set to G = 6.667E−11. The mass CM(I) and the mass CM(J) will be replaced by a planet’s and a satellite’s mass, respectively. A dissipative element is introduced into the structure by making the force dependent on the rate of expansion or contraction of the spring, giving a force law where the force acts inwards at the two ends. Values for POS(1), POS(2), POS(3), VEL(1), and VEL(2) must also be given. Recall the parameter values set earlier.
Classification:
The classification of the argument indicates that 1-5... are premises, and that 6, 7, and 8 are subroutines containing equations which can be obtained by substituting identical terms. The explanandum E follows from 6, 7, and 8 by derivation.
Comments:
1. Normal SI Units are used.
2. By changing the subroutines, different problems may be solved. The CM’s can be masses or charges, or be made equal to unity, while the force law can be inverse-square or anything else (e.g., Lennard-Jones).
3. The four-step Runge-Kutta algorithm is used. The results of two STEPS with TIMESTEP H are checked against taking one STEP with TIMESTEP 2*H. If the difference is within the TOLERANCE, then the two STEPS, each of H, are accepted and the STEPLENGTH is doubled for the next STEP. However, if the TOLERANCE is not satisfied, then the STEP is not accepted and one tries again with a halved STEPLENGTH. It is advisable, but not essential, to start with a reasonable STEPLENGTH; the program quickly finds a suitable value.
4. The user is required to specify a TOLERANCE, the maximum absolute error that can be tolerated in any positional coordinate. If
this is set too low, the program can become very slow.
5. Three kinds of forces are operating (subroutine ACC): the normal
gravitational inverse-square law between all pairs of bodies, a second force due to the elasticity of the material as modeled by the
set of three springs, and a third force operating between the three
component bodies of the satellite and which depends on the rate at
which the springs expand or contract. The third force provides the
dissipation in the system.
6. Subroutine NBODY includes the Runge-Kutta subroutine with automatic step control, with a local truncation error on the order of O(h^(p+1)), and a total accumulated error on the order of nCh^(p+1) = C(x̄ − x_0)h^p.
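The step-doubling control described in comment 3 can be sketched as follows. This is my own minimal reconstruction of the logic described there, not Woolfson and Pert’s actual code; `step` stands in for one Runge-Kutta STEP, and the usage example substitutes a simple Euler step for brevity.

```python
# Sketch (my reconstruction) of the step-doubling control of comment 3:
# two STEPS of TIMESTEP H are compared against one STEP of TIMESTEP 2*H;
# within TOLERANCE the result is accepted and the STEPLENGTH doubled,
# otherwise the STEPLENGTH is halved and the STEP retried.
def adaptive_step(step, y, h, tolerance):
    """One accepted advance; returns (new_y, accepted_h, next_h)."""
    while True:
        two_small = step(step(y, h), h)   # two STEPS of TIMESTEP H
        one_big = step(y, 2 * h)          # one STEP of TIMESTEP 2*H
        if abs(two_small - one_big) <= tolerance:
            return two_small, h, 2 * h    # accept; double STEPLENGTH
        h = h / 2                         # reject; retry with halved H

# Toy usage: an Euler step for dy/dt = -y, starting from a deliberately
# coarse STEPLENGTH; the controller settles on a suitable value itself.
euler = lambda y, h: y + h * (-y)
y, used_h, h = adaptive_step(euler, 1.0, 0.5, tolerance=1e-3)
```

As the comments section notes, the initial STEPLENGTH only affects how many rejections occur before a suitable value is found, not the accepted result.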
Thus understood, the spikes of Figure 2 are explained as an exchange between spin and orbital angular momentum around closest approach. The explanation is obtained by derivation from the schematic sentences 1-8, as set forth in the filling instructions and the classification.
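For concreteness, the equations of schematic sentences 7(a)-(c) can be transcribed directly into executable form. The function below is my own illustrative rendering; the variable names follow the schematic sentences, and the value of G is the one given in the filling instructions.

```python
# Illustrative transcription (mine) of the diagnostic quantities of
# schematic sentences 7(a)-(c): orbital distance R, intrinsic energy EN,
# and squared angular momentum H2, for positions POS, velocities VEL,
# and the four masses CM.
import math

G = 6.667e-11  # gravitational constant, as set in the filling instructions

def diagnostics(pos, vel, cm):
    R = math.sqrt(pos[0]**2 + pos[1]**2 + pos[2]**2)   # orbital distance
    V2 = vel[0]**2 + vel[1]**2 + vel[2]**2             # square of velocities
    TOTM = sum(cm)                                     # CM(1) + ... + CM(4)
    EN = -G * TOTM / R + 0.5 * V2                      # intrinsic energy
    D1 = pos[1] * vel[2] - pos[2] * vel[1]
    D2 = pos[2] * vel[0] - pos[0] * vel[2]
    D3 = pos[0] * vel[1] - pos[1] * vel[0]
    H2 = D1**2 + D2**2 + D3**2                         # squared ang. momentum
    return R, EN, H2
```

Such a transcription makes explicit which inputs each diagnostic quantity depends on, which is all the derivation from sentences 7(a)-(c) requires.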
The comments section, on the other hand, is a repository of reliable information for the explanation. As such, it plays two central roles. First, it
documents boundaries and alternative explanations for the argument pattern
(along with the limits and possible corrections). For instance, comment 2 indicates that the same explanatory schema could be used for explaining masses as well as charges. This section also establishes standards and limitations. For
instance, it is recommended to start with a reasonable STEPLENGTH, otherwise it could take longer to find suitable values for the Runge-Kutta algorithm.
Similarly, an acceptable value for TOLERANCE is around 100 m. Second, and
as a more active role, it includes information about the results with relevant
explanatory input, but that could not be included as a schematic sentence.
Reasons for this vary from formulations that play no role in the computing
process (e.g., preconditions and postconditions), but that are essential in the
assessment of correctness of subroutines, to quantifications that are intrinsically difficult to reconstruct. For instance, the Runge-Kutta subroutine has a
local truncation error on the order of O(h^(p+1)), and a total accumulated error on the order of nCh^(p+1) = C(x̄ − x_0)h^p. Both are iterative functions,
therefore the derivation of the local and total error depends on each iteration.
Thus described, the local and total errors cannot be derived in the unificationist’s preferred way, although both carry significant explanatory input. If
assessed correctly, this situation forces us to expand the argument pattern and
vouch for the comment section as an active member of the explanation.
A different situation occurs where the error can be measured with a certain
precision. To illustrate this point, take the results of a simulation where the
calculation of the orbiting of a satellite throws a small round-off error such
that, for each loop in the computation, a difference of 1/1000 kilometers is
introduced for each revolution with respect to the real value. Although very
small, this round-off error plays a crucial role in the overall eccentricity of the
satellite and, therefore, in the formation of the spikes. In particular, given a
sufficiently large number of loops, such an error is responsible for the satellite
reaching an eccentricity equal to 0 (i.e., after a determined number of runs, the
satellite reaches a circular orbit). In such a case, the tidal stress seen in Figure
2 takes an entirely different form, one that reaches a stable point without
showing any spikes at all.
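The mechanism at issue here, a vanishingly small error compounding over many iterations, is easy to exhibit independently of the satellite example. The following toy sketch is mine and unrelated to the actual simulation code; it shows pure floating-point round-off accumulating over a million additions.

```python
# Toy demonstration (mine) that tiny per-iteration round-off compounds:
# adding 0.1 one million times in double precision drifts away from the
# exact sum, since each addition carries a small representation error.
acc = 0.0
for _ in range(1_000_000):
    acc += 0.1
drift = acc - 100_000.0   # would be exactly 0 with exact arithmetic
# drift is small but nonzero; given enough iterations such a bias can
# qualitatively change a result, just as the per-revolution error above
# eventually circularizes the satellite's orbit.
```

The drift is minute per iteration, but it is systematic, which is precisely why a sufficiently long run changes the qualitative shape of the results.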
Thus understood, there are two possible outcomes. First, such errors are
measured and reconstructed in terms of schematic sentences, allowing the
derivation of the explanandum in the usual way. Such a case comes about
as something like the following:
Schematic sentence:
8’) There is a round-off error of approx. 1/1000 in the total simulation time.
Reconstructing errors in this way is obviously the best option, as it allows the derivation of the explanandum. Unfortunately, measuring errors is
not always an easy enterprise. For cases where errors are unmeasurable, I
suggest they be included in the comments section as further non-derivables,
although comprising reliable information with explanatory force. This move is
perfectly acceptable to the unificationist, since non-derivable yet explanatory
information is included in the comments section (e.g., [15], section 4.6). This
is obviously not an ideal situation, as it restricts the explanatory force of the
simulation. However, knowing about the presence of errors makes the epistemic difference between being aware of the existence of a disturbing factor,
and thus being able to interpret and to explain the simulated phenomenon
in light of those errors, and being unable to account for unexpected results.
Such a solution is also valid for similar cases, such as subroutines, tolerance
thresholds, and even design decisions.
A rather different concern would be that the schematic sentences are too
complex to reconstruct from the simulation model, making the derivation too
hard to follow. To this, I reply that the example is only meant to be an intelligible reconstruction of the simulation, and therefore does not need to be
reconstructed in full length. Simpler schematic sentences can be derived directly from the subroutines (i.e., NBODY, STORE, and ACC), provided that
we know the input variables and the return value. Alternatively, we could
rewrite some of the subroutines as a more comprehensive schematic sentence.
For instance, the subroutine that calculates the square of intrinsic angular
momentum could be rewritten as: SqIAM(P1, P2, P3, V1, V2, V3) : D1, D2, where {P1, P2, P3, V1, V2, V3} are the input variables and D1 and D2 are the
return values. This is a case similar to the one mentioned in section 2.1 concerning “black boxes” in simulation practice. There I claimed that we should
always expect documentation that provides a specification of the subroutines,
the members in a structure or class, and so forth. In this sense, and regardless
of how a subroutine is programmed, we can always know how it works, what
sort of input variables are taken, and what values are returned. This moves
the level of interpretation of the simulation model one step up, and solves the
problem of limitations in reconstructing schematic sentences.
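This specification-level reading can itself be illustrated in code. The sketch below is my own rendering of SqIAM: callers rely only on the documented inputs and return value, while the body, fleshed out here with the cross products of schematic sentence 7(c) and returning the squared momentum itself so the example runs, could equally be treated as a black box.

```python
# Sketch (mine) of the SqIAM subroutine read at the level of its
# specification. The body is my reconstruction from schematic sentence
# 7(c); for explanatory purposes only the documented interface matters.
def SqIAM(P1, P2, P3, V1, V2, V3):
    """Square of the intrinsic angular momentum, from the documented
    inputs {P1, P2, P3, V1, V2, V3} (positions and velocities)."""
    D1 = P2 * V3 - P3 * V2
    D2 = P3 * V1 - P1 * V3
    D3 = P1 * V2 - P2 * V1
    return D1**2 + D2**2 + D3**2

# A caller can check the specification without inspecting the body: a
# body at position (1, 0, 0) moving with velocity (0, 1, 0) has unit
# squared angular momentum.
assert SqIAM(1.0, 0.0, 0.0, 0.0, 1.0, 0.0) == 1.0
```

Regardless of how SqIAM is actually programmed, knowing its inputs and return value suffices for the schematic reconstruction, which is the point made above about documentation and black boxes.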
Let me close this section by briefly addressing one last concern. Current
practice in simulations typically makes use of multi-scale computing as well
as parallel computing. In this respect, the explanatory schema needs to account for more than a single simulation model. I believe that the general ideas
sketched here survive in cases of more complex simulations. First, multi-scaling
and parallelizations might not be explanatorily relevant, since they are methods
for increasing the performance of the computation rather than shaping the results. Second, and as discussed in the previous paragraph, even if multi-scaling
and parallelizations are explanatorily relevant, they can be reconstructed at
the right level of abstraction and included either as a schematic sentence (optimal situation) or in the comment section. In either case, this approach accounts
for multi-scale and parallel computing.
3.1.2 Understanding
Explanations work, when they do, not only in virtue of the right explanatory
relation, but also because they provide genuine scientific understanding. The
Ptolemaic model, for instance, could not explain the trajectory of planets in
any epistemically meaningful way since it fails to provide understanding of its
mechanics. On the other hand, classical Newtonian models explain precisely
because they describe the structure of planetary motion in a comprehensible
way. The criterion for entrenching one type of model as explanatory rather
than the other depends, in part, on their relative capacity for yielding understanding of the phenomenon under scrutiny.
Scientific explanation is an epistemic enterprise par excellence. We want to
explain because we expect to gain further understanding of the phenomenon
under scrutiny and, in doing so, make the world a more transparent and comprehensible place. To the unificationist, understanding comes from seeing connections and common patterns in what initially appeared to be brute or independent of any body of beliefs. ‘Seeing’ here is understood as the epistemic
maneuver of reducing a phenomenon to a greater theoretical framework, such
as our corpus of knowledge. By showing why a given phenomenon becomes
part of a greater theoretical framework, says the unificationist, we obtain a
more unified picture of nature, a more coherent and robust corpus of beliefs,
and more confidence in our theories, among other epistemic gains.
When a simulated phenomenon is explained, a similar epistemic maneuver
is performed: by running a computer simulation we expand the number of independent simulated phenomena seeking an explanation; and by carrying out
the explanation, we incorporate such independent simulated phenomena into
a larger body of knowledge, namely, the simulation model. Now, since the simulation model is built out of pieces of our body of scientific beliefs, it must be
expected that the simulated phenomena would also later be incorporated into
this body of beliefs. Let it be noted, however, that here we have a two-step
inclusion: firstly, the simulated phenomenon is related to our ‘simulational’
body of beliefs and secondly, via the simulation model, to our general body
of scientific beliefs. In this vein, I believe, the epistemic gain of explaining
simulated phenomena goes beyond the canonical unificationist picture of understanding. To my mind, the cognitive and epistemic acts of understanding
also lead to conveying information that exhibits why certain simulated phenomena behave in a given manner. They also lead to re-shaping the way we
think about empirical phenomena, and in redefining and reconceptualizing our
theoretical background about the target system.
Moreover, explaining simulated phenomena also presupposes a practical
dimension, one that encompasses grasping the technical difficulties behind
coding more complex, faster, and more realistic simulations, interpreting verification and validation processes, and conveying information relevant for the
internal mechanism of the simulation. All of these factors are expected to enhance our general understanding of the simulated phenomena, as well as of
their empirical counterparts.
The epistemic question can now be addressed face-on: explanation in computer simulations provides understanding of the given simulated phenomenon
as well as of the empirical phenomenon that it describes. We understand because computer simulations can show us why independent simulated phenomena can be unified with our body of ‘computational’ beliefs as well as general
scientific beliefs. Similar to the unificationist, there is no understanding of one
isolated simulated phenomenon, but rather of a multiplicity explained again
and again with the same simulation model. Here is where the epistemic power
of computer simulations can be seen at its best: a simple computer simulation
yields understanding of an enormous multiplicity of simulated phenomena, and
through it, of an equal multiplicity of empirical phenomena. In addition, we
understand because we convey information that exhibits the behavior of the
simulated phenomena, revealing new ways of reshaping our general view of
the world. On a practical dimension, understanding a simulated phenomenon
facilitates the overall assessment of the simulation model and our modeling
practices.
Consider again the example of the satellite and the planet in section 2.1.
We can explain why the spikes in Figure 2 occur because there is a well-defined
pattern structure that enables us to derive a description of the spikes from the
simulation model. By explaining, then, we exhibit the behavior of the satellite
and the planet, and in doing so we obtain useful information about their
interaction, convey accurate information about a system that, ex hypothesi,
we did not have, and incorporate that simulated phenomenon into a greater
corpus of scientific beliefs. By explaining, then, we conceive the satellite-planet
body interaction in a way that is now familiar to us, that is, unified with our
body of established scientific knowledge. In an identical manner, understanding
the spikes is vital to the conceptualization and optimization of the computer
simulation as a whole, for in this way we can achieve higher performance of
the simulation while decreasing the amount of computer resources.
By the same token, we can explain and understand a simulation with a
round-off error of approximately 1/1000 in the total simulation time in the
same sense just given. This is possible because now we are able to see how
an explanation connects with previous results, how they link together with
the current corpus of beliefs (that is, classical Newtonian mechanics), and even
how future results might look. In this manner, the epistemic access to the phenomenon becomes more transparent and, with respect to our previous corpus
of beliefs, more unified. At the epistemic level, these explanations are incorporated into our corpus of beliefs, systematizing and unifying it, thus simplifying
our general view of the world.
In other words, we understand the multiplicity of simulated phenomena
because we can explain them and, in doing so, we are able to incorporate them
into our corpus of scientific beliefs. Moreover, we understand further simulated
phenomena because we can explain them using the simulation model again
and again. The unificationist spells out this idea by saying that the process of
explaining leads to understanding, which makes the world a more transparent
and intelligible place. By explaining simulated phenomena, we likewise make the world a more transparent and intelligible place. The difference, nevertheless, is that we can simulate substantially more phenomena than non-computationally based scientific practice can.
4 Scientific explanation and computer simulations
Why is it important to entrench the legitimacy of scientific explanation for
computer simulations? The first and most straightforward reason is that researchers depend on the results of simulations as reliable sources for understanding the empirical world. It is a desirable aim, then, to be able to explain the
results of a simulation as a means for explaining their empirical counterpart.
Moreover, if it is correct to say that much scientific practice is now shifting
to a centralized use of computer simulations, then our access to the world is
not uniquely regulated by modeling, measuring, and observing phenomena,
but also by computing special kinds of models. In this sense, explaining results places computer simulations on the path towards a more justified scientific practice and, as such, makes them good candidates for driving scientific progress.
Standard literature takes note of scientific explanation as a central epistemic feature of computer simulations. For instance, Beisbart says: “It is arguable that some scientific computer simulations provide explanations. If computer simulations are arguments and if explanations are arguments (or are at
least built upon arguments), it is obvious how computer simulations can figure
in explanation.” [3, 429]. El Skaf and Imbert [4] are another good example of
how philosophers acknowledge explanation as an important epistemic feature
of computer simulations. They state that “[e]xperiments, computer simulations
and thought experiments (hereafter E, CS and TE) are traditionally assigned
different roles in scientific activity. For example, TE are often seen as ways of
exploring conceptual apparatus and developing theorizing (Kuhn 1964), and
CS as ways of providing theoretical explanations or making predictions, which
E hardly contribute to.” [4, 3452]. It is not part of these authors’ aims to
discuss the logic of scientific explanation, but rather to acknowledge the role
of computer simulations in explanatory contexts. Nevertheless, it makes plain
the importance of a logic of scientific explanation for computer simulations.
Recent attempts have centered on the work of two main actors:6 Paul Weirich and Ulrich Krohs, who share a similar view on explanation, especially regarding the framework in which computer simulations should be embedded.
6 I must state explicitly that the work on explanation by Jordi Fernández [5] and Marcin
Milkowski [19], for instance, is not considered here, as it is focused on cognitive science and
artificial intelligence.
In line with other authors (e.g., [9], [23], [13], among others), Weirich believes that computer simulations directly implement a scientific model, and
thus there is no epistemic difference between the two. That is, no methodology
mediates between scientific models and the computer. Weirich puts this claim
in the following way: “[i]ndividuals in an explanatory simulation stand in a
one-one correspondence to individuals in an explanatory model. Some relations
of individuals in the simulation represent relations of individuals in the model.
Individuals in the simulation stand in those relations if and only if their counterparts in the model stand in the corresponding relations” [25, 164]. Now, if
there is no epistemic distinction between the simulation and the implemented
model, then the latter has all the necessary explanatory force. In other words,
the computer simulation plays no role in the explanatory schemata other than
providing results (to be explained). In fact, this is precisely Weirich’s role for
computer simulations: “[a] simulation shows the results of assumptions [of the
model]” [25, 161].
Thus understood, the model implemented as a simulation explains the
results, and in this respect explanation need not go above or beyond well-known theories of model explanation. In the author’s own words, “[f]or the
simulation to be explanatory, the model has to be explanatory” [25, Abstract],
and “the simulation draws explanatory power from the model that guides
it. For example, a computer simulation of an economic market explains the
emergence of an efficient allocation of goods if the model it follows does”
[25, 156]. Thus understood, Weirich’s explanation is unable to account for the
orbital eccentricity trending steadily downwards, as shown in our example,
because round-off errors are partially responsible for this effect and are not
represented in the scientific model.
To my mind, what Weirich is calling the ‘explanatory power of computer
simulation’ is actually a model explanation (or something along those lines)
of the results of computer simulation. This is, of course, different from taking
computer simulation as carrying explanatory force and, as such, accounting
for its logic of explanation.
A similar claim is made by Krohs, although with some remarkable differences from Weirich. Krohs shares the assumption that computer simulations
are scientific models (‘theoretical’ models in Krohs’s terminology) implemented
on the computer, which refer to the internal mechanisms of a real-world phenomenon. In this regard, he says “[s]imulations provide numerical solutions to
models. They are run primarily when models cannot be integrated analytically
[...] but may, of course, be helpful also in cases where analytical methods are
in fact available.” [16, 278] The explanatory force for Krohs, as for Weirich,
stems from an external model rather than the simulation itself. Krohs makes
this point clear when he says that “[s]uch models may be regarded as not
only describing, but also as explaining, the process under consideration.” [16,
278]. He restates this point later on by saying that “[t]he explanatory relation holding between simulation and real world is [...] an indirect one. In the
triangle of real-world process, theoretical model, and simulation, explanation
of the real-world process by simulation involves a detour via the theoretical
model” [16, 284]. Now, unlike Weirich, Krohs makes explicit the explanatory
framework wherein he accommodates computer simulations. To his mind, to
explain is to exhibit the mechanisms that bring about the dynamics of the
system modeled (i.e., the real-world phenomenon) as described in the model
[16, 283-284]. In this way, Krohs paves the way for a mechanistic theory of
explanation. In other words, he expects to explain the why of the results by
explaining the how. Krohs’s reconstruction of how computer simulations explain real-world phenomena is unsatisfactory. First, because it depends on the
idea that the scientific model implemented on the computer has explanatory
force, despite acknowledging that it is different from the simulation model. Let
us note that this objection is slightly different from Weirich’s, who takes the
scientific model to be the same as the simulation model and therefore the explanatory force comes from the latter. To Krohs, there is a difference between
these two kinds of models. In fact, he openly acknowledges a methodology for
computer simulations. Despite all this, he still takes the theoretical model as
accounting for the explanandum. By contrast, I have claimed that since we want
to explain the results of the simulation, and since the shape of such results
depends on the simulation model, then it must be the simulation model, rather
than the theoretical model, that is the best candidate for the explanans. Second, despite subscribing to a mechanistic theory of explanation, Krohs never
provides any details on how the explanation is carried out. The underlying
assumptions are that a simulation is the implementation of a dynamic model,
that such a dynamic model exhibits the mechanism of a phenomenon of interest, and that such mechanisms can be accounted for by the mechanistic theory
of explanation. In short, he leaves it to the mechanistic theory to provide the
details of how to carry out an explanation, without actually showing how this
is possible. I believe that this is not a satisfactory answer, as it is crucial to
Krohs’s thesis to vindicate the mechanistic account as a suitable candidate,
along with showing the explanatory relation and the epistemic gain.
If these considerations are correct, then neither Weirich nor Krohs can offer
a satisfactory account of scientific explanation for computer simulations. I admit that more needs to be said in order to fully settle things for the unificationist, especially when philosophers are more inclined towards causal/mechanistic
explanations. Nevertheless, as we get serious about the logic of explanation for
computer simulations, and about its nature as a special kind of scientific model,
it becomes increasingly evident that a nomothetic account is the most suitable. Here I have only scratched the surface of the issue; much more work needs to be done, especially on expanding the account to other classes of computer simulation.
Acknowledgments
Thanks go to Paul Humphreys and Claus Beisbart for useful commentaries
and helping improve this work. Thanks also go to Raphael van Riel for a
fellowship where this paper was discussed. Finally, this paper is in debt to the
many discussions I held with my research group. For this, special thanks go
to Manuel Barrantes, Itat Branca, and Andrés Ilcic.
References
1. Barberousse, Anouk, Sara Franceschelli, and Cyrille Imbert. 2009. Computer Simulations
as Experiments. Synthese 169(3): 557-574.
2. Barberousse, Anouk, and Marion Vorms. 2013. Computer Simulations and Empirical
Data. In Computer Simulations and the Changing Face of Scientific Experimentation,
Ed. Juan M. Durán and Eckhart Arnold. Cambridge Scholars Publishing.
3. Beisbart, Claus. 2012. How can computer simulations produce new knowledge? European
Journal for Philosophy of Science 2(3): 395-434.
4. El Skaf, Rawad, and Cyrille Imbert. 2012. Unfolding in the empirical sciences: experiments, thought experiments and computer simulations. Synthese 190(16): 3451-3474.
5. Fernández, Jordi. 2003. Explanation by Computer Simulation in Cognitive Science. Minds
and Machines 13: 269-284.
6. Frigg, Roman, and Julian Reiss. 2009. The Philosophy of Simulation: Hot New Issues or
Same Old Stew? Synthese 169(3): 593-613.
7. Goldman, Nir. 2014. Accelerated Reaction Simulations: A Virtual Squeeze on Chemistry.
Nature Chemistry 6: 1033-1034.
8. Guala, Francesco. 2002. Models, simulations, and experiments. In Model-based reasoning:
science, technology, values, Ed. L. Magnani and N. J. Nersessian, 59-74. Springer.
9. Hartmann, Stephan. 1996. The world as a process: simulations in the natural and social
sciences. In Modelling and simulation in the social sciences from the philosophy of science point of view, Ed. R. Hegselmann, Ulrich Mueller, and Klaus G. Troitzsch, 77-100.
Springer.
10. Humphreys, Paul W. 1990. Computer Simulations. PSA: Proceedings of the Biennial
Meeting of the Philosophy of Science Association 2: 497-506.
11. Humphreys, Paul W. 2004. Extending ourselves: Computational science, empiricism,
and scientific method. Oxford University Press.
12. Humphreys, Paul. 2013. What Are Data About? In Computer Simulations and the
Changing Face of Scientific Experimentation, Ed. Juan M. Durán and Eckhart Arnold.
Cambridge Scholars Publishing.
13. Keller, Evelyn Fox. 2003. Models, Simulation, and ‘Computer Experiments’. In The Philosophy of Scientific Experimentation, Ed. Hans Radder, 198-215. University of Pittsburgh
Press.
14. Kitcher, Philip. 1981. Explanatory unification. Philosophy of Science 48(4): 507-531.
15. Kitcher, Philip. 1989. Explanatory unification and the causal structure of the world. In
Scientific explanation, Ed. by Philip Kitcher and Wesley C. Salmon, 410-505. University
of Minnesota Press.
16. Krohs, Ulrich. 2008. How digital computer simulations explain real-world processes.
International Studies in the Philosophy of Science 22(3): 277-292.
17. Lenhard, Johannes, and Eric Winsberg. 2010. Holism, Entrenchment, and the Future of
Climate Model Pluralism. Studies in History and Philosophy of Science Part B: Studies
in History and Philosophy of Modern Physics, Special Issue: Modelling and Simulation
in the Atmospheric and Climate Sciences, 41(3): 253-262.
18. Massimi, Michela, and Wahid Bhimji. 2015. Computer Simulations and Experiments:
The Case of the Higgs Boson. Studies in History and Philosophy of Science Part B:
Studies in History and Philosophy of Modern Physics 51: 71-81.
19. Milkowski, Marcin. 2016. Explanatory Completeness and Idealization in Large Brain Simulations: A Mechanistic Perspective. Synthese 193(5): 1457-1478.
20. Morgan, Mary S. 2005. Experiments versus Models: New Phenomena, Inference and
Surprise. Journal of Economic Methodology 12(2): 317-329.
21. Morrison, Margaret. 2009. Models, Measurement and Computer Simulation: The Changing Face of Experimentation. Philosophical Studies 143(1): 33-57.
22. Morrison, Margaret. 2015. Reconstructing Reality: Models, Mathematics, and Simulations. Oxford University Press.
23. Parker, Wendy S. 2009. Does matter really matter? Computer simulations, experiments,
and materiality. Synthese 169(3): 483-496.
24. Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery.
2007. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press.
25. Weirich, Paul. 2011. The explanatory power of models and simulations: A philosophical
exploration. Simulation & Gaming 42(2): 155-176.
26. Winsberg, Eric. 2010. Science in the Age of Computer Simulation. University of Chicago
Press.
27. Woolfson, Michael M., and Geoffrey J. Pert. 1999. An Introduction to Computer Simulations. Oxford University Press.
28. Woolfson, Michael M., and Geoffrey J. Pert. 1999. SATELLIT.FOR. In An Introduction
to Computer Simulations. Oxford University Press.