Biol Philos (2015) 30:757–786
DOI 10.1007/s10539-015-9499-6
Moving parts: the natural alliance between dynamical
and mechanistic modeling approaches
David Michael Kaplan1
Received: 1 June 2015 / Accepted: 21 July 2015 / Published online: 28 August 2015
© Springer Science+Business Media Dordrecht 2015
Abstract Recently, it has been provocatively claimed that dynamical modeling
approaches signal the emergence of a new explanatory framework distinct from that
of mechanistic explanation. This paper rejects this proposal and argues that
dynamical explanations are fully compatible with, even naturally construed as,
instances of mechanistic explanations. Specifically, it is argued that the mathematical framework of dynamics provides a powerful descriptive scheme for
revealing temporal features of activities in mechanisms and plays an explanatory
role to the extent it is deployed for this purpose. It is also suggested that more
attention should be paid to the distinctive methodological contributions of the
dynamical framework including its usefulness as a heuristic for mechanism discovery and hypothesis generation in contemporary neuroscience and biology.
Keywords: Mechanism · Dynamics · Explanation · Models · Neuroscience
Introduction
Over the past two decades, philosophers have increasingly focused on the important
role that mechanistic explanations play in the biological sciences including
neuroscience (Bechtel 2008; Bechtel and Richardson 1993/2010; Craver 2007;
Machamer et al. 2000). As the mechanistic perspective comes to occupy a dominant
position in philosophical thinking about explanation across this wide range of
scientific disciplines, it is natural to pose questions concerning its limits. Recently, a
Correspondence: David Michael Kaplan, [email protected]
1 Department of Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders (CCD), Perception in Action Research Centre (PARC), Macquarie University, Australian Hearing Hub 3.822, 16 University Drive, Sydney, NSW 2109, Australia
number of challenges have been raised about the limits of the mechanistic approach
to explanation in the context of biology, neuroscience, and cognitive science
(Chemero 2011; Chemero and Silberstein 2008; Dupré 2013; Lappi and Rusanen
2011; Stepp et al. 2011; Von Eckardt and Poland 2004; Weiskopf 2011; Woodward
2013). The general thrust of these arguments is that the mechanistic framework is
more fragmentary and restricted in scope than previously assumed, and that some
explanations constructed in these disciplines cannot legitimately be subsumed under
this framework. In this paper, I address and reject several recent claims that one
must embrace a non-mechanistic perspective on dynamical modeling in neuroscience. In particular, I expose the shortcomings of law- and prediction-based
conceptions of dynamical explanation (Bressler and Kelso 2001; Chemero 2011;
Chemero and Silberstein 2008; Stepp et al. 2011; van Gelder 1998; Walmsley
2008). After clarifying the challenges facing these prominent non-mechanistic
approaches, I emphasize the advantages of adopting a mechanistic perspective on
dynamical explanation. In doing so, I build upon previous efforts to establish a tight
link between models of neural dynamics and models of neural mechanisms (Bechtel
and Abrahamsen 2010; Kaplan and Bechtel 2011; Kaplan and Craver 2011; Zednik
2011). I move beyond this previous work by specifically arguing that the framework
of dynamics provides a powerful descriptive scheme for revealing the complex
temporal organization of activities in mechanisms (neural or otherwise) and has
explanatory value to the extent it is deployed for this purpose. It also serves as a
useful heuristic for mechanism discovery and hypothesis generation. Consequently,
I maintain that the distinctive contributions of the dynamical framework are
descriptive and methodological, rather than explanatory. Although the discussion
centers on examples from neuroscience, the lessons are intended to apply to other
areas of biological science where dynamical modeling is employed.
It is an undeniable fact that the mathematical tools of dynamics, or more
specifically dynamical systems theory (hereafter DST), are beginning to have a major
impact across many sectors of contemporary neuroscience and biology. The basic
toolkit of DST (described in more detail shortly), which includes differential
equations, geometric state-space analyses, and other visualization techniques such as
attractor landscapes and bifurcation diagrams, is playing an increasingly important
role in modeling and explaining the activity of neural systems. For example, the
dynamical approach has recently been used to model preparatory activity observed
across neural populations in primary motor cortex (Churchland et al. 2012; Shenoy
et al. 2013); persistent neural activation involved in working memory (Compte et al.
2000; Tank and Hopfield 1987) and long-term associative memory (Jeffrey 2011;
Wills et al. 2005); recurrent activity patterns in decision-making circuits (Wong and
Wang 2006); neural coding in the olfactory system (Laurent 2002); and dynamic
activations across large-scale brain networks during motor adaptation (Ahrens et al.
2012) and coordination tasks (Jantzen et al. 2009; Jirsa et al. 1998; Kelso et al. 1998).
As these representative examples indicate, dynamical modeling has considerable
reach in neuroscience and is being used to characterize activity patterns across a
staggering diversity of neural systems ranging from individual synapses and single
neurons to local neural circuits and entire brain networks (for book-length
treatments, see Amit 1992; Fuchs 2013; Izhikevich 2007; Sporns 2011).
The growing interest surrounding dynamical modeling approaches, coupled with
a research focus emphasizing dynamic patterns of activity over the brain structures
upon which these patterns are based, has emboldened some philosophers to assert
that this powerful toolkit also provides a radically new explanatory paradigm that is
importantly distinct from, and provides a competing alternative to, the dominant
framework of mechanistic explanation. Chemero and Silberstein (2008) frame the
issue as follows:
[I]t seems obvious to many contemporary cognitive scientists that explanations of cognition ought to be mechanistic…A growing minority of cognitive
scientists, however, have eschewed mechanical explanations and embraced
dynamical systems theory. That is, they have adopted the mathematical
methods of nonlinear dynamical systems theory, thus employing differential
equations as their primary explanatory tool. (2008, 11)
Clark (1997) offers a similar characterization:1
Dynamical Systems theory also provides a new kind of explanatory
framework. At the heart of this framework is the idea of explaining a
system’s behavior by isolating and displaying a set of variables (collective
variables, control parameters, and the like) which underlie the distinctive
patterns that emerge as the system unfolds over time and by describing those
patterns of actual and potential unfolding in the distinctive and mathematically
precise terminology of attractors, bifurcation points, phase portraits, and so
forth. There are many ways in which a typical Dynamical Systems explanation
varies from a traditional, component-centered understanding. (1997, 115)
Finally, Stepp et al. (2011) maintain:
Dynamical explanations do not propose a causal mechanism that is shown to
produce the phenomenon in question. Rather, they show that the change over
time in the set of magnitudes in the world can be captured by a set of
differential equations. (2011, 432)
It might readily be admitted, along with these authors, that DST does indeed
introduce something distinctive and new—for example, a new set of mathematical
techniques for describing patterns of change over time in neural systems, or even a
novel framework for conceptualizing cognition and neural activity in non-representational or non-computational terms (van Gelder 1995, 1998). However,
what is under dispute in this paper is whether dynamics also constitutes a distinct
and competing explanatory framework to that of mechanism. In what follows, I
argue that dynamical models do not compete with mechanistic models; rather,
dynamical models provide one important set of resources, among many other
resources, for describing aspects of mechanisms. The relationship between
dynamics and mechanism is one of subsumption, not competition. In particular,
1. Although Clark’s (1997) characterization emphasizes the novelty of DST, it is important to
acknowledge that the conclusions he ultimately defends about the nature of dynamical explanation differ
markedly from those targeted in this paper. In fact, Clark emphasizes the need for a rapprochement of
mechanistic (what he calls ‘‘componential’’) and dynamical approaches, much as the current paper does.
dynamical models are especially well suited to reveal the temporal organization of
activity in neural systems. Because dynamical models are subsumed within the
broader toolkit for describing mechanisms, their explanatory value can be seen as
clearly depending on the presence of an associated account (however incomplete) of
the parts in the mechanism (and their interactions) that support, maintain, or
underlie these activity patterns. Dynamical modeling approaches do not signal the
emergence of a new explanatory framework in neuroscience distinct from that of
mechanistic explanation. Instead, dynamical models with explanatory force can
readily be understood within the mechanistic framework. Importantly, because of
the synergies between mechanistic and causal accounts of explanation, this
convergence of dynamical and mechanistic models also indicates a broader
alignment with the widely embraced ideal that explaining a phenomenon is a matter
of revealing how it is situated in the causal structure of the world (e.g., Salmon
1984).
The paper is organized as follows. In the ‘‘Dynamical systems theory: a primer’’
section, to provide the requisite background, I review some of the central concepts
of DST. In ‘‘The HKB model’’ section, I outline a paradigmatic example of a
dynamical model—the Haken–Kelso–Bunz (HKB) model. In the ‘‘Non-mechanistic
approaches to dynamical explanation’’ section, I discuss two non-mechanistic
approaches to dynamical explanation and identify their problems. In ‘‘The
mechanistic approach to dynamical explanation’’ section, I review the mechanistic
approach to dynamical explanation and discuss how dynamics are a real but
underemphasized aspect of most major accounts of mechanistic explanation,
centrally embodied in the concept of the temporal organization of a mechanism. In
‘‘The HH model’’ section, I illustrate how the Hodgkin–Huxley (HH) model of the
action potential, long a centerpiece of discussions of mechanistic explanation,
successfully integrates dynamical and mechanistic approaches, thus showcasing
how decomposing a mechanism and modeling its dynamics are complementary
endeavors.2 In ‘‘The natural alliance between dynamical and mechanistic
approaches’’ section, I provide a general characterization of the relationship
2. There is a large philosophical literature addressing how to understand the explanatory import of the HH
model. Craver (2006, 2007, 2008) argues that, at least in its original form, the HH model is a partial or
incomplete mechanistic explanation. More specifically, he argues the model is an explanatorily deficient
mechanism sketch because it does not reveal critical parts—ion channels—in the mechanism underlying
the observed conductances. Kaplan (2011) defends a similar view. Bogen (2005) offers a different,
although compatible, interpretation of the HH model that highlights the non-explanatory but nonetheless
important descriptive and predictive roles it plays. Levy (2014) rejects the idea that the original HH
model is incomplete, and instead argues that abstraction from mechanistic detail is its primary
explanatory virtue. According to Levy, models such as the HH model require a new ‘‘analytical category’’
within the mechanistic perspective to cover cases in which abstraction from certain kinds of underlying
structural detail is an intentional strategy (see also, Levy and Bechtel 2013). Critically, despite their
disagreements, both Craver and Levy maintain that the HH model instantiates a kind of mechanistic
explanation. Others such as Weber (2008) embrace a covering-law interpretation of the model according
to which its real explanatory weight is carried by implicit physical laws such as Ohm’s law, the Nernst
equation, and Coulomb’s law. On this view, the mechanistic details merely serve to specify the relevant
background or initial conditions for application of the laws. This view has played a less central role in
subsequent debates, and Craver (2008) provides a powerful rejection of this view. Although it is
inessential to the argument being made in the present paper, mechanistic interpretations of the HH model
are undeniably widespread in recent philosophy of science.
between dynamical and mechanistic modeling approaches in neuroscience.
Specifically, I maintain that the frameworks of dynamics and mechanism are
natural allies—each plays a valuable role in the common enterprise of describing
the parts, activities, and organization of neural mechanisms. Finally, in the ‘‘Conclusion’’ section, I revisit the HKB model and interpret it in the light of these
general principles.
Dynamical systems theory: a primer
In order to assess the claim that DST constitutes an alternative, non-mechanistic
explanatory framework, we need a clearer idea about its central features (for
detailed mathematical treatment, see Abraham and Shaw 1992; Strogatz 2014). DST
comprises a powerful and highly general set of mathematical techniques for
modeling, analyzing, and visualizing time series data. At the core of every
dynamical model is one or a set of differential equations or difference equations,
which contain variables and parameters that capture how different properties in the
system being modeled change over time. An especially powerful feature of these
models is that they can be used to explicitly and precisely track the time evolution of
multiple system variables and parameters and their mutual influence on one another.
Even though differential equations are well-established scientific tools for
modeling patterns of change, which have been successfully applied to an
exceedingly large range of systems, they are not without limitations. One important
limitation is that only the simplest differential equations admit of exact, closed-form
solutions—the function or set of functions satisfying a given equation. Nevertheless,
some properties of systems characterized by unsolvable differential equations may
be determined without finding their exact analytical form. Numerical methods and
computer simulations are often used to approximate solutions to differential
equations to arbitrary levels of precision (e.g., Mascagni and Sherman 1989). The
DST strategy goes beyond the idea of searching for (exact or inexact) solutions to
the equations defining a given dynamical system, and instead aims to model its
long-term behavior qualitatively in terms of trajectories in a geometrically-defined
space.
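The point about numerical approximation can be made concrete with a small illustrative sketch (the example system is chosen for exposition and does not appear in the paper): forward-Euler integration replaces an unsolvable or inconvenient differential equation with repeated small linear steps, which is often enough to reveal the system's long-term behavior.

```python
# Illustrative sketch (not from the paper): forward-Euler integration of the
# nonlinear system dx/dt = x - x^3. Rather than seeking a closed-form
# solution, we approximate trajectories numerically and observe where they
# converge.

def euler(f, x0, dt=0.001, steps=20000):
    """Approximate the state after steps*dt time units via small linear steps."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

f = lambda x: x - x**3  # fixed points at x = -1, 0, +1

# Trajectories from nearby initial states converge to the stable fixed
# points at +1 or -1; x = 0 is an unstable equilibrium separating them.
print(euler(f, 0.1))   # ~ +1.0
print(euler(f, -0.1))  # ~ -1.0
```

The numerical result already exhibits the qualitative structure (two attractors, one repellor) that the geometric DST concepts introduced next make explicit.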
Many distinctive concepts of the DST framework are thus geometric in nature. A
state space is the set of all possible values or states that a given dynamical system
can take or be in over time. Each system variable defines a corresponding dimension
in this space, so that the dimensions of this space reflect the total number of state
variables (e.g., the dynamical system in Fig. 1 is defined by two variables, y1 and
y2). Plugging in different initial parameter settings and values to the relevant
differential equation specifies different possible trajectories of the system within
that space. More specifically, the differential equation or evolution rule serves to
define a vector field—an assignment of an instantaneous direction and magnitude of
change to each point in the state space (Fig. 1a). An individual solution trajectory is
a particular temporal sequence of states through this vector field, from some initial
starting state of the system. The set of solution trajectories of the system is called a
flow (Fig. 1b). Many dynamical systems eventually converge on a small subregion
Fig. 1 Basic constructs of DST. a A vector field for a two-dimensional dynamical system defined by the two differential equations ẏ1 = f1(y1, y2) and ẏ2 = f2(y1, y2). b A flow diagram depicting representative solution trajectories for the same system. c A phase portrait depicting the system’s limit sets, their stabilities, and their basins of attraction. Dots denote system equilibrium points. Stable limit sets are colored blue, unstable limit sets are colored red. The circular blue trajectory is a limit cycle. Source: Reprinted from Beer (2000) with permission from Elsevier. (Color figure online)
of the state space, called a limit set. If the system state enters a limit set,
the dynamics will operate to keep it there and small external perturbations will only
alter the system momentarily. Afterward, it will converge back to the same state.
Limit sets can either be single points (fixed or equilibrium points) or trajectories that
loop back on themselves (limit cycles or oscillations) (Fig. 1c). For a stable limit
set, also known as an attractor, all nearby trajectories will converge to it. The set of
points that converge to an attractor over time is termed a basin of attraction. Finally,
the flow and overall attractor landscape can depend on other independent parameters
(called control or order parameters). Sometimes the flow field of a dynamical
system changes continuously or smoothly as one of these control parameters is
varied. Other times, however, a discontinuity or jump may occur in the flow pattern
with a continuous change in the parameter value, called a bifurcation. A few
additional details about DST will be introduced in what follows, but these basics
will provide the necessary foundation to understand how the DST and mechanistic
frameworks relate to one another.
The HKB model
To see the DST toolkit in action, consider a canonical example of a dynamical
model—the Haken–Kelso–Bunz (HKB) model of bimanual coordination (Haken
et al. 1985)—which has been heavily discussed both in the philosophical literature
on dynamical explanation (Chemero 2011; Chemero and Silberstein 2008; Gervais
and Weber 2011; Walmsley 2008) and by researchers in cognitive science and
neuroscience seeking to understand bimanual coordination (Fuchs 2013; Swinnen
2002). The HKB model remains one of the most widely tested quantitative models
of human motor behavior, which has been the focus of intense investigation in
motor neuroscience for over twenty years.
In the original experiment, subjects were instructed to perform a bimanual
coordination task involving repetitive side-to-side (oscillating) motion of their index
fingers in the transverse plane in time with a pacing metronome. Movements were
either in-phase (simultaneous, mirror-symmetric movements toward the midline of
the body; Fig. 2a) or anti-phase (simultaneous, parallel movements to the left or
right of the body midline; Fig. 2b). The metronome speed is an independent variable
under experimental control. This simple experiment yielded several interesting
observations. First, subjects can reliably perform both in-phase and anti-phase
coordination patterns at low oscillation frequencies. Second, in trials where
movement frequency is increased beyond a certain critical threshold, subjects can
no longer maintain the anti-phase pattern and switch involuntarily into in-phase movement. Third, only in-phase movement is observed above the critical frequency. Finally, trials initiated in
the in-phase pattern do not switch as movement frequency increases, even when
exceeding the critical frequency.
The core of the HKB model is the differential equation expressing the rate of
change of the phase relationship (relative phase) between the fingers over time:
dφ/dt = −a sin φ − 2b sin 2φ        (1)
where φ represents relative phase (in-phase, φ = 0; anti-phase, φ = ±180°);
a and b are empirically fitted parameters reflecting the oscillation frequency of the
coupled fingers; and b/a, the so-called coupling ratio, is directly proportional to the
movement oscillation period and inversely related to frequency. In the language of
DST, relative phase is a collective variable, a relational quantity reflecting the
cooperation among individual components of a system; and the coupling ratio b/a is
a control parameter since small, continuous changes in its value can induce abrupt,
discontinuous changes (bifurcations) in system behavior (see, e.g., Kelso 1995;
Fig. 2c). The equation essentially characterizes how the derivative or rate of change
Fig. 2 The HKB experiment and model. a In-phase bimanual finger oscillation with relative phase φ = 0. b Anti-phase bimanual finger oscillation with relative phase φ = 180°. c Three-dimensional vector field diagram. Thick lines indicate stationary or fixed points of the system (i.e., where dφ/dt = 0). Thick black lines indicate stable fixed points or equilibria. Thick orange lines indicate unstable fixed points or equilibria. Curved surface and color gradients depict the overall attractor landscape for the system. Source: Reprinted from Zednik (2011) with permission from The University of Chicago Press; Adapted from Kelso (1995). (Color figure online)
over time in relative phase (dφ/dt) is a periodic function of the collective variable
and the control parameter.
Another important aspect of the HKB model is the corresponding DST analysis.
According to this dynamical analysis, the observed behavioral regularities can be
characterized as a dynamical system with an overall attractor landscape that changes
as a function of the control parameter reflecting movement oscillation speed. A
vector field diagram (Fig. 2c) provides a useful way of visualizing the system’s
dynamics. It plots the time derivative of relative phase (dφ/dt; Fig. 2c, z-axis)
against relative phase (φ; x-axis) for different control parameter values (b/a; y-axis).
Stationary patterns (fixed points) of the system are those regions of the state space
where relative phase remains unchanged or invariant (dφ/dt = 0; Fig. 2c, thick
lines). When the slope of dφ/dt is negative along the x-axis, the fixed points form
stable equilibria of the system and operate as attractors. When the slope of dφ/dt is
positive, the fixed points are unstable equilibria and repelling.
As indicated, the critical feature of the HKB model concerns how the overall
attractor landscape varies with changes in the control parameter. When oscillation
frequencies are relatively low (b/a > 0.25), there are two stable attractors
corresponding to both in-phase and anti-phase coordination patterns. In this regime,
the stable anti-phase pattern is also flanked by two unstable fixed points that
demarcate its basin of attraction. When oscillation frequencies are relatively high
(b/a < 0.25), only in-phase coordination has a stable attractor and the anti-phase
pattern becomes unstable. More specifically, as the system moves in the direction of
a decreasing b/a ratio (movement from top to bottom along the y-axis in Fig. 2c),
which is associated with higher oscillation frequencies, the stable attractor for
anti-phase coordination disappears and is subsequently transformed into an unstable
equilibrium point (at b/a = 0.25). At this critical frequency, the system undergoes a
phase transition or bifurcation. Once the system crosses this threshold, any small
perturbation will push the system towards the stable fixed point attractor
corresponding to the in-phase movement pattern.
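This stability analysis can be checked with a short sketch (parameter values are illustrative; the stability criterion is the slope condition just described, not code from the paper): a fixed point of Eq. 1 is stable exactly when the slope of dφ/dt is negative there, and evaluating that slope at the anti-phase point reproduces the loss of stability as b/a falls below 0.25.

```python
# Hedged sketch: linear stability of the HKB fixed points for
# dphi/dt = -a*sin(phi) - 2*b*sin(2*phi). Differentiating with respect to
# phi gives the slope -a*cos(phi) - 4*b*cos(2*phi); a fixed point is a
# stable attractor when this slope is negative.

import math

def slope(phi, a, b):
    """Slope of dphi/dt at phi (negative => stable equilibrium)."""
    return -a * math.cos(phi) - 4 * b * math.cos(2 * phi)

def stable(phi, a, b):
    return slope(phi, a, b) < 0

a = 1.0  # illustrative value; only the ratio b/a matters for stability
for ratio in (0.5, 0.25, 0.1):  # b/a decreasing = movement speeding up
    b = ratio * a
    print(ratio, stable(0.0, a, b), stable(math.pi, a, b))
# b/a = 0.5 : in-phase stable, anti-phase stable   (bistable regime)
# b/a = 0.1 : in-phase stable, anti-phase unstable (only in-phase survives)
```

At φ = π the slope is a − 4b, which passes through zero precisely at b/a = 0.25, the bifurcation point of the model; the in-phase point (slope −a − 4b) remains stable throughout.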
The HKB model provides a compact and accurate mathematical description of
subject performance in this bimanual coordination task, accommodating all features
of the data set described above, including the precise switch point from the anti-phase to in-phase movement pattern, the stability of both coordination patterns at
low speeds, etc. The model is not merely descriptive since it also generates novel
behavioral predictions including what happens when an ongoing coordination
pattern is perturbed, which have been confirmed by subsequent experiments (e.g.,
Scholz and Kelso 1989).3 The DST framework in general provides a powerful
3. It is worth digressing momentarily to note how the predictive power of the HKB model and other
dynamical models helps to allay the worry that they are merely descriptive. The objection proceeds as
follows. For any given data set, an equation can always be constructed that fits a curve connecting each
data point in that set (given the standard provisos about the tradeoffs between model generalization and
overfitting). This kind of ad hoc, curve-fitting exercise results in a model or equation that, at best,
provides a compact summary or redescription of the data and not an explanation of the phenomenon
responsible for generating the data. Given this background, the objection continues, perhaps dynamical
systems models are purely descriptive, curve-fitting models of this kind (Rosenbaum 1998; van Gelder
1998; Walmsley 2008). The natural dynamicist response is to state that dynamical models frequently go
beyond merely describing the data for which they were constructed in the specific sense that they are
descriptive and predictive scheme, which is justifiably finding wide application
across the neurosciences. Indeed, DST is an invaluable tool for characterizing the
complex (e.g., nonlinear) patterns of change over time in the activity of neural and
cognitive systems that might otherwise resist efficient description. The key question,
however, is whether it also provides an explanatory scheme distinct from that of
mechanistic explanation. Before moving on, it should be acknowledged that there is
an important sense in which the speeding up of movement frequency does explain
the spontaneous transition from anti-phase to in-phase dynamics. The explanation
provided is naturally construed as causal or etiological—it explains what occurs by
citing the relevant antecedent events or conditions (i.e., causes). Critically, this is
not the sense of explanation at stake in the current discussion. Instead, what is at
issue is explaining why a given system spontaneously switches between these
movement patterns with changes of speed.
Non-mechanistic approaches to dynamical explanation
To the extent that philosophers and dynamicist researchers have weighed in on the
nature of dynamical explanation, they have embraced a surprisingly radical view
according to which dynamical models embody an entirely distinct form of
explanation from the framework of mechanistic explanation that dominates research
across the biological sciences including neuroscience. Bechtel (1998b) captures the
view clearly, stating that dynamical research ‘‘employs a very different model of
explanation than that which underlies most modeling in cognitive science…what
Richardson and I call mechanistic explanation’’ (1998b, 306). According to the
conservative mechanistic perspective defended here, dynamical models with
explanatory import do not take a different form from that possessed by mechanistic
explanations. Instead, explanatory dynamical models instantiate, and form a proper
subset of, mechanistic explanations.4 Before defending this view, however, it is
worth exploring these alternative, non-mechanistic positions and their associated
problems.
Two distinct but interrelated strands can be identified within the non-mechanistic
outlook on dynamical explanation. According to the first strand, some dynamical
models are explanatory because they are instances of covering-law explanations.
According to a second and more general strand, dynamical models explain in virtue
of their predictive power. Both face serious problems. It is worth emphasizing at the
outset that, although the application to dynamical models is new, many of the
(Footnote 3, continued)
capable of making quantitative predictions about how a system will behave in untested conditions. This,
however, does not suffice to establish a dynamical model’s explanatory credentials, since predictive force
is not equivalent to explanatory force (Kaplan and Craver 2011).
4. These two views are not exhaustive of the range of possible positions. For instance, another logically
weaker view one might defend is that some dynamical explanations are mechanistic, while others are
non-mechanistic. This seems to be the view embraced by Zednik (2011). He maintains that some
explanatory dynamical models are mechanistic, whereas others instantiate covering-law explanations. For
reasons discussed shortly, this view is not tenable.
problems identified in what follows have been raised before in various guises in the
philosophy of science. As will become clear, however, many of these old lessons
have been too readily discarded or ignored when attention is directed to new
scientific models. The conservative point in this section is that traditional
discussions in the philosophy of science concerning what is required for an
explanation (dynamical or otherwise) as opposed to a merely descriptively or
predictively adequate model can help to clarify the current debate over dynamical
explanations.
The covering-law approach
According to the covering-law (CL) model, explanation involves (deductive or
inductive) subsumption of some event or phenomenon to be explained under a law
or set of laws (Hempel 1965). More specifically, a characterization of the relevant
empirical conditions under which the phenomenon obtains (initial conditions) and
the relevant laws is supposed to provide good evidential grounds for expecting the
occurrence of the explanandum-phenomenon. On this account, explanation is
closely linked to prediction, a point to which I return below.
According to the CL approach to dynamical explanation (hereafter DCL),
explanatory dynamical models are just instances or special cases of CL explanations
(Bechtel 1998b; Bechtel and Abrahamsen 2002; Walmsley 2008). A common
strategy among adherents of DCL is therefore to construe certain canonical
dynamical models as fitting precisely into the subsumption-under-law formula.
Along these lines, Walmsley (2008) suggests that the HKB model ‘‘conforms very
well’’ to the CL view:
[W]e have a case where the explanandum in any given case is a deductive
consequence of the law (expressed by Eq. 1) combined with the initial
conditions (expressed as values of a, b, and φ). The equation is actually
required for the derivation of the explanandum, the explanandum is a
deductive consequence of the explanans, and the explanans has empirical
content (it was, after all, discovered on the basis of observations of rhythmic
finger movement in human subjects). So, the logical conditions of adequacy of
a covering law explanation, as set out by Hempel and Oppenheim above, are
met. (Walmsley 2008, 341, author’s original emphasis)
As suggested above, the DCL approach is relatively common in the literature.
Zednik (2011) goes so far as to characterize it as the ‘‘received view’’ about
dynamical explanation. Despite its prevalence, proponents have failed to address
many of the standard objections to the general nomological conception of
explanation upon which it is based.
Given the foundational assumption that laws are required for explanation,
defenders of DCL open themselves up to several pressing challenges that remain
unaddressed. First, it must be shown that explanatory dynamical models either
explicitly cite some law or implicitly convey information about the existence of the
relevant law. This enterprise is rendered difficult by the fact that it requires a
reasonably precise notion of law, according to which laws can be distinguished
successfully from non-laws or accidental generalizations.[5] Having a satisfactory
account of this distinction is critical to the success of DCL because without an
account of what criteria a generalization must satisfy in order to qualify as a law, it
will be difficult if not impossible to characterize the role laws play in successful
explanation (dynamical or otherwise). Despite its widely noted importance, the
issue of what the appropriate criteria for lawhood are remains mired in
controversy within philosophy of science, and it is unlikely that any of the proposed
criteria can successfully demarcate laws from accidental generalizations (Hempel
1965; Salmon 1989/2006; Woodward 2000, 2003). Second, even granting that a
suitable notion of law is available, a deeper challenge involves justifying the claim
that the mathematical generalizations featuring in dynamical models satisfy this
notion and so are accurately construed as laws of nature. Although some
dynamicists elect to describe their models as involving laws (e.g., Bressler and
Kelso 2001; Kelso 1995; Schöner and Kelso 1988), it remains an open question
whether this practice is defensible and whether dynamical generalizations genuinely attain
this status.
Philosophical commentators on dynamical cognitive science have offered
surprisingly little clarity on this issue. For example, Bechtel and Abrahamsen
(2002) and Walmsley (2008), quoted above, merely stipulate that the equations
featured in dynamical models have the status of genuine laws. Bechtel and
Abrahamsen (2002) write:
In a covering-law explanation, a phenomenon is explained by showing how a
description of it can be derived from a set of laws and initial conditions. The
dynamical equations provide the laws for such covering-law explanations, and
by supplying initial conditions (values of variables and parameters), one can
predict and explain subsequent states of the system. (Bechtel and Abrahamsen
2002, 267)
Passages like these mistakenly imply that virtually any correct generalization taking
the form of a differential equation should qualify as a law. Given this kind of overly
permissive characterization of the concept of law, dynamicist researchers may be
forgiven for conflating laws with generalizations, and relatedly, for failing to
appreciate that without a precise characterization of laws in hand one will, among
other things, be hard pressed to draw other important distinctions such as that
between explanation and description.
Defenders of DCL therefore face a choice regarding how to treat the
generalizations appealed to in dynamical explanations. One option is to claim that,
appearances notwithstanding, these generalizations do in fact satisfy many of the
standard criteria for lawhood and thereby legitimately qualify as laws. This, in turn,
would make them suitable to feature in CL explanations. A second option involves
claiming that, like many explanatory generalizations found in the special sciences,
the generalizations in dynamical models count as laws, but are so-called non-strict, qualified, or ceteris paribus laws (e.g., Fodor 1991; Pietroski and Rey 1995).

[Footnote 5: Hempel recognized this to be a serious barrier to his own account. Hempel (1965) considers a number of standard criteria for lawhood and comes to the conclusion that none are completely satisfactory. Salmon (1989/2006) and Woodward (2003) arrive at similarly pessimistic conclusions.]
The first option is problematic because the mathematical generalizations
featuring in dynamical models do not readily appear to meet many of the major
criteria for lawhood. It is widely assumed, for example, that whatever else a law
may be, it must at least be an exceptionless generalization with wide scope. By
scope I here mean the range of different individual systems (or types of systems)
over which a given generalization or model holds. For example, the gravitational
inverse square law has wide scope in the sense that it correctly applies to all massive
bodies throughout the universe. The motivating thought is that generalizations
admitting of exceptions or having scope restrictions will be vacuous in the sense that
they will fail to make determinate predictions or be explanatory in any other sense.
More specifically, given the logical structure of the CL framework, deductive
inference of the explanandum cannot occur unless the law statement in the
explanans takes the form of a universally quantified generalization (that ranges over
its specified domain without exception).[6] By contrast, the generalizations in
dynamical models, like most generalizations in biology, appear to be applicable to a
restricted range of systems. The HKB model, for example, covers an impressive
range of coordination patterns involving two or more oscillating components—
bimanual finger coordination in symmetric and anti-symmetric movement modes
(Haken et al. 1985), coordinated oscillatory movements across individual subjects
(Schmidt et al. 1990), and even certain forms of social coordination (Oullier et al.
2008). Nevertheless, its scope is restricted in certain ways. For example, the model
fails to apply to all rhythmic human limb movements such as those involved in
walking or running; as humans increase their movement speed, no discontinuous
shift from walking or running (anti-phase leg movements) to hopping (in-phase leg
movements) occurs (Rosenbaum 1998).[7]
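The scope worries above concern the HKB relative-phase equation, standardly written dφ/dt = −a sin φ − 2b sin 2φ (this is the "Eq. 1" Walmsley refers to, from Haken et al. 1985). A minimal numerical sketch (my own illustration, not drawn from the paper) shows the model's signature behavior: the anti-phase pattern (φ = π) is stable only while the ratio b/a exceeds 1/4, which is how the model captures the abrupt transition at higher movement frequencies.

```python
import math

def phase_dot(phi, a, b):
    # HKB relative-phase dynamics: dphi/dt = -a*sin(phi) - 2b*sin(2*phi)
    return -a * math.sin(phi) - 2 * b * math.sin(2 * phi)

def antiphase_stable(a, b, eps=1e-3):
    # phi = pi is always a fixed point; it is stable when the flow pushes
    # nearby states back toward pi (linearization gives: stable iff a - 4b < 0).
    return phase_dot(math.pi - eps, a, b) > 0  # flow toward pi from just below

# Slow movement (large b/a): both in-phase and anti-phase are stable.
print(antiphase_stable(a=1.0, b=1.0))   # True
# Fast movement (b/a < 1/4): the anti-phase attractor disappears.
print(antiphase_stable(a=1.0, b=0.1))   # False
```

The sketch reproduces the bistability-to-monostability transition but, as the text notes, says nothing about whether the equation holds at extreme frequencies outside the fitted regime.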
The traditional requirement that all laws be exceptionless is also problematic for
DCL because, at least on the face of it, most dynamical generalizations in
neuroscience appear to be far from exceptionless. Like generalizations across the
life sciences, many of the generalizations captured by dynamical models in
neuroscience are exception-ridden, holding only within a certain domain or regime
of changes and breaking down outside of these. For example, the HKB model
characterizes an abrupt transition from one pattern of bimanual coordination to another when the control parameter related to movement oscillation frequency exceeds some threshold level.

[Footnote 6: Although this characterization focuses on the specific challenges for explanations involving deductive subsumption under laws, it is also problematic for so-called inductive-statistical explanations involving inductive subsumption under statistical laws (Hempel 1965). Inductive-statistical explanations conform to the same general pattern, but are assessed according to whether the explanans confers high probability on the occurrence of the explanandum event. The admission of exceptions in a statistical law featured in the explanans could serve to lower the probability conferred on the explanandum, and consequently cause similar problems—albeit less severe—for inductive-statistical variants of the CL account.]

[Footnote 7: One may object that even though the HKB model does not apply to human locomotory behavior, it does apply to other rhythmic limb movements such as those involved in equine locomotion, and so the scope of the model is not quite as restricted as implied. Indeed, Kelso (1995) famously cites this interesting feature as a notable strength of the model. Nevertheless, this response is inadequate because a situation in which the HKB model has highly gerrymandered scope (the model applies to some but not all systems exhibiting rhythmic limb behavior) is hardly an improvement over one in which it has restricted scope.]

The model was originally constructed to accommodate data about movement frequencies on the order of a couple of cycles per second
(Haken et al. 1985). However, it remains uncertain whether the relationships
characterized in the HKB model remain invariant across all changes in this control
parameter—such as extremely rapid changes (e.g., accelerations) in movement
oscillation frequency or at extreme speeds (e.g., 1000 cycles/second) approaching or
exceeding biomechanical limits imposed by properties of the human musculoskeletal system. Answering this objection involves showing how the generalizations at
the core of dynamical models such as the HKB model do in fact describe
exceptionless regularities, thereby securing their status as laws and their role in covering-law explanations. To date, defenders of DCL have made little progress on this front.
Although philosophical discussion of this issue remains limited, philosophers
who have weighed in on the issue of the status of dynamical laws have tended to
emphasize how these generalizations support counterfactuals as justification for
incorporation into the DCL framework (e.g., Bechtel 1997; Clark 1997; van Gelder
1998). For instance, Clark suggests:
A pure Dynamical System account will be one in which the theorist simply
seeks to isolate the parameters, collective variables, and so on that give the
greatest grip on the way the system unfolds in time — including (importantly)
the way it responds to new, not-yet-encountered circumstances. (Clark 1997,
119; also quoted in Walmsley 2008)
Bechtel (1998b) similarly maintains:
One of the agreed upon characteristics of a law, though, is that it supports
counterfactuals. That is, a law would have to specify what would happen if the
conditions specified in its antecedent were met. DST accounts…are clearly
designed to support counterfactuals…This suggests that it may be appropriate
to construe these DST explanations as being in the covering law tradition.
(Bechtel 1998b, 311)
Given the centrality to DST of the idea of a state space embodying information about all the possible (and actual or observed) states and trajectories that the system can take (‘‘Dynamical systems theory: a primer’’ section), emphasis on counterfactual support should be unsurprising. Possible state space trajectories embody straightforward counterfactuals concerning what a given dynamical system would have
done if things had been different (e.g., what trajectory would have unfolded if the
initial state had differed in a specific way). In the above passage, Bechtel implies
that being counterfactual-supporting is a necessary condition for a generalization to
qualify as a law. Unfortunately, this criterion cannot distinguish laws from
accidental generalizations because many accidental generalizations support counterfactual predictions. Borrowing an example from Woodward (2003, 280), the
generalization ‘‘All the coins in Clinton’s pocket are dimes’’ is both accidental and
counterfactual-supporting. For instance, supposing as a background condition that
Clinton had a policy to permit only dimes in his pocket, the above generalization
supports the following counterfactual: ‘‘If c were a coin in Clinton’s pocket, then it
would be a dime’’ (Woodward 2003, 280). Examples like these are trivial to
construct, yet they all uniformly serve to demonstrate that appeals to counterfactual
support are inadequate to underwrite an account of laws.
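The sense in which a state space encodes counterfactuals can be made concrete with a toy system (an illustration of mine, not from the paper): integrating the same equation from different initial states answers the question "what trajectory would have unfolded if the initial state had differed?"—while leaving the lawhood question untouched.

```python
def trajectory(x0, steps=100, dt=0.05):
    # Euler integration of dx/dt = -x: each initial condition picks out
    # one possible trajectory through the state space.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))
    return xs

actual = trajectory(1.0)          # the observed run
counterfactual = trajectory(2.0)  # "what if the initial state had differed?"

# The equation answers the counterfactual question deterministically,
# but counterfactual support alone does not certify it as a law.
print(abs(actual[-1]) < 0.05, abs(counterfactual[-1]) < 0.1)  # True True
```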
Embracing the second option—construing dynamical generalizations as non-strict laws—is also fraught with difficulties. According to the general strategy, a
generalization with exceptions can still play an explanatory role (can make
determinate predictions, etc.) if it can be ‘‘completed’’ by specifying some further
set of conditions that, together with the conditions outlined in the original
generalization, are nomologically sufficient to generate the explanandum (Reutlinger and Unterhuber 2014). When appended with the appropriate completer, the
resulting generalization qualifies as an exceptionless law because the completer
operates to restrict or hedge the scope to just the range of circumstances where the
regularity holds. One well-known problem with this proposal is filling out the
completer clause in such a way as to avoid producing generalizations that are either
false or trivial (Earman and Roberts 1999; Woodward 2002, 2003). To date, there is
no consensus about whether this challenge can be met (Earman et al. 2002;
Reutlinger and Unterhuber 2014). Even if we assume that this specific problem can
be handled, other difficulties arise. For example, Woodward (2002) argues quite
plausibly that the very notion of a non-strict law incorporating qualifying clauses is
a poor philosophical reconstruction of a certain kind of causal generalization
common in the special sciences, which fails to map onto how these generalizations
are typically understood by the researchers who deploy them. All of these
considerations collectively indicate that the defenders of DCL have little to gain by
pursuing this strategy (or that they must work very hard to make this strategy pay
off).
Adherents of DCL therefore find themselves in the unenviable position of
defending a view of explanation that centrally relies on an account of laws that can
distinguish between laws and accidental generalizations, despite the fact that no
satisfactory account is currently available. For these reasons, proponents of DCL
must either face up to these difficult challenges or abandon the approach.
The predictivist approach
Another common non-mechanistic approach to dynamical explanation emphasizes
how the predictive power of dynamical models is central to their status as
explanations (Chemero 2011; Chemero and Silberstein 2008; Stepp et al. 2011; van
Gelder 1998). This view has been termed predictivism (Kaplan and Craver 2011).
Even though predictivism drops the problematic requirement that laws are needed
for explanation, it still bears close connections to the covering-law view since both
tightly link explanation and prediction.

[Footnote 8: The tight connection between explanation and prediction follows as a direct consequence of the CL account. If explanations take the form of arguments, then explanations and predictions will have the same logical structure. Hempel (1965) recognized this, and argued that every adequate explanation can serve as a potential prediction. For reasons explored in detail by Hempel (1965), he did not endorse the reverse claim that every adequate prediction is a potential explanation.]

Although adherents of predictivism have not yet provided a systematic characterization of the view, a picture begins to emerge from the following scattered remarks:
If models are accurate enough to describe observed phenomena and to predict
what would have happened had circumstances been different, they are
sufficient as explanations. (Chemero and Silberstein 2008, 12).
When carried out successfully, the [dynamical] modeling process yields not
only precise descriptions of the existing data but also predictions which can be
used in evaluating the model (van Gelder and Port 1995, 15)
[M]any factors are relevant to the goodness of a dynamical explanation, but
the account should at least capture succinctly the relations of dependency, and
make testable predictions. (van Gelder 1998, 625)
Dynamical explanations do not propose a causal mechanism that is shown to
produce the phenomenon in question. Rather, they show that the change over
time in a set of magnitudes in the world can be captured by a set of differential
equations…dynamical explanations show that particular phenomena could
have been predicted, given local conditions and some law-like general
principles. (Stepp et al. 2011, 432)
The view of explanation these authors collectively embrace is that the explanatory
power of dynamical models flows primarily (or exclusively) from their predictive
power. As indicated, prediction-based accounts of explanation are not new and
share many features in common with the CL model. Although predictivism
successfully avoids some of the problems associated with the covering-law
approach, especially the already discussed problems connected with laws, it is
nonetheless burdened with many of the same difficulties that dethroned the CL
model.
Here I briefly review two problems facing predictivism as an account of the
explanatory power of dynamical models that are not easily overcome (for further
discussion, see Kaplan and Bechtel 2011; Kaplan and Craver 2011). First, simple
examples demonstrate how prediction is insufficient for explanation. For example,
given information about the relevant regularity holding between changes in
barometers and the presence/absence of storms, a storm’s occurrence can be
predicted reliably from changing mercury levels in a barometer. Yet it seems
problematic to say that a drop in mercury explains the occurrence of the storm.
Instead, a common cause—a drop in atmospheric pressure—explains both the
falling barometer and the developing storm.
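The barometer case can be simulated in a few lines (a toy model of mine, not the author's): atmospheric pressure drives both the barometer and the storm, so the barometer predicts the storm perfectly, yet intervening on the barometer leaves the storm rate unchanged.

```python
import random

random.seed(0)

def weather(intervene_barometer=None):
    # Common cause: low pressure produces both a falling barometer
    # and a storm. The barometer has no causal influence on the storm.
    pressure_low = random.random() < 0.5
    barometer_falls = pressure_low if intervene_barometer is None else intervene_barometer
    storm = pressure_low
    return barometer_falls, storm

# Observation: the barometer is a perfect predictor of the storm...
obs = [weather() for _ in range(1000)]
print(all(b == s for b, s in obs))  # True

# ...but forcing the barometer down does not raise the storm rate.
forced = [weather(intervene_barometer=True) for _ in range(1000)]
storm_rate = sum(s for _, s in forced) / len(forced)
print(0.4 < storm_rate < 0.6)  # True: still ~50%; the intervention changed nothing
```

Prediction tracks correlation; explanation tracks the common cause, which is exactly the asymmetry predictivism cannot recover.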
Similarly, a dynamical model can be predictively adequate in so far as the model
predicts all the relevant aspects of the phenomenon with the required precision and
accuracy, and yet its variables may represent only magnitudes that merely correlate
with some other common cause for that phenomenon. Suppose you are observing a
set of three gears in an automotive transmission system. Gear A has a diameter half
that of gear B, and gear C a diameter twice that of gear B. Only gear B is directly
connected to the motor so that the rotational motion of gears A and C is produced
only via motion in gear B. As the engine turns over and power is delivered, the
angular velocities of all three gears change in synchronous fashion. Due to the ratio
between the gears (2:1:0.5), A will rotate twice as fast as B (in the opposite
direction), and four times as fast as C. Suppose that gear B is spinning at 100
revolutions per minute (rpm). In this case, A will rotate at a rate of 200 rpm and C
will rotate at a rate of 50 rpm. Because the behavior of all three gears is time-locked
to motor speed, their dynamics will be coupled (i.e., correlated). One could
therefore use information about the temporal dynamics (e.g., speed or acceleration)
of gear A to predict the dynamics of gear C (and vice versa). Yet it is strained to talk
about one explaining the other in this case because a common cause—gear B
attached to the motor—is fundamental to explaining the correlation between them.
Similarly, one could use temporal information about A to predict the behavior of B,
even though the rotational motion of B causally induces the motion of A. This is
equally problematic from the point of view of explanation.
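The gear arithmetic above can be checked directly (an illustrative sketch; the rpm figures come from the example in the text): angular speed scales inversely with diameter, so A and C are perfectly correlated through their common cause B.

```python
def gear_speeds(rpm_b, diam_a=1.0, diam_b=2.0, diam_c=4.0):
    # Driven gear speed = driver speed * (driver diameter / driven diameter).
    # B is the driver; A (half B's diameter) spins twice as fast as B,
    # C (twice B's diameter) spins half as fast as B.
    rpm_a = rpm_b * diam_b / diam_a
    rpm_c = rpm_b * diam_b / diam_c
    return rpm_a, rpm_c

rpm_a, rpm_c = gear_speeds(100.0)
print(rpm_a, rpm_c)  # 200.0 50.0

# A's speed predicts C's (rpm_c = rpm_a / 4) even though neither gear
# causes the other; B, the common cause, explains the correlation.
print(rpm_c == rpm_a / 4)  # True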
Explanations must respect this fundamental asymmetry between causes and
effects (and mere correlates), even though effects and correlated variables can be
highly useful predictors of their causes. The solid reasons for rejecting the claim that
the barometer drop explains the storm apply equally to merely predictively
adequate dynamical models. Consequently, the claim that their predictive force
alone endows them with explanatory import should be rejected. These and many
other similar examples illustrate that prediction is insufficient for explanation, and
that the predictive force of a given model does not directly correspond to and should
not be conflated with its explanatory force. Instead, what is needed is an account
that provides an understanding of precisely why the regularities that constitute the
phenomenon hold in the first place.
Because it assimilates explanatory and predictive power, another major problem
for the predictivist view is that it is incapable of capturing the explanatory gains
among predictively equivalent models that describe the causal structure of the target
system with increased accuracy. According to predictivism, the quality of an
explanation can be improved primarily by increasing its predictive power. Yet there
seem to be other ways—including building in more mechanistic details—to improve
an explanation, which the predictivist view cannot accommodate. Suppose a given
mathematical model includes a set of variables or parameters that capture a large
proportion of the variance in the target phenomenon, without specifying the causal
structures by which those variables change or by which those variables influence the
phenomenon (e.g., a linear regression equation whose fitted values reflect the use of
some parameter estimation method such as ordinary least squares to obtain the best-fit curve for a given data set). In this case, having more detailed information about
how these model variables map onto underlying structures, unless it increased the
predictive force of the model, would add nothing to the explanation according to
predictivism. These additions to the model would be explanatorily inert. Yet, this
flies in the face of widespread views about how scientific progress is achieved, and
specifically about the kinds of refinements and model-building activities that
produce better explanations. Although not all progress is achieved by increasing
model detail, a characterization of dynamics plus details about how the dynamics
are implemented carries more explanatory information and supports more (and more
precise) causal interventions on the target system (Woodward 2003).
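The regression case can be sketched concretely (my illustration; the data and fitting routine are hypothetical stand-ins): an OLS fit captures the variance and predicts perfectly, while remaining silent about causal structure, so on predictivism a mechanistic gloss could add nothing.

```python
def ols_fit(xs, ys):
    # Ordinary least squares for y = a + b*x:
    # b = cov(x, y) / var(x), a = mean(y) - b * mean(x).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    return my - b * mx, b

# Data standing in for some target phenomenon (generated by y = 3 + 2x).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
a, b = ols_fit(xs, ys)
print(round(a, 6), round(b, 6))  # 3.0 2.0

# The fitted equation predicts the data perfectly yet says nothing about
# the causal structure producing them; learning that structure would not
# improve prediction, so predictivism must count it explanatorily inert.
```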
Given the problems facing predictivism, what is the alternative? One appealing
possibility is to treat dynamical explanation as a kind of causal-mechanistic
explanation. This allows us to sidestep the problems outlined in this section (for
additional discussion, see Kaplan and Craver 2011). It also grounds dynamical
explanations in a dominant and well-understood form of explanation. According to
the view espoused in the next section, a dynamical model carries explanatory force
to the extent it reveals the patterns of change over time in the properties of the parts,
activities, and organizational features of the mechanism underlying the phenomenon
to be explained, and lacks explanatory force to the extent it fails to describe this
structure (see Bechtel and Abrahamsen 2010 for a similar view).
The mechanistic approach to dynamical explanation
Instead of emphasizing the gap between dynamics and mechanism, as the previously
considered approaches to dynamical explanation do, I now want to show how the
apparent gap is to be bridged. The first step is to show that dynamics have always
had a proper place in the mechanistic framework under the guise of temporal
organization, although its role in mechanistic explanations has been seriously
underemphasized. The second step involves walking through a paradigmatic
example of mechanistic explanation that explicitly incorporates dynamics, and
showing how dynamical modeling approaches such as DST supply powerful tools
that complement the mechanistic framework. I take this up in the next section.
When biologists and neuroscientists put forward explanations, they frequently
seek to identify the mechanism responsible for maintaining, producing, or
underlying the phenomenon of interest (Bechtel 2008; Bechtel and Richardson
1993/2010; Craver 2007; Machamer et al. 2000). In other words, they seek to
provide mechanistic explanations. Mechanistic explanations invariably involve the
articulation of three basic elements: (a) the component parts, (b) the component
operations or activities, and (c) the organization of the parts and their activities in
the mechanism as a whole. In spite of their differences, all major accounts of
mechanistic explanation identify a key role for each of these core elements.
For example, Kauffman (1970) describes ‘‘articulation of parts’’ explanations in
biology as those that ‘‘exhibit the manner in which parts and processes articulate
together to cause the system to do some particular thing’’ (1970, 257). Bechtel and
Richardson (1993/2010) define mechanistic explanations as those that ‘‘propose to
account for the behavior of a system in terms of the functions performed by its parts
and the interaction between these parts’’ (1993, 17). Machamer et al. (2000) define a
mechanism in terms of ‘‘the entities and activities organized such that they are
productive of regular changes from start or set-up to finish or termination
conditions’’ (2000, 3), and contend that adequate mechanistic explanation must
correspondingly describe the entities, activities, and organization present in the
target mechanism. Finally, Bechtel and Abrahamsen define a mechanism as ‘‘a
structure performing a function in virtue of its component parts, component
operations, and their organization’’, and go on to maintain that adequate mechanistic
explanations will necessarily elucidate this structure.
Given the fact that most major accounts of mechanistic explanation build in an
explicit role for temporal (and spatial) organization, it is prima facie surprising that
dynamicists have thought it plausible to portray the mechanistic perspective as
somehow hostile to dynamics. One charitable way of interpreting the dynamicists’
anti-mechanistic stance is that they are accurately zeroing in on a current deficiency
in existing mechanistic accounts. Despite the fact that lip service is frequently paid
to its importance, it is undeniably true that elucidating the role that organization
plays in mechanistic explanations remains the most underdeveloped aspect of most
major accounts. Nevertheless, the concept of organization is, at its core, given a
central role in mechanistic explanations.
Brief reflection reveals how organization is critical to the performance of
mechanisms, and in turn, to mechanistic explanations. Consider the internal
combustion engine. An engine cycles through four basic steps or strokes: (1) intake,
(2) compression, (3) combustion, and (4) exhaust. Engines are composed of
structural parts including cylinders, pistons, spark plugs, and intake valves. These
components are fundamentally moving parts, performing activities such as sliding,
sparking, and opening. Critically, these parts and their dynamic operations must
work together in a highly organized manner for the overall engine mechanism to
function properly. Organization can be spatial or temporal, and both are often
important. The components must bear specific spatial relationships to the other parts
with which they have causal interactions. For example, pistons must be located
within the cylinders, which in turn must be spatially proximate and mechanically
linked to the crankshaft via connecting rods. This ensures that vertical motion in the
cylinders can be transmitted to the crankshaft to produce torque in the axles.
No less important is the precise temporal organization of the activities performed
by the engine parts. Activities have intrinsic temporal properties such as duration
and rate and relational properties such as order or relative timing. Often these must
be precisely organized to ensure the proper functioning of a mechanism. For
example, the spark plugs must emit their spark at the top of the compression stroke
of the pistons, so that combustion can effectively drive the pistons back down the
cylinder producing rotational motion in the crankshaft. Similarly, the opening and
closing of the intake and exhaust valves must be timed precisely so that they are
sealed shut during compression and combustion (resulting in a sealed combustion
chamber) and open during the exhaust stroke. These comprise the engine dynamics.
The state of each of the components and the global state of the overall engine
system could in turn be quantified and plotted as a function of time. Each could also
be subjected to a dynamical analysis according to which the evolving state is
represented as a trajectory in a suitable state space.
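The engine's temporal organization can be rendered as a simple state machine (a schematic sketch of mine, not an engineering model): each stroke fixes the required states of the valves and spark, and perturbing the spark's timing while leaving every part intact breaks the cycle.

```python
# Required valve/spark configuration for each stroke of a four-stroke cycle.
CYCLE = [
    ("intake",      {"intake_valve": "open",   "exhaust_valve": "closed", "spark": False}),
    ("compression", {"intake_valve": "closed", "exhaust_valve": "closed", "spark": False}),
    ("combustion",  {"intake_valve": "closed", "exhaust_valve": "closed", "spark": True}),
    ("exhaust",     {"intake_valve": "closed", "exhaust_valve": "open",   "spark": False}),
]

def runs_properly(spark_at):
    # Same parts, same activities; only the temporal organization varies.
    # The engine functions only if the spark fires during combustion.
    for stroke, required in CYCLE:
        sparks = (stroke == spark_at)
        if sparks != required["spark"]:
            return False
    return True

print(runs_properly("combustion"))  # True: correct temporal organization
print(runs_properly("intake"))      # False: same parts, wrong timing
```

The point of the sketch is the one made in the text: perturbing timing alone, with components unchanged, can have catastrophic effects on performance.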
Organization is thus a necessary part of most moderately complex mechanisms
such that perturbing either the spatial organization or temporal dynamics of a
mechanism, even while the components and their activities remain unchanged, can
have appreciable (even catastrophic) effects on its performance. Thinking about
mechanistic explanation, then, it is clearly insufficient to describe only the properties
and activities of the component parts in a given mechanism without giving adequate
weight or attention to the spatial and/or temporal organization of those parts and
activities. Often this point is underappreciated or lost when considering the nature of
mechanistic explanation. One prominent exception is Bechtel and Abrahamsen
(2010). They offer the following augmented characterization to explicitly highlight
how temporal dynamics figure into mechanisms and mechanistic explanations:
A mechanism is a structure performing a function in virtue of its component
parts, component operations, and their organization. The orchestrated
functioning of the mechanism, manifested in patterns of change over time
in properties of its parts and operations, is responsible for one or more
phenomena. (authors’ original emphasis, 2010, 323)
The discussion in this section reinforces how even relatively simple mechanisms,
such as internal combustion engines, can exhibit rather complex temporal
organization, a consequence of which is that understanding the dynamical
‘‘structure’’ of a mechanism can be just as important as understanding its physical
structure.[9] This last point rings especially true in the context of neuroscience, where
neural mechanisms often exhibit a wide range of complex dynamic patterns that are
critical to their proper functioning. What, then, should one say about our efforts to
explain the dynamical organization of biological and neural mechanisms? Is the
mechanistic framework equipped to capture and explain neural dynamics, or must a
new explanatory framework be introduced?
The HH model
Our current understanding of the action potential—the basic currency of neural
signaling and communication in the brain—constitutes a prototype of mechanistic
research and explanation. It is also an exemplar of dynamical modeling.
Accordingly, it provides a case study for how the two approaches are natural allies.
Action potentials are rapid (~1 ms) fluctuations in the electrical potential across the neuronal membrane. The temporal organization—the dynamics—of action
potentials is well-known. The characteristic shape or waveform of the action
potential of the squid giant axon (Fig. 3a, top row) is typically decomposed into
several distinct phases.[10] During the rising phase, the membrane potential rapidly
depolarizes (i.e., becomes less negative) from its resting level of around -60 mV.
In the overshoot phase, depolarization transiently pushes the membrane potential to
its peak in positive territory. During the subsequent falling phase, the membrane
potential repolarizes (i.e., becomes more negative). Repolarization occurs to such an
extent that the membrane potential briefly becomes more negative than the resting
membrane potential, known as the undershoot phase, before returning to its steady state level.

[Footnote 9: Of course, understanding all aspects of temporal organization is not equally important for every mechanism. For example, understanding the precise temporal transition from one state to another in a digital logic gate or transistor may be relatively unimportant, as those intermediate, transitional states between on- and off-states are not critical to how the transistor performs its function.]

[Footnote 10: Although the action potential waveform observed in the squid axon is fairly typical, and closely resembles those recorded from myelinated axons of vertebrate motor neurons, it is important to note that the precise waveforms vary from neuron class to neuron class and from species to species.]

Fig. 3 Action potential and conduction dynamics. (a) Dynamics of voltage, current, and the key gating variables plotted against time (ms). Source: Reprinted from Dayan and Abbott (2001) with permission from The MIT Press. (b) Simulated spike train produced by a sustained current pulse. Source: Reprinted from Wang (2009) with permission from Elsevier. (c) Potassium activation variable n plotted against membrane potential (Vm). Different colors represent different initial conditions (with different Vm and n values at t = 0). The system exhibits a closed trajectory or stable limit cycle (attractor) in phase space, where all represented initial states evolve into the same oscillatory state. Source: Reprinted from Wang (2009) with permission from Elsevier. (Color figure online)
The primary goal of Hodgkin and Huxley’s modeling efforts was to characterize
the voltage- and time-dependent changes in membrane conductance—the conduction
dynamics—of sodium (Na⁺) and potassium (K⁺) ions and show that these were
sufficient to produce the form and time course of the action potential. Pioneering the
voltage clamp method in the squid giant axon, they experimentally confirmed that
ionic currents flow across the neuronal membrane when its electrical potential
departs from baseline levels (Fig. 3a, second row), and that the conductances for
Na⁺ and K⁺ ions exhibit rather different time courses. Despite possessing scant
information about the mechanisms by which these conduction changes occur, the
Hodgkin–Huxley (HH) model is nevertheless capable of reproducing the action
potential dynamics with remarkable accuracy as well as successfully predicting
many other major features of action potentials (Hodgkin and Huxley 1952). The
core of the model is the total current equation:

i_m = g_K n⁴(V − E_K) + g_Na m³h(V − E_Na) + g_L(V − E_L)    (2)
where i_m is the total current passing through the membrane, reflecting a potassium
current, g_K n⁴(V − E_K); a sodium current, g_Na m³h(V − E_Na); and a leakage current,
g_L(V − E_L), a sum of smaller currents for other ions. The terms g_K, g_Na, and g_L
represent the maximum conductances for the different ions. V is the displacement of
the membrane potential from rest, and E_K, E_Na, and E_L represent the equilibrium or
reversal potentials for the various ion species.11 Finally, the equation includes rate
coefficients (gating variables, in modern parlance) n⁴ and m³h representing the
fraction of the maximum conductances actually expressed, whose individual time
courses are each governed by separate differential equations (gating equations, in
modern parlance) in the full model. The modern interpretation of these fitted
expressions is that they capture the fraction of channels of a given type that are in
conducting (open) or non-conducting (closed) states at a particular time (e.g., Hille
2001).
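To make the reading of Eq. (2) concrete, the sketch below evaluates its three ionic currents for given values of V, n, m, and h. The conductance and reversal-potential figures are standard textbook values for the squid axon in the modern sign convention (as in Dayan and Abbott 2001); Hodgkin and Huxley's own paper expresses voltages relative to rest, so the numbers here are an assumption of convention, not a transcription of the original.

```python
# Standard squid-axon parameters, modern sign convention (Dayan and Abbott 2001).
g_K, g_Na, g_L = 36.0, 120.0, 0.3      # maximal conductances (mS/cm^2)
E_K, E_Na, E_L = -77.0, 50.0, -54.4    # reversal potentials (mV)

def membrane_current(V, n, m, h):
    """Total membrane current i_m of Eq. (2) for given voltage and gating values."""
    i_K = g_K * n**4 * (V - E_K)          # potassium current
    i_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
    i_L = g_L * (V - E_L)                 # leakage current
    return i_K + i_Na + i_L

# At rest (V around -65 mV) the steady-state gating values leave the three
# currents almost exactly in balance, so the total current is close to zero:
print(membrane_current(-65.0, n=0.318, m=0.053, h=0.596))
```

At rest the outward potassium and leakage currents cancel the inward sodium current, which is just the statement that the resting potential is a steady state of the model.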
A major achievement of the HH model involves characterizing the mapping from
the dynamics of the action potential onto the underlying conduction dynamics of
Na⁺ and K⁺ ions. This is illustrated in Fig. 3a, which plots the temporal evolution
of the key variables in the model. The initial climb (depolarization) of the
membrane potential (V) (Fig. 3a, top row) reflects the injection of a stimulating
positive current into the model at 5 ms (leftmost tick on the x-axis). At the same time,
a sharp inward current of Na⁺ ions moving into the neuron is also observed (Fig. 3a,
second row from top). The m variable, which captures the rapid activation of the
Na⁺ conductance (or the open state of Na⁺ ion channels), jumps precipitously in
value as this inward current begins to depolarize the membrane potential (Fig. 3a,
third row from top). Because of the higher extracellular concentration of Na⁺, this
increase in Na⁺ conductance allows positively charged Na⁺ ions to diffuse down
their concentration gradient and enter the neuron, thereby depolarizing the
membrane potential. The temporal coincidence of the initial increase in Na⁺
conductance in the model with the steep rising phase of the action potential strongly
suggests its role in action potential initiation. On a slightly slower timescale, the
h variable, which reflects the degree of inactivation of the Na⁺ conductance,
changes from less to greater inactivation (Fig. 3a, fourth row from top). The
coincidence of decreasing Na⁺ conductance with the falling phase of the action
potential indicates a role in repolarizing the membrane potential. At the same time,
depolarization due to changes in Na⁺ conductance also activates the slower
voltage-dependent K⁺ conductance, represented by the n variable, allowing K⁺ to exit
the neuron (due to the higher intracellular concentration of K⁺) and repolarize the
membrane potential (Fig. 3a, fifth row from top). The timing and nature of the K⁺
conductance changes imply that it too plays a contributing role in the falling phase.
Because the K⁺ conductance (n) is slow to terminate, it temporarily remains higher
than its resting level. This corresponds to the undershoot phase of the action potential.
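The division of labor just described—m fast, h and n slower—can be seen by integrating the model directly. The following is a minimal forward-Euler sketch; the rate functions and parameter values are the standard modern-convention forms found in textbooks such as Dayan and Abbott (2001), used here as assumptions rather than a transcription of Hodgkin and Huxley's original rest-relative units.

```python
import numpy as np

# Standard squid-axon parameters, modern sign convention (Dayan and Abbott 2001).
C = 1.0                                  # membrane capacitance (uF/cm^2)
g_K, g_Na, g_L = 36.0, 120.0, 0.3        # maximal conductances (mS/cm^2)
E_K, E_Na, E_L = -77.0, 50.0, -54.4      # reversal potentials (mV)

def rates(V):
    """Voltage-dependent opening (alpha) and closing (beta) rates for n, m, h."""
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Euler-integrate the HH equations under a sustained current step (uA/cm^2)."""
    steps = int(t_max / dt)
    V, n, m, h = -65.0, 0.318, 0.053, 0.596    # approximate resting steady state
    trace = np.empty(steps)
    for i in range(steps):
        an, bn, am, bm, ah, bh = rates(V)
        i_ion = (g_K * n**4 * (V - E_K) + g_Na * m**3 * h * (V - E_Na)
                 + g_L * (V - E_L))
        V += dt * (I_ext - i_ion) / C           # membrane equation
        n += dt * (an * (1.0 - n) - bn * n)     # gating equations: n, m, h each
        m += dt * (am * (1.0 - m) - bm * m)     # relax toward a voltage-dependent
        h += dt * (ah * (1.0 - h) - bh * h)     # steady state at its own rate
        trace[i] = V
    return np.arange(steps) * dt, trace

t, V = simulate()
# The sustained current yields repetitive spiking: each peak overshoots 0 mV and
# is followed by an undershoot below the resting potential, as in Fig. 3a, b.
```

Plotting V against t reproduces the waveform of Fig. 3a (top row); plotting n, m, and h reproduces the gating-variable rows beneath it.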
11 Electrochemical equilibrium is defined by the precise balancing of two opposing forces: (1) a
concentration gradient, which causes ions to flow from regions of higher concentration to regions of lower
concentration, and (2) an opposing electrical gradient that develops as charged ions diffuse down their
concentration gradients across a permeable membrane, taking their electrical charge with them. The
electrical potential generated across the membrane at electrochemical equilibrium, otherwise known as
the equilibrium or reversal potential, can be computed using the Nernst equation (for a single permeant ion
species) and the extended Goldman equation (for more than one permeant ion species). For further
details, see Dayan and Abbott (2001).
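The computation described in this footnote is straightforward to sketch. The concentration figures below are rounded squid-axon values of the sort tabulated in standard texts, chosen here for illustration:

```python
import math

def nernst(z, conc_out, conc_in, T=293.15):
    """Equilibrium (reversal) potential in mV from the Nernst equation."""
    R, F = 8.314, 96485.0                 # gas constant, Faraday constant
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Rounded squid-axon concentrations (mM) near 20 C (illustrative values only):
E_K = nernst(+1, conc_out=20.0, conc_in=400.0)    # comes out near -75 mV
E_Na = nernst(+1, conc_out=440.0, conc_in=50.0)   # comes out near +55 mV
```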
Critically, these key mappings are common to all mechanistic models—they
proceed from one mechanistic level to another (lower) mechanistic level. More
specifically, the mappings are from activity at the level of the neuronal system as a
whole (action potential dynamics) onto sub-activities localized to (then unknown)
individual component parts at some lower level (ion conduction dynamics). The HH
model therefore involves a straightforward functional decomposition and partial
localization (Bechtel and Richardson 1993/2010). It describes aspects of a
mechanism. However, it is a shallow mechanistic explanation that remains
incomplete or partial because the critical issue of how conduction occurs across
the neuron’s semi-permeable membrane remains entirely unexplained by the model
(Craver 2006, 2007).12 Put another way, the key model variables described above
correspond to real activities in the system (i.e., ion movement across the neuronal
membrane), but these activities remain ‘‘disembodied’’ in the model in the sense
that they are not associated with any determinate mechanism components or parts.
Accordingly, they only serve as formal placeholders or proxies for some part (or set
of interacting parts) that must be engaged in the activities described by the model,
even if currently unknown. What is therefore needed in order to deepen the
mechanistic explanation is a further specification of precisely how these functionally
individuated conduction dynamics reflect distinct activities, such as voltage-sensitive
gating (gating or channel dynamics), localized or implemented in real
structures such as ion channels. Adequate mechanistic explanation of the action
potential requires this.
Although Hodgkin and Huxley offered a tentative hypothesis concerning the
mechanism by which conductance changes occur, they openly acknowledged they
had no evidence to support their mechanistic conjecture (Hodgkin and Huxley
1952).13 It is now known that their speculations were incorrect, and that ion
channels—voltage-sensitive proteins spanning the lipid bilayer of the neuronal
membrane—provide ion-selective pathways through which ions of a particular type
can enter and exit the cell. Despite its noteworthy descriptive and predictive force,
the HH model was not yet a complete mechanistic explanation of the action potential because
some of the key components and activities in the mechanism responsible for
producing the target phenomenon were not mapped onto variables or parameters of
the model (Kaplan and Craver 2011).14 Although the original HH model
12 This claim is consistent with the idea that pragmatic factors might dictate that such detail is irrelevant
in a given explanatory context. This does not, however, negate the fact that explanations with such gaps
leave something more to be explained. Filling in those gaps comprises a kind of progress, even if that kind
of progress is not relevant in a given explanatory context.
13 At the end of their seminal 1952 paper, they state: ‘‘It was pointed out in […] this paper that certain
features of our equations were capable of a physical interpretation, but the success of the equations is no
evidence in favour of the mechanism of permeability change that we tentatively had in mind when
formulating them. The point that we do consider to be established is that fairly simple permeability
changes in response to alterations in membrane potential, of the kind deduced from the voltage clamp
results, are a sufficient explanation of a wide range of phenomena that have been fitted by solutions of the
equations’’ (Hodgkin and Huxley 1952, 541).
14 Kaplan and Craver (2011) propose a mapping requirement on mechanistic models, which they dub the
model–mechanism–mapping (3M) principle. 3M is intended to capture a central tenet of the mechanistic
framework, namely, that a model carries explanatory force to the extent it reveals aspects of the causal
successfully captures how action potentials reflect the fine temporal organization of
activities of underlying parts in the neuronal membrane—i.e., the conduction
dynamics of Na⁺ and K⁺ ions—it remains explanatorily incomplete because it fails
to describe the nature of the parts (voltage-gated ion channels) supporting these
critical activities and interactions. A major international research effort spanning
over half a century has been devoted to unravelling the physical structure and
dynamics of these important transmembrane channels (Choe 2002; Doyle
et al. 1998; Hille 2001) and in the process transforming the original HH model
into a more complete mechanistic explanation (Craver 2006, 2007). This research
agenda was not accidental, but instead reflects the general sense that more remains
to be explained and that fully understanding action potentials requires understanding
the mechanisms of voltage-sensitive channel gating.
The current HH model does not just instantiate a mechanistic explanation; it
also bears hallmark features of a dynamical explanation. The central phenomenon
with which it deals is dynamical in nature, involving patterns of change over time.
The model also comprises a set of differential equations, the standard mathematical
tools for describing dynamics or rates of change. Unsurprisingly, then, the key
dynamical variables in the HH model can be represented efficiently using dynamical
analyses of the sort described in Dynamical systems theory: a primer section. For
example, the variable n can be represented in terms of trajectories in a suitable state
space (Fig. 3b, c). The upward trajectory, where both V and n increase, corresponds
to the rising phase of the action potential. The leftward trajectory, where n continues
to increase yet V starts to decrease, corresponds to the action potential’s falling
phase. Finally, the downward trajectory, where n decreases while V starts to
increase, reflects the undershoot phase. The dynamical analysis reveals that the system
converges to an oscillatory attractor state from a number of different initial
conditions. This dynamical analysis provides a useful descriptive framework for
characterizing the activity of the model variables, and relatedly, the action potential
phenomenon itself. Yet, without an account of the component parts that implement
the dynamics, the dynamical analysis describes only part of the explanation.
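The convergence onto a limit cycle from different initial conditions (as in Fig. 3c) can be illustrated with a reduced two-variable spiking model. The sketch below uses the FitzHugh–Nagumo equations (a simplification of the HH system in the tradition of FitzHugh 1955) rather than the full HH model; the parameter values are conventional illustrative choices, not values from any text discussed here:

```python
import numpy as np

def fhn_trajectory(v0, w0, I=0.5, dt=0.01, steps=40000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations from (v0, w0)."""
    a, b, eps = 0.7, 0.8, 0.08           # conventional illustrative parameters
    v, w = v0, w0
    vs, ws = np.empty(steps), np.empty(steps)
    for i in range(steps):
        v += dt * (v - v**3 / 3.0 - w + I)   # fast "voltage-like" variable
        w += dt * eps * (v + a - b * w)      # slow recovery variable
        vs[i], ws[i] = v, w
    return vs, ws

# Two very different initial states...
v1, w1 = fhn_trajectory(-2.0, -1.0)
v2, w2 = fhn_trajectory(2.0, 1.5)
# ...settle onto oscillations of the same amplitude: a stable limit cycle.
# Plotting (v, w) for the late portion of each run shows both trajectories
# tracing out the same closed orbit in phase space.
```

The late-time behavior is the same regardless of where the system starts, which is what it means for the oscillation to be an attractor.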
One might object that while the temporal organization of activities involved in
spike generation can usefully be represented using the framework of dynamics, it is
an unnecessary overlay on the HH model and single neuron compartmental or
conductance-based modeling more generally. After all, although dynamical analysis
of threshold or spiking behavior in individual neurons has attracted attention for
some time (e.g., FitzHugh 1955; Izhikevich 2007), it seemingly occupies at best a
marginal role in most conductance-based modeling (Dayan and Abbott 2001). This
objection can be addressed by identifying a growing number of research areas in
contemporary neuroscience for which reliance on the dynamical framework is not
optional.
The advent and increasingly widespread adoption of large-scale neural recording
methods (e.g., using multi-electrode arrays or optical recording technologies)
Footnote 14 continued
structure of a mechanism (i.e., to the extent the model elements map onto identifiable components,
activities, and organizational features of the target mechanism).
capable of monitoring the activity of many dozens or even hundreds of neurons
simultaneously has necessitated a major shift in how neural data are analyzed and
interpreted (Brown et al. 2004; Cunningham and Yu 2014; Stevenson and Körding
2011). Specifically, analytic tools appropriate for describing the activity of
individual neurons such as tuning curves and peri-stimulus time histograms are
becoming increasingly obsolete in favor of dimensionality reduction methods
capable of deciphering large-scale activity patterns in neural populations. Such
methods are capable of producing low-dimensional representations of highdimensional data sets that are more readily interpretable and which preserve or
otherwise reveal dynamic patterns or latent temporal structure of interest in the data
that would otherwise be exceedingly difficult if not impossible to discern. Although
there are a number of available dimensionality reduction methods to help visualize
and extract dynamical structure from neural population activity, one increasingly
common dimensionality reduction technique is dynamical state space analysis, in
which the activity of a large population of neurons is reduced to a simplified
trajectory that evolves over time through a low-dimensional state space whose
dimensions capture the greatest variance in the data (e.g., Churchland et al. 2007,
2012; Shenoy et al. 2013; Yu et al. 2006). For example, Churchland et al. (2012)
characterize population activity in motor cortex during movement execution as
exhibiting a particular temporal structure—the neural trajectory simply rotates with a
phase and amplitude set by the initial state of motor preparation. Critically, this
latent rotational structure is only readily discernable using tools from dynamics.
Given the growing trend toward large-scale neural recordings, the dynamical
framework promises to occupy an ever more central position in neuroscience in the
future.
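A minimal illustration of the state-space idea: the sketch below builds a synthetic "recording" of 50 neurons driven by a hidden two-dimensional rotation and recovers the rotational trajectory with plain PCA. (Churchland et al. use a purpose-built method, jPCA; ordinary PCA on synthetic data is used here only to convey the idea, and all the data-generating choices are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": 50 neurons whose rates are random mixtures of a hidden
# 2-D rotating latent state, plus noise (a toy stand-in for real population data).
T = 500
t = np.linspace(0.0, 4.0 * np.pi, T)
latent = np.column_stack([np.cos(t), np.sin(t)])     # hidden rotation (T x 2)
mixing = rng.normal(size=(2, 50))                    # random readout weights
rates = latent @ mixing + 0.1 * rng.normal(size=(T, 50))

# PCA via SVD: project the 50-D population activity onto its top two components.
centered = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ Vt[:2].T                     # T x 2 low-dim trajectory
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
# Two components capture nearly all the variance, and the recovered trajectory
# traces out the rotation that is invisible in any single neuron's rate.
```

The point of the toy example is the one made in the text: the rotational structure is a property of the population trajectory, not of any individual neuron, and only becomes visible once the high-dimensional data are projected into a suitable low-dimensional state space.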
The natural alliance between dynamical and mechanistic approaches
Generalizing from the preceding discussion, the relationship between dynamical and
mechanistic modeling approaches in neuroscience is not one of competition or
opposition, but rather one of complementarity and interdependence. According to
the view defended here, they are natural allies in the effort to describe and explain
the complex behavior of neural mechanisms. Like many strong alliances, this one is
formed on the basis of mutual need. On the one hand, the mechanistic framework
must increasingly incorporate the powerful descriptive tools of dynamics in order to
reveal the rich temporal structure of neural activity and the interactions of neural
systems over time. On the other hand, the explanatory import of dynamical
models reflects their role in describing the dynamic activities of parts and organized
collections of parts of neural mechanisms or systems. Contrary to law- and
prediction-based accounts of dynamical explanation, there is no legitimate sense in
which dynamical models explain phenomena independently of describing mechanisms, either by subsumption under general laws or by appealing to their predictive
force alone.
This view is importantly distinct from two other closely related views in the
literature. Zednik (2011), for example, argues along similar lines, stating that in
certain cases ‘‘dynamical models and analyses are themselves used to describe the
parts and operations of a mechanism as well as its organization’’ (Zednik 2011,
248). In claiming that dynamical analyses can sometimes be deployed to describe
the activities of parts and temporal organization of mechanisms, and that in these
circumstances they should count as legitimate instances of mechanistic explanations, he is on common ground with the view defended here. However, Zednik
embeds this claim within a broader, pluralistic perspective about dynamical
explanation. According to this broader view, some dynamical models explain by
describing aspects of mechanisms, whereas others explain by subsuming phenomena under general laws. This represents a significant departure from the view being
advocated here. For reasons detailed above, dynamical explanations including the
HKB model do not instantiate covering-law explanations.
In various places, Bechtel and Abrahamsen have embraced a similar outlook to
the one defended in this paper. Importantly, Bechtel and Abrahamsen (2010)
maintain that dynamic mechanistic explanations preserve ‘‘the basic mechanistic
commitment to identifying parts, operations, and simple organization, but gives
equal attention to determining how the activity of mechanisms built from such parts
and operations is orchestrated in real time’’ (2010, 260). Elsewhere, they similarly
argue that the strategy of dynamical mechanistic modeling involves the selection of
‘‘properties of certain parts or operations of the mechanism that appear to be salient
to a particular dynamic phenomenon’’ which are subsequently ‘‘pulled into a
computational model as variables or parameters, thereby anchoring that model to
the mechanistic account’’ (Bechtel and Abrahamsen 2010, 323). One natural way of
interpreting the view expressed here is that they are embracing a pretty standard
requirement on mechanistic explanation, namely, that there is some description
(however incomplete) of the component parts and activities of the underlying
mechanism for the observed dynamics.15 In this respect, they are on common
ground with the view developed here. This is where the similarities end, however.
The first major point of departure between their view and the one espoused here
is that in several discussions they are non-committal or vague about the status of the
covering-law approach to dynamical explanation (Bechtel and Abrahamsen 2002;
Abrahamsen and Bechtel 2006). Consequently, they leave open the possibility that
dynamical models with explanatory import may be grounded in something other
than a description of mechanisms. For example, although they do not emphasize the
point, Bechtel and Abrahamsen (2002, 266–267) imply that the covering-law
account of explanation might serve equally well as the mechanistic account for
underwriting the explanatory force of dynamical models. In other places,
Abrahamsen and Bechtel (2006, 160–163) imply that a covering-law or broadly
unificationist account might similarly elucidate the nature of explanatory dynamical
models. This possibility is intentionally and explicitly prohibited by the current
view.
15 For a different interpretation of the view expressed by Bechtel and Abrahamsen (2010), see Zednik
(2011). Zednik puzzlingly maintains that they endorse the view that a given dynamical model ‘‘does not
itself describe the […] mechanism, but instead analyzes how the mechanism behaves over time’’ (Zednik
2011, 248). This interpretation is, however, exceedingly difficult to reconcile with the broader framework
presented by Bechtel and Abrahamsen.
Second, Bechtel and Abrahamsen underestimate the reciprocal influences
between the dynamical and mechanistic approaches. For example, Bechtel
(1998a) characterizes the relationship between dynamics and mechanism as follows:
The mechanistic and dynamical perspectives are hence natural allies…A longstanding feature of the mechanist perspective is that one needs constantly to
shift perspective between structure and function. When examining structure,
one focuses on (temporary) stabilities; when focusing on function, one focuses
on change. However, as soon as one decomposes the behavior of a structure,
one is concerned with the activity within the structure, activity that can change
the structure itself. Dynamics provides a set of tools for analyzing activity, but
the identification of structures often provides guidance about the entities
whose properties define the variables that change in value and whose patterns
of change are to be analyzed in dynamical terms. (Bechtel 1998a, 629)
Importantly, in claiming that the framework of dynamics provides a complementary
set of tools for analyzing the activity of mechanisms, Bechtel echoes a central claim
of the current view. Although the view comes close to the one defended here, it
differs in at least one important respect. In particular, in the quoted passage and
subsequent discussions of dynamical modeling approaches, Bechtel mistakenly
implies that the mechanistic perspective exclusively provides structural identifications and related analysis, whereas the dynamical perspective contributes an analysis
of the patterns of change or activity of these independently identified structures.
More recently, Bechtel and Abrahamsen (2010) have reinforced the earlier view,
stating that dynamical models (of circadian rhythms) are best conceived as proposals
‘‘to better understand the function of a mechanism whose parts, operations, and
organization already have been independently determined’’ (2010, 322). However,
the interplay between mechanistic and dynamic modeling approaches in contemporary neuroscience is often far more complex and interesting. For example,
dynamical analyses can and do play a role in mechanistic decomposition and
localization. Although the point is a methodological one, it is relevant because it
bears on the question of how the dynamical and mechanistic approaches are
integrated in the service of mechanism discovery, hypothesis generation, and
explanation building (Bechtel and Richardson 1993/2010; Craver and Darden 2013).
Recent work to model action potential dynamics more precisely has led to new
testable hypotheses about underlying mechanisms (Naundorf et al. 2006), which in
turn may lead to newly identified structures. Naundorf and colleagues sought to more
carefully measure what happens at the time of action potential initiation. They
observed onset dynamics approximately ten times faster than predicted by the HH
model, and hypothesized that this rapid onset likely reflects the cooperative activation
of neighboring Na⁺ channels. On pain of admitting mysterious action at a distance, this
proposal demands some (currently unknown) mechanism of channel–channel
interaction such as physical coupling whereby the opening of one ion channel can
alter a neighboring channel’s probability of opening. Interestingly, if true, this would
challenge the widespread assumption that ion channels operate independently of one
another. Although a computer simulation incorporating this feature accurately
reproduced the observed onset dynamics, channel cooperativity remains controversial
and unconfirmed in real neurons (McCormick et al. 2007; Naundorf et al. 2006,
2007). Nevertheless, this dynamical analysis at least potentially stands to do
considerably more than merely describe the activity of independently determined
mechanistic structures. It may potentially lead to the identification of new underlying
structures and mechanisms that can explain temporal features of action potentials.
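The qualitative effect at issue—cooperativity steepening the onset of channel opening—can be conveyed with a toy stochastic sketch. Everything below (the function names, the coupling rule, the parameter values) is an illustrative invention for this purpose, not Naundorf and colleagues' actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_open(coupling, n_ch=1000, steps=300, p0=0.01):
    """Fraction of channels open over time. `coupling` (a made-up knob) boosts
    each closed channel's opening probability in proportion to the fraction of
    channels already open; coupling=0 recovers independent gating."""
    open_ch = np.zeros(n_ch, dtype=bool)
    frac = np.empty(steps)
    for i in range(steps):
        p = min(1.0, p0 * (1.0 + coupling * open_ch.mean()))
        open_ch |= rng.random(n_ch) < p   # closed channels may open; none close
        frac[i] = open_ch.mean()
    return frac

def rise_width(frac):
    """Number of time steps taken to go from 10% to 90% of channels open."""
    return int(np.argmax(frac > 0.9)) - int(np.argmax(frac > 0.1))

indep = fraction_open(coupling=0.0)
coop = fraction_open(coupling=200.0)
# With coupling, the collective opening is dramatically steeper: once a few
# channels open, they recruit their neighbors, producing a faster "onset".
```

Independent channels produce a gradual, exponential-style rise in the open fraction; coupled channels produce an abrupt, nearly all-or-none transition, which is the kind of signature that led Naundorf and colleagues to their hypothesis.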
Conclusion
Return for a moment to the HKB model with which this paper began. Given the
preceding analysis, what should one say about the explanatory import of this
exemplary dynamical model? First of all, as indicated above, there is good reason to
grant that the model goes beyond mere description and possesses some degree of
explanatory power. Despite this, for reasons discussed above and elsewhere, its
descriptive and/or predictive adequacy alone is insufficient to account for its
explanatory status. Although many philosophical enthusiasts of dynamical modeling
appear to embrace these misguided views, it seems that at least some dynamicist
researchers are considerably closer to agreeing with the mechanistic perspective
characterized above. In particular, the importance of mechanistic considerations in
building and refining dynamical models such as the HKB model is now widely
acknowledged among prominent dynamicist researchers including Kelso (Jantzen
et al. 2009; Jirsa et al. 1998; Schöner and Kelso 1988). For example, Kelso and
colleagues (Jirsa et al. 1998) recently proposed a neural field model connecting the
observed phase shift described by HKB to the underlying dynamics of neural
populations in primary motor cortex. In doing so, they are leveraging the powerful
descriptive framework of dynamics to reveal the rich temporal structure exhibited in
human bimanual coordination (and possibly other forms of coordination), and link it to
the dynamics of neural population activity and the interactions of neural systems over
time. They are using the dynamical framework as a heuristic for mechanism discovery
and are effectively transforming the HKB model into a mechanistic explanation. Other
modelers have also discussed the explanatory shortcomings of the original HKB
model, stressing the need to incorporate mechanistic details if these deficiencies are to
be overcome (Beek et al. 2002; Peper et al. 2004). These considerations are broadly
supportive of the mechanistic approach to dynamical explanation embraced here.
In this paper, I have argued that the opposition or gulf between the frameworks of
dynamical and mechanistic explanation is an illusory one. Although importantly
distinct in many ways, these two approaches are not competitors in the explanation
business but are instead related in terms of subsumption—dynamical models provide
one important set of resources among many that are brought to bear to reveal features
of a mechanism. In the context of neuroscience, the framework of dynamics
specifically provides a powerful descriptive scheme for revealing dynamic patterns or
latent temporal structure in neural activity that would otherwise be exceedingly
difficult if not impossible to discern. Yet the real explanatory weight of dynamical
models, such as the HH model and the HKB model, requires the presence of an
associated description (however incomplete) of the mechanisms that support,
maintain, or underlie these activity patterns. The frameworks of dynamics and
mechanism are thus natural allies—each plays a valuable part in the common
enterprise of describing the parts, activities, and organization of underlying
mechanisms.
References
Abraham R, Shaw CD (1992) Dynamics: the geometry of behavior. Addison-Wesley, Redwood City
Abrahamsen A, Bechtel W (2006) Phenomena and mechanisms: putting the symbolic, connectionist, and
dynamical systems debate in broader perspective. In Stainton R (ed) Contemporary debates in
cognitive science. Basil Blackwell, Oxford
Ahrens MB, Li JM, Orger MB, Robson DN, Schier AF, Engert F, Portugues R (2012) Brain-wide
neuronal dynamics during motor adaptation in zebrafish. Nature 485(7399):471–477
Amit DJ (1992) Modeling brain function: the world of attractor neural networks. Cambridge University
Press, Cambridge
Bechtel W (1998a) Dynamicists versus computationalists: whither mechanists? Behav Brain Sci 21(05):629
Bechtel W (1998b) Representations and cognitive explanations: assessing the dynamicist’s challenge in
cognitive science. Cogn Sci 22(3):295–318
Bechtel W (2008) Mental mechanisms: philosophical perspectives on cognitive neuroscience. Lawrence
Erlbaum, Routledge, Mahwah
Bechtel W, Abrahamsen A (2002) Connectionism and the mind. Parallel processing, dynamics, and
evolution in networks. Blackwell, Oxford
Bechtel W, Abrahamsen A (2010) Dynamic mechanistic explanation: computational modeling of
circadian rhythms as an exemplar for cognitive science. Stud Hist Philos Sci Part A 41(3):321–333
Bechtel W, Richardson RC (1993/2010) Discovering complexity: decomposition and localization as
strategies in scientific research. Reprinted MIT Press, Cambridge
Beek PJ, Peper CE, Daffertshofer A (2002) Modeling rhythmic interlimb coordination: beyond the
Haken–Kelso–Bunz model. Brain Cogn 48(1):149–165
Beer RD (2000) Dynamical approaches to cognitive science. Trends Cogn Sci 4(3):91–99
Bogen J (2005) Regularities and causality; generalizations and causal explanations. Stud Hist Philos Sci
Part C 36(2):397–420
Bressler SL, Kelso JAS (2001) Cortical coordination dynamics and cognition. Trends Cogn Sci 5(1):26–36
Brown EN, Kass RE, Mitra PP (2004) Multiple neural spike train data analysis: state-of-the-art and future
challenges. Nat Neurosci 7(5):456–461
Chemero A (2011) Radical embodied cognitive science. MIT Press, Cambridge
Chemero A, Silberstein M (2008) After the philosophy of mind: replacing scholasticism with science.
Philos Sci 75(1):1–27
Choe S (2002) Potassium channel structures. Nat Rev Neurosci 3(2):115–121
Churchland MM, Yu BM, Sahani M, Shenoy KV (2007) Techniques for extracting single-trial activity
patterns from large-scale neural recordings. Curr Opin Neurobiol 17(5):609–618
Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV (2012)
Neural population dynamics during reaching. Nature 487(7405):51–56
Clark A (1997) Being there: putting brain, body, and world together again. MIT Press, Cambridge
Compte A, Brunel N, Goldman-Rakic PS, Wang XJ (2000) Synaptic mechanisms and network dynamics
underlying spatial working memory in a cortical network model. Cerebral Cortex (New York, N.Y.:
1991) 10(9):910–923
Craver CF (2006) When mechanistic models explain. Synthese 153(3):355–376
Craver CF (2007) Explaining the brain: mechanisms and the mosaic unity of neuroscience. Oxford
University Press, New York
Craver CF (2008) Physical law and mechanistic explanation in the Hodgkin and Huxley model of the
action potential. Philos Sci 75(5):1022–1033
Craver CF, Darden L (2013) In search of biological mechanisms: discoveries across the life sciences.
University of Chicago Press, Chicago
Cunningham JP, Yu BM (2014) Dimensionality reduction for large-scale neural recordings. Nat Neurosci
17(11):1500–1509
Dayan P, Abbott LF (2001) Theoretical neuroscience: computational and mathematical modeling of
neural systems. MIT Press, Cambridge
Doyle DA, Morais Cabral J, Pfuetzner RA, Kuo A, Gulbis JM, Cohen SL, Chait BT, MacKinnon R
(1998) The structure of the potassium channel: molecular basis of K⁺ conduction and selectivity.
Science 280(5360):69–77
Dupré J (2013) Living Causes. Aristot Soc Suppl Vol 87(1):19–37
Earman J, Roberts JT (1999) Ceteris paribus, there is no problem of provisos. Synthese 118(3):439–478
Earman J, Roberts JT, Smith S (2002) Ceteris paribus lost. Erkenntnis 57(3):281–301
FitzHugh R (1955) Mathematical models of threshold phenomena in the nerve membrane. Bull Math
Biophys 17:257–278
Fodor JA (1991) You can fool some of the people all of the time, everything else being equal; hedged
laws and psychological explanations. Mind 100(397):19–34
Fuchs A (2013) Nonlinear dynamics in complex systems: theory and applications for the life-, neuro- and
natural sciences. Springer, Berlin
Gervais R, Weber E (2011) The covering law model applied to dynamical cognitive science: a comment
on Joel Walmsley. Minds Mach 21(1):33–39
Haken H, Kelso JA, Bunz H (1985) A theoretical model of phase transitions in human hand movements.
Biol Cybern 51(5):347–356
Hempel CG (1965) Aspects of scientific explanation. The Free Press, New York
Hille B (2001) Ion channels of excitable membranes, 3rd edn. Sinauer Associates, Sunderland
Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to
conduction and excitation in nerve. J Physiol 117(4):500–544
Izhikevich EM (2007) Dynamical systems in neuroscience. MIT Press, Cambridge
Jantzen KJ, Steinberg FL, Kelso JAS (2009) Coordination dynamics of large-scale neural circuitry
underlying rhythmic sensorimotor behavior. J Cogn Neurosci 21(12):2420–2433
Jeffery KJ (2011) Place cells, grid cells, attractors, and remapping. Neural Plast 2011:182602
Jirsa VK, Fuchs A, Kelso JAS (1998) Connecting cortical and behavioral dynamics: bimanual
coordination. Neural Comput 10(8):2019–2045
Kaplan DM (2011) Explanation and description in computational neuroscience. Synthese 183(3):339–373
Kaplan DM, Bechtel W (2011) Dynamical models: an alternative or complement to mechanistic
explanations? Top Cogn Sci 3(2):438–444
Kaplan DM, Craver CF (2011) The explanatory force of dynamical and mathematical models in
neuroscience: a mechanistic perspective. Philos Sci 78(4):601–627
Kauffman SA (1970) Articulation of parts explanation in biology and the rational search for them. PSA
1970:257–272
Kelso JAS (1995) Dynamic patterns: the self-organization of brain and behavior. MIT Press, Cambridge
Kelso JAS, Fuchs A, Lancaster R, Holroyd T, Cheyne D, Weinberg H (1998) Dynamic cortical activity in
the human brain reveals motor equivalence. Nature 392(6678):814–818
Lappi O, Rusanen A-M (2011) Turing machines and causal mechanisms in cognitive science. In: McKay
Illari P, Russo F, Williamson J (eds) Causality in the sciences. Oxford University Press, Oxford,
pp 224–239
Laurent G (2002) Olfactory network dynamics and the coding of multidimensional signals. Nat Rev
Neurosci 3(11):884–895
Levy A (2014) What was Hodgkin and Huxley’s achievement? Br J Philos Sci 65(3):469–492
Levy A, Bechtel W (2013) Abstraction and the organization of mechanisms. Philos Sci 80(2):241–261
Machamer P, Darden L, Craver CF (2000) Thinking about mechanisms. Philos Sci 67(1):1–25
Mascagni MV, Sherman AS (1989) Numerical methods for neuronal modeling. In: Segev I, Koch C (eds)
Methods in neuronal modeling. The MIT Press, Cambridge, pp 569–606
McCormick DA, Shu Y, Yu Y (2007) Neurophysiology: Hodgkin and Huxley model–still standing?
Nature 445(7123):E1–E2 (discussion E2–3)
Naundorf B, Wolf F, Volgushev M (2006) Unique features of action potential initiation in cortical
neurons. Nature 440(7087):1060–1063
Naundorf B, Wolf F, Volgushev M (2007) Neurophysiology: Hodgkin and Huxley model—still
standing? (Reply). Nature 445:E2–E3
Oullier O, de Guzman GC, Jantzen KJ, Lagarde J, Kelso JAS (2008) Social coordination dynamics:
measuring human bonding. Soc Neurosci 3(2):178–192
Peper CLE, Ridderikhoff A, Daffertshofer A, Beek PJ (2004) Explanatory limitations of the HKB model:
incentives for a two-tiered model of rhythmic interlimb coordination. Hum Mov Sci
23(5):673–697
Pietroski P, Rey G (1995) When other things aren’t equal: saving ceteris paribus laws from vacuity. Br J
Philos Sci 46(1):81–110
Port RF, van Gelder T (1995) Mind as motion: explorations in the dynamics of cognition. MIT Press,
Cambridge
Reutlinger A, Unterhuber M (2014) Thinking about non-universal laws. Erkenntnis 79(10):1703–1713
Rosenbaum DA (1998) Is dynamical systems modeling just curve fitting? Mot Control 2(2):101–104
Rusanen A-M, Lappi O (2007) The limits of mechanistic explanation in neurocognitive sciences. In:
Proceedings of the european cognitive science conference
Salmon WC (1984) Scientific explanation and the causal structure of the world. Princeton University
Press, Princeton
Salmon WC (1989/2006) Four decades of scientific explanation. Reprint, University of Pittsburgh
Press, Pittsburgh
Schmidt RC, Carello C, Turvey MT (1990) Phase transitions and critical fluctuations in the visual
coordination of rhythmic movements between people. J Exp Psychol Hum Percept Perform
16(2):227–247
Scholz JP, Kelso JAS (1989) A quantitative approach to understanding the formation and change of
coordinated movement patterns. J Mot Behav 21(2):122–144
Schöner G, Kelso JAS (1988) Dynamic pattern generation in behavioral and neural systems. Science
239(4847):1513–1520
Shenoy KV, Sahani M, Churchland MM (2013) Cortical control of arm movements: a dynamical systems
perspective. Ann Rev Neurosci 36:337–359
Sporns O (2011) Networks of the brain. MIT Press, Cambridge
Stepp N, Chemero A, Turvey MT (2011) Philosophy for the rest of cognitive science. Top Cogn Sci
3(2):425–437
Stevenson IH, Körding KP (2011) How advances in neural recording affect data analysis. Nat Neurosci
14(2):139–142
Strogatz SH (2014) Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and
engineering, 2nd edn. Westview Press, Cambridge
Swinnen SP (2002) Intermanual coordination: from behavioural principles to neural-network interactions.
Nat Rev Neurosci 3(5):348–359
Tank DW, Hopfield JJ (1987) Collective computation in neuronlike circuits. Sci Am 257(6):104–114
van Gelder T (1995) What might cognition be, if not computation? J Philos 92(7):345–381
van Gelder T (1998) The dynamical hypothesis in cognitive science. Behav Brain Sci 21(5):615–628
van Gelder T, Port RF (1995) It's about time: an overview of the dynamical approach to cognition.
In: Port RF, van Gelder T (eds) Mind as motion: explorations in the dynamics of cognition. MIT Press,
Cambridge, pp 1–43
Von Eckardt B, Poland JS (2004) Mechanism and explanation in cognitive neuroscience. Philos Sci
71(5):972–984
Walmsley J (2008) Explanation in dynamical cognitive science. Minds Mach 18(3):331–348
Wang X-J (2009) Attractor network models. In: Squire LR (ed) Encyclopedia of neuroscience, vol 1.
Academic Press, Oxford, pp 667–679
Weber M (2008) Causes without mechanisms: experimental regularities, physical laws, and
neuroscientific explanation. Philos Sci 75(5):995–1007
Weiskopf DA (2011) Models and mechanisms in psychological explanation. Synthese 183(3):313–338
Wills TJ, Lever C, Cacucci F, Burgess N, O’Keefe J (2005) Attractor dynamics in the hippocampal
representation of the local environment. Science 308(5723):873–876
Wong K-F, Wang X-J (2006) A recurrent network mechanism of time integration in perceptual decisions.
J Neurosci 26(4):1314–1328
Woodward J (2000) Explanation and invariance in the special sciences. Br J Philos Sci 51(2):197–254
Woodward J (2002) There is no such thing as a ceteris paribus law. Erkenntnis 57(3):303–328
Woodward J (2003) Making things happen: a theory of causal explanation. Oxford University Press,
Oxford
Woodward J (2013) Mechanistic explanation: its scope and limits. Aristot Soc Suppl Vol 87(1):39–65
Yu BM, Afshar A, Santhanam G, Ryu SI, Shenoy KV, Sahani M (2006) Extracting dynamical structure
embedded in neural activity. Adv Neural Inform Process Syst 18:1545–1552
Zednik C (2011) The nature of dynamical explanation. Philos Sci 78(2):238–263