Solving the Coordination Problem

Michael Townsen Hicks
February 27, 2014
1 Introduction
Not far from where I'm writing, a group of physicists is running experiments on a scanning-tunneling microscope to confirm one of the consequences of some already well-supported and highly stable generalizations about how electrons interact with atoms. And not far from those physicists, a group of psychologists is measuring the reaction times of college students when lights are flashed, to generate and confirm a distinct set of generalizations about the visual perception systems of humans. Both of these groups of scientists are discovering truths through experiment; both sets of generalizations are highly evidentially supported, support counterfactuals, and are useful for prediction and the manipulation of our environment. Neither of them needs to pay attention to what the other is doing; both of them are able to arrive at and support their respective generalizations without the other. Nonetheless, they are describing the same world. They both accept that the students are made up of atoms. And if they were to discover generalizations that disagreed, either about what actually happens or about what would happen in some counterfactual situation, they would be dismayed, and one of them would be forced to change their views.
This combination of independence and mutual constraint characterizes the relationship between the sciences. All sciences are able to employ the same methodology as physics: developing a conceptual structure, using that structure to formulate explanations through laws, and holding those concepts and laws accountable to the world through experimentation. The generalizations they arrive at support counterfactuals and feature in explanations.[1]
[1] Woodward (2003), chapter 6, denies that the generalizations of the special sciences are laws; in doing so, he rejects the notion that only laws are counterfactually invariant and that only laws are available for use in explanations. Woodward's reasons are simple: according to standard accounts of law, laws must be exceptionless. But the generalizations which feature in special scientific explanation have exceptions, and (argues Woodward) these exceptions cannot be crossed out by including a ceteris paribus clause within the law. Like Woodward, I have no truck with a verbal dispute about the word 'law'. Here, and throughout this paper, I will use 'law' to refer to those counterfactually robust generalizations that can underwrite predictions and feature in explanations. Claiming that the generalizations of the special sciences are not laws will not remove the burden of explaining these features of those generalizations, and so will not (by itself) solve the coordination problem.
Despite this independence, the sciences exercise mutual constraint on one another: even counterfactual disagreements between sciences show that at least one set of laws contains a falsehood. And these sciences exhibit a hierarchical explanatory structure: interscientific explanations flow up, from more to less fundamental sciences. Accounting for these four features is the job of a philosophical account of law in the special sciences; this problem has generally been approached as the problem of reduction: which sciences reduce to which? Specifically, do all sciences reduce to physics? And how is the relationship of reduction to be understood?
Understanding the problem as a problem of reduction is mistaken for two reasons: firstly, it biases the discussion against views which emphasize the methodological independence of the sciences. Secondly, it creates the illusion that we are looking for a simple yes-or-no answer. Disagreements over whether, e.g., mere supervenience is sufficient for reduction distract us from the underlying features of the relationship between sciences that need to be explained. To avoid these confusions, I'll call the puzzle posed by the relationship between the sciences the coordination problem: how are the various scientific disciplines coordinated with one another?
In this paper, I'll present a new solution to the problem of coordination. But first, I'll identify two strains among extant solutions to the problem of coordination. I call these the imperialist and the anarchist solutions to the coordination problem. The imperialist sees the special sciences as a consequence of fundamental physics; the laws of the special sciences are laws because they can be derived from or grounded in the laws of physics. This strong reductionist view seeks to make every explanation an explanation from physics. The anarchist, on the other hand, denies that the sciences are connected. Rather, she sees them as each unifying a body of facts, or cataloging the dispositions of properties.
Both of these views fail to solve the coordination problem; the imperialist fails to account for the independence of the sciences, and the anarchist fails to account for their mutual, asymmetric dependence. I'll conclude by offering a third view, which I call the democratic view: on my view, the various sciences work together to generate a set of laws, the informativeness of which is evaluated holistically. But because the various scientific disciplines are epistemically isolated, in a way which I will make more precise, they add to this lawbook semi-autonomously. The view I advocate has the advantages of both the imperialist and the anarchist views. Like the anarchist, and unlike the imperialist, I hold that the laws of the special sciences are made laws in the same way that the laws of fundamental science are. Like the imperialist, but not the anarchist, I hold that the laws of physics are fundamental, and that there is an asymmetry between the special sciences and physics.
The proposal I offer is a Humean proposal, in the tradition of the Mill/Ramsey/Lewis account of laws. But non-Humeans will find much here to like: Humeans take the epistemic role of laws to be constitutive of natural lawhood. That is, they believe that laws support counterfactuals and provide explanations because of their epistemic utility, not vice versa. A modalist about laws, who takes laws to have either irreducible nomic or metaphysical necessity, will still need to understand the epistemic utility of laws, and so can tack this account on to their more metaphysically robust account as an explication of the epistemology of laws. And many modalists about law take the only truly necessary laws to be those of fundamental physics; such a metaphysician of law can accept this view as an account of the laws of the special sciences while denying that it is a sufficient account of lawhood simpliciter. Finally, I argue in Section 2 that neither the imperialist nor the anarchist provides an adequate solution to the coordination problem. But thus far all modalist accounts of law fall into one of these two camps. So a modalist must either respond to the challenges presented in Section 2 or reject one of the four features of coordination there identified.
2 The Imperialist and the Anarchist
In what follows, I will first present this dichotomy in broad strokes and then show how individual philosophers fit into one or another camp. It's worth noting that views about the relationship between physics and the special sciences crosscut views about the metaphysics of laws; although I ultimately favor a broadly Humean view of laws, my criticisms of the current theoretical space of possibilities do not rest on any metaphysical scruples.
To help illustrate the difference between the anarchist and the imperialist, and later to elucidate the democratic view, I'll make use of an idealized epistemic agent. She needs, unlike us, to have a vast capacity for absorbing and combining information from the various sciences. But we will not assume (except when a view of laws demands it) that she is logically omniscient, or that she, like Laplace's demon, is able to know everything about the state of the world (though she might), nor will we assume that inference is for her without computational costs. Some of the details of our agent will be fixed by the various purported solutions to the coordination problem. We can refer to her as a FISA: a Fairly Ideal Scientific Agent. She will have a set of conditional credences, and these conditional credences will encode the laws of the various sciences.
If one of our laws says that if A then B, FISA's credence in B conditional on A will be 1. But the laws FISA responds to need not be deterministic: if our laws are statistical, this too will be reflected in her credences. So if it is a law that agents who are asked to memorize a ten-digit number are more likely to utter racial slurs than those who have no number to remember, her credence F(slur | number) will be greater than her credence F(slur | ~number).[2]
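To fix ideas, FISA's credal state can be pictured as a lookup from (outcome, condition) pairs to numbers. This is my own toy illustration; the propositions and the particular values (0.30, 0.10) are invented, not drawn from the paper:

```python
# Toy model of FISA's conditional credences (illustrative values only).
fisa = {
    # A deterministic law "if A then B" is encoded as credence 1 in B given A.
    ("B", "A"): 1.0,
    # A statistical law is encoded as an inequality between conditional
    # credences: cognitive load makes the outcome more likely.
    ("slur", "number"): 0.30,
    ("slur", "~number"): 0.10,
}

def credence(outcome, condition):
    """FISA's credence in `outcome` conditional on `condition`."""
    return fisa[(outcome, condition)]

# Deterministic law: certainty given the antecedent.
assert credence("B", "A") == 1.0
# Statistical law: F(slur | number) > F(slur | ~number).
assert credence("slur", "number") > credence("slur", "~number")
```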
I will evaluate imperialism, anarchism, and democracy with respect to four features of the relationship between physics and the special sciences (briefly introduced in the introduction). These desiderata must be a bit vague; different views about the relationship between the sciences should be allowed to provide slightly different accounts of what, for example, the asymmetric dependence between physics and biology amounts to.

[2] Typically, discussions of objective probability assume that the objective probabilities are precise in situations in which they are defined. But this is not obviously the case for some special science generalizations: plausibly, some laws in the special sciences provide comparative relations between conditional probabilities without nailing those probabilities down. While I think that a complete account of special scientific law should be compatible with this (and believe that mine is), addressing this issue is beyond the scope of this paper.
Methodological Independence: Each science is able to formulate generalizations and support them evidentially via induction, and each science is able to determine its own conceptual structure.

Counterfactual Robustness: The generalizations of the special sciences are counterfactually robust: that is, they both support counterfactuals and hold in a variety of counterfactual situations, including, plausibly, counterfactual situations in which the laws of lower-level sciences do not hold.

Mutual Constraint: Distinct sciences cannot make inconsistent predictions, including predictions about what would occur in merely counterfactual situations, and cannot provide inconsistent constraints on belief or credence.

Asymmetry: Explanations between sciences go in one direction only; this direction of explanation creates a hierarchy roughly lining up with the direction of mereological dependence, where the entities of higher-level sciences are made up of the entities of lower-level sciences.[3]
We are looking for a view of laws that explains these four aspects of the relationship between the sciences while retaining descriptive adequacy: the more closely the laws posited by the view resemble those of our current sciences, the better. It should be believable that the solution under discussion is a view about the laws of our sciences; if the view does not allow some special scientific generalization to be a law, or requires us to add to the fundamental laws, this is a demerit of the view. This is a defeasible requirement. For the laws we have now are not the final laws, and the divisions we now carve between our sciences are somewhat arbitrary. So a philosopher is within her rights to argue that our final theory will have features no current theory has; and she may likewise argue that some laws which are currently considered to be in science A actually belong in science B, or that the division between science A and science B isn't the division we should be worried about.
2.1 The Imperialist
The imperialist view holds that the lawhood of the special sciences is derivative from the fundamental laws. The imperialist may hold that the special science laws can be derived from the fundamental laws, but she need not: she may hold instead that they are metaphysically necessitated by the fundamental laws, or that they are grounded in the fundamental laws, where A grounds B only if A metaphysically necessitates B and A explains B.
A prototypical (if dated) imperialist is F. P. Ramsey (1927), who held that there are three grades of law: fundamental laws, laws that are derived from the fundamental laws alone, and laws which are derived from the fundamental laws and some 'robust' initial conditions. We might add a fourth category, not available to Ramsey: laws derived from the fundamental laws and a posteriori necessities, like 'water = H2O'.[4] Finally, we should remember that it is open to imperialists to add to the set of fundamental laws so that they have sufficiently strong implications for the special sciences.

[3] This claim is controversial, and so plausibly should not be one of our desiderata. Some views about the laws of the special sciences
Our FISA, according to the imperialist, starts with a set of fundamental laws. These laws may be sentences which together maximize strength and simplicity, as the Humean holds (Lewis (1980, 1983), Beebee (2000), Loewer (2007, 2008, 2009)); they may be generalizations which are backed by a relationship of necessitation between universals (Armstrong (1983, 1997)); or they may be sentences which describe the dispositional essences of the properties which feature in them (Ellis (2000, 2001), Bird (2007)). She then works out the consequences of these laws. On the most austere view, her conditional credences now encode the fundamental laws and the laws of the special sciences. But on more permissive views, she isn't done. On a permissive view, all she has now are the fundamental laws. She may still conditionalize either on some special set of initial conditions or on a posteriori necessities, typically property identities. Once she has done this, says the permissive imperialist, she has at her disposal both the fundamental laws and those of the special sciences.

It's worth noting here that the laws of the special sciences need not receive probability 1. Indeed, they likely should not. For the laws of the special sciences are not exceptionless, as are the laws of physics. So an adequate account of special scientific law, whether imperialist, anarchist, or democratic, ought to hold that the conditional credences assigned to the special scientific laws are not unity.
Before we look at the problems with imperialism, we should note its advantages. Imperialism clearly and coherently explains two features of the coordination problem: mutual constraint and asymmetry. According to imperialism, the laws of the various scientific disciplines must be compatible because some of them are a consequence of others (together, for the permissive imperialist, with robust initial conditions or a posteriori necessities). If we discover a contradiction between the apparent predictions of two sciences, it's impossible that one of them is derived from the other. Consequently, one of them must have the wrong laws.[5] And the asymmetry of the sciences is neatly explained as well, because the laws of less fundamental sciences are a consequence of those of the more fundamental science, but not vice versa. The asymmetry of the sciences is just the asymmetry of deduction.
As to the methodological independence of the special sciences, the imperialist gets a weak pass. For the imperialist is not committed to our FISA actually representing scientific reasoning; we may not be able to perform the computations which FISA performs. She is fairly ideal, and so may be ideal in ways in which we are imperfect. So, perhaps, we with our limited cognitive resources are forced to engage in standard inductive reasoning to discover the laws of the special sciences, rather than simply deriving them from the laws of physics (together with whatever else). According to the imperialist, the fact that some special science generalization is inductively supported is strong evidence that it is a law, and so strong evidence that it is a consequence of the laws (and 'robust' facts) of physics.

[4] There's a fifth possible type of law: one dependent on the fundamental laws, robust initial conditions, and a posteriori necessities. But these will not improve the situation for the imperialist: if neither laws derived from the fundamental laws together with initial conditions, nor laws derived from the fundamental laws together with a posteriori necessities, can solve the coordination problem, then neither can the two together.

[5] Imperialism doesn't hold that the mistaken science must always be the special science; we might take physics and thermodynamics together to be fundamental, but recognize that the contradiction between physics+thermodynamics and geology, recognized by Kelvin in the 19th century, told against the then-dominant theory of physics rather than the then-dominant theory of geology. Because the geological laws yielded a different age for the earth than physics+thermodynamics, and because if B contradicts A, it's not the case that A implies B (given that A is self-consistent), we know that one of A or B must not be a law. We don't know whether to take this as a modus ponens of ~B or a modus tollens of A.
This pass is a weak one. For the imperialist has given us no reason, at least not yet, to believe that the inductively supported generalizations of the special sciences will line up with those derivable from physics. Note that it is not enough for the imperialist to note the counterfactual robustness of special scientific laws and claim that this robustness must come from the laws of physics. For the source of this counterfactual robustness is precisely what is at issue![6] Rather, she must provide some independent reason to believe that higher-level inductive reasoning will arrive at the consequences of physics, rather than some other generalizations.
Despite these successes, imperialism lacks the resources to explain counterfactual robustness while retaining descriptive adequacy. To see this, let's first examine austere imperialism. Austere imperialism holds that the laws of the special sciences are a consequence of the laws of physics alone. We can see right away that austere imperialism will simply not do: for the laws of physics alone have too few direct consequences to underwrite all of the special science laws. And this is reflected in the structure of the laws of physics and the laws of the special sciences. The laws of physics are temporally symmetric, exceptionless, and deterministic.[7] The laws of the special sciences are temporally asymmetric, have exceptions, and are often statistical. So the special scientific laws could not be a result of the laws of physics on their own.[8]
Now consider the permissive imperialist who adds a posteriori necessities. It's not at all clear how this could help. For if the laws of physics are temporally symmetric, exceptionless, and deterministic, adding a metaphysically necessary lasso between these laws and some higher-level terms will not introduce an asymmetry, an exception, or indeterminism.

[6] Loewer (2008) argues that, because the higher-level frequencies are determined by statistical mechanical probabilities, observations of higher-level frequencies give us evidence about the underlying fundamental probabilities. I will address this later.

[7] Quantum mechanics, on either the orthodox or the Ghirardi-Rimini-Weber formulation, is indeterministic and temporally asymmetric. But this should not concern us: first, the orthodox interpretation is widely regarded to be inadequate, both in specificity (it posits collapses, but does not say when or how they occur) and in internal consistency (the indeterministic collapse postulate is in tension with the deterministic evolution of the wavefunction). Meanwhile, the GRW interpretation makes empirical predictions which are distinct from those of orthodox quantum mechanics, but which enjoy limited empirical support. In either case, it's doubtful that the temporal asymmetry and indeterministic nature of quantum mechanics underlies the asymmetry and indeterminism of the special sciences. Finally, on either of the other two leading interpretations of QM (Bohmianism and Everettianism), physics is deterministic and temporally symmetric.

[8] For a more thorough and engaging discussion of this problem, see Loewer (2008).
So to retain descriptive adequacy, the imperialist ought to become more permissive. She ought to include not only the laws of physics and a posteriori necessities, but also some 'robust' initial conditions. To make this work, she will need a clear notion of robustness: one which will lead to an explanation of the lawhood of special scientific laws. By adding facts about the past, and not the future, we can secure the temporal asymmetry, exceptions, and indeterminism of the special sciences. But note that adding these initial conditions immediately makes this aspect of the coordination problem more pressing: for if the initial conditions are not themselves laws, how can they make other generalizations laws? It seems that the imperialist must talk fast if she is to explain counterfactual robustness (it is just this issue that leads Beatty (1994) to argue that biology is without laws).
The contention here is not that accidental facts never support counterfactuals. They do: the accidental fact that my favorite mug just appeared on a TV show makes it the case that if I were to sell it on eBay, I would make $70. The worry is instead that the laws of the special sciences are robust in a way that these accidents are not. The fact that all of the coins in my pocket are quarters makes some counterfactuals true, but it's not the case that if this nickel were in my pocket, it would become a quarter. The laws of biology are not like this: it's true that if I were a bear, I would hibernate through the winter. This second class of counterfactuals, about what would occur under some manipulation, is the sort of counterfactual that can be grounded by laws but not accidents.
Next, without a specification of which initial conditions are robust, the imperialist's solution to the problem of methodological independence is even more fraught. For whichever initial conditions she chooses, she will need to explain why those initial conditions, and not the others, make a generalization available for inductive discovery at the higher level. But there is no reason to believe that there is any set of conditions on robustness that will do this. In fact, there is reason to believe the opposite.
To see this, we may do well to examine one of the most worked-out extant imperialist theories: that of Loewer (2008, 2009). Loewer recognizes that initial conditions on their own cannot support counterfactuals; so he argues that some initial conditions ought to be included in the book of laws. Specifically, he thinks that, in addition to the laws of physics, our fundamental lawbook should include PROB, a law that specifies "a probability distribution (or density) over possible initial conditions that assigns a value 1 to PH [the initial low entropy condition] and is uniform over those microstates that realize PH" (Loewer, 2008: 19). As this low-entropy initial condition is a law, it is just as able to underwrite counterfactuals as the other laws in our fundamental lawbook. And PROB, Loewer argues convincingly, deserves to be in our lawbook for the same reason other laws are: adding it dramatically increases the informativeness of the lawbook without unduly complicating it.
So far, Loewer looks to have solved the problems of austere imperialism without adding the paralyzing complications of the permissive view. PROB is temporally asymmetric and probabilistic, and so can underwrite the temporally asymmetric and probabilistic higher-level laws that don't follow from physics alone. And because PROB (according to Loewer and Albert) is a law, it neatly explains the counterfactual robustness of its consequences.
Unfortunately, PROB and the laws of physics cannot save imperialism. They are, by themselves, too permissive: many generalizations will have high probability according to them, more than are counted as laws by the special sciences. This is because many highly probable generalizations will be burdensomely gruesome: we can take any two special scientific laws, which we can assume are given a high probability by the Loewer-Albert system. We can then define gruesome predicates by pasting together terms from each law, and thereby arrive at a gruesome generalization at least as probable as the conjunction of the two laws. If the laws have a high enough probability, this generalization will also have a probability above whatever threshold we set for lawhood, but, because of its gruesomeness, will not be a law.
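One way to make the pasting argument precise (my reconstruction, not the paper's own formalism): take two candidate laws of the form 'All F are G' and 'All H are I', and paste their predicates into the gruesome 'All F-or-H things are G-or-I'. Since the conjunction of the laws entails the gruesome generalization, any probability function is forced to respect:

```latex
% Paste two candidate laws into a gruesome generalization:
L_1 = \forall x\,(Fx \to Gx), \qquad L_2 = \forall x\,(Hx \to Ix), \qquad
G^{*} = \forall x\,\bigl((Fx \lor Hx) \to (Gx \lor Ix)\bigr)
% Since L_1 \wedge L_2 entails G^{*}, every probability function P satisfies
P(G^{*}) \;\ge\; P(L_1 \wedge L_2) \;\ge\; P(L_1) + P(L_2) - 1
% e.g. with lawhood threshold t = 0.9 and P(L_1) = P(L_2) = 0.98:
P(G^{*}) \;\ge\; 0.96 \;>\; t
```

So whenever the two laws' probabilities are close enough to 1, the pasted generalization clears any lawhood threshold that the laws themselves clear comfortably, yet it is no law.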
And there is no guarantee that the laws of the special sciences we have will be given a high initial probability by these two. To see this, consider a law of population genetics. Such a law will depend sensitively on contingent facts early in the evolution of modern animals (it is just this problem which is discussed in Beatty (1995)). But PROB does not give a high probability to these historical facts, or at least does not probabilify them over their alternatives. So it is unable to mark the laws off as counterfactually robust, as we had hoped.
Loewer recognizes this, and the view he arrives at is closer to the permissive imperialist view: "The special science laws that hold at t are the macro regularities that are associated with high conditional probabilities given the macro state at t" (Loewer, 2008: 21). As the universe evolves... the probability distribution conditional on the macro state will also evolve. We can illustrate this with our FISA as follows: she starts out with credence 1 in the laws of physics and in the low-entropy macrocondition. Her conditional credences are uniform with respect to those microstates that realize the low-entropy macrocondition. As the universe evolves, our FISA conditionalizes on macroscopic information, that is, information about the positions of middle-sized dry goods, their temperatures and densities, locations and velocities. At any time, having conditionalized on all of the universe's macroscopic information, those generalizations with high probability are the special scientific laws at that time.
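The FISA dynamics just described can be sketched computationally. In this toy (my construction; the "microstates" and macro-predicates are invented stand-ins, not real physics), PROB is a uniform distribution over the microstates realizing the low-entropy condition PH, and later macro-information is absorbed by conditionalization:

```python
# Toy sketch of the Loewer-Albert FISA (illustrative microstates only).
from fractions import Fraction

# Each microstate is (initial macrocondition, later macrostate).
microstates = [("PH", "warm"), ("PH", "cold"), ("PH", "warm2"), ("highS", "warm")]

# PROB: credence 1 in the low-entropy condition PH, uniform over its realizers.
ph_states = [s for s in microstates if s[0] == "PH"]
prob = {s: Fraction(1, len(ph_states)) for s in ph_states}

def conditionalize(dist, pred):
    """Bayesian conditionalization: keep states satisfying pred, renormalize."""
    kept = {s: p for s, p in dist.items() if pred(s)}
    total = sum(kept.values())
    return {s: p / total for s, p in kept.items()}

# As the universe evolves, FISA conditionalizes on macroscopic information,
# here: "the later macrostate is warm-ish".
later = conditionalize(prob, lambda s: s[1].startswith("warm"))

assert sum(later.values()) == 1
assert later[("PH", "warm")] == Fraction(1, 2)  # credence reshaped by macro-info
```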
Here we have a permissive imperialist view with a well-defined notion of robustness: the robust initial conditions are those which are encoded in the world's macrostate. But we can see immediately that this too is problematic: first, not all true macroscopic generalizations are laws; but all true macroscopic generalizations will get probability 1 on the scheme advocated by Loewer. Second, some true macroscopic generalizations will be laws despite not having high probability conditional on macroscopic information. Consider again our generalizations of population genetics. Presumably these are true because of some facts about the structure of the chemicals which convey our genes. But these chemicals are not macroscopic; they are microscopic. So they will not be conditionalized on by our FISA, and the generalization will not be a law.
Perhaps there is a way of tweaking the Albert/Loewer view to account for this; but I'm doubtful that there is an independently specifiable set of facts such that conditionalizing the uniform distribution over microstates on these facts will yield a high probability for all and only the special scientific laws. And this generalizes: for a permissive imperialist view to work, there must be some non-ad-hoc way of specifying which initial conditions are 'robust' enough to ground higher-level laws. Without such a specification, the imperialist has no way to distinguish laws from non-laws at the higher level. And without a way of distinguishing the laws from the non-laws, we will not have the beginning of an explanation of counterfactual robustness and methodological independence. In order to explain why the special scientific laws are supported by induction and support counterfactuals, we must first distinguish between them and the non-laws, which are not supported by induction or counterfactually robust. The permissive imperialist cannot do this.
2.2 The Anarchist
The anarchist holds that the laws of the special sciences are laws for the same reason that the fundamental laws are. What makes the special science laws lawful? This question will be answered differently by different anarchists. Humean anarchists, like Craig Callender and Jonathan Cohen, claim that they provide the best systematization of facts in the language of their science (though, for Callender and Cohen, the choice of language is arbitrary or pragmatic). Anti-Humean anarchists, like Nancy Cartwright, hold that the laws of the special sciences, like the laws of physics, encode dispositions or capacities which manifest in the controlled environments that that science studies. There are, according to Cartwright, no principles coordinating the laws outside of these controlled environments.

While Callender and Cohen and Cartwright agree that the laws and facts of the special sciences and physics depend on one another symmetrically if at all, this is not a requirement of anarchism. I will call this breed of anarchism 'radical anarchism'.
According to the radical anarchist, our FISA will have a number of distinct, possibly incomplete credal functions available to her. Each of these will be defined over a different set of propositions: F_biology(A|B), F_physics(C|D), and so on. According to Callender and Cohen, A and B, C and D are different propositions because they come from different ways of partitioning the space of worlds; there may be some overlap between, say, A and C, and there may even be a translation between the AB partition and the CD partition, but the probability functions are distinct and defined over different propositions. Which credal function FISA uses depends, according to Callender and Cohen, on which is easiest for FISA to apply to the situation at hand. Which evidence propositions are most easily verified in this situation? Which conditional probabilities are easiest to calculate?

Similarly for Cartwright, FISA will avail herself of a variety of disjoint credal functions, but instead of each being complete over a partition of the space of worlds, they will each be incomplete and only defined within certain controlled situations. So in situations in which F_physics(A|B) is defined, F_biology(A|B) is not. The situations in which physics yields a conditional probability are those with x-rays and scanning-tunneling microscopes; the situations in which biology yields conditional probabilities are those in which groups of animals interact. Which credal function FISA uses will depend on the situation in which she finds, or creates for, herself.
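The contrast with the imperialist picture can be sketched as follows: instead of one credence function derived from fundamental laws, the radical anarchist's FISA carries several, each defined only over its own science's propositions and simply undefined elsewhere. The sciences, propositions, and values below are invented for illustration:

```python
# Sketch of the radical anarchist's FISA: several credal functions, each
# defined only over its own science's propositions (all values invented).
credal = {
    "physics": {("decay", "excited"): 0.8},
    "biology": {("hibernates", "bear"): 0.99},
}

def credence(science, outcome, condition):
    """Science-relative conditional credence; None where that science is silent
    (Cartwright-style incompleteness outside its controlled situations)."""
    return credal[science].get((outcome, condition))

assert credence("physics", "decay", "excited") == 0.8
assert credence("biology", "hibernates", "bear") == 0.99
# Physics assigns no credence to biological propositions: its credal
# function is simply undefined there, not zero.
assert credence("physics", "hibernates", "bear") is None
```

On the Callender-Cohen variant the functions would instead each be total over a different partition of worlds; the incompleteness modeled above is Cartwright's version.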
It's compatible with anarchism that the facts at the special scientific level depend asymmetrically on the facts at the fundamental level; but anarchists deny that the laws so depend. Views of this latter sort, according to which the laws are in some way emergent despite the facts at the higher level depending on the facts of fundamental physics, are held by Fodor (1974), Lange (2009), and Armstrong (1983). According to these philosophers, the independence of the higher-level laws arises because the laws of the special sciences describe patterns which are visible only at the coarse-grained higher level, or are not the result of the laws of physics alone, or are the result of the laws of physics together with any suitably special initial conditions, or are backed by modal facts (necessitation relations or irreducible counterfacts) which are independent of both the lower-level modal facts and the higher-level categorical facts. Because this version of anarchism allows some dependence between facts at different scientific levels, we will call it 'moderate anarchism.'
Both varieties of anarchism score well in accounting for the methodological independence and, at first blush, the counterfactual robustness of the generalizations of the special sciences. The counterfactual robustness of special scientific generalizations is explained in the same way as the lawhood of fundamental generalizations: either modally or in terms of unificatory power. Similarly, the methodological independence of the special sciences is explained easily by the metaphysical independence of the laws. Special scientists are able to perform inductions in the same way physicists are because their laws are laws in the same way that those of physics are.
Radical anarchism does poorly in accounting both for the mutual constraint and for the asymmetry of the special sciences and physics. On Cartwright's view, any two sciences don't attempt to describe the same world; rather, they make predictions about distinct controlled situations. No rules govern how the sciences interact with one another, but plausibly the capacities of any science can overturn those of any other. So it's surprising that scientists seek information from one another, and that contradictory predictions are taken to indicate that one or another science's laws must be altered.
Radicals realize this; both Callender and Cohen and Cartwright argue that neither of these holds.[9] Unfortunately I do not have space to address their arguments here; so we will give them a demerit for failing to account for these relations, but note that this consequence of their view is not one they take to be a negative.

[9] Callender and Cohen reject asymmetry, but accept mutual constraint. On their view, each science forms a deductive system in an independent vocabulary. Because the vocabularies describe the same world, they must agree on the categorical facts of the world. Consequently no generalization at any level can imply that another generalization is (actually) false. However, nothing in their view guarantees that the laws will agree on what happens in counterfactual situations: a systematization could rule that, for some merely possible event A, if A were to happen, then B would, while another could rule that if C were to happen, then D would, where A metaphysically entails C but B and D are mutually contradictory; if A does not occur, Callender and Cohen can't guarantee that this would not be the case. Similarly, they cannot guarantee that the chances assigned by various laws will yield compatible constraints on credence.

Moderate anarchism does better in explaining the mutual constraint and asymmetry of the coordination problem. According to these views, constraint and asymmetric dependence arise from the metaphysical dependence of the facts of the special sciences on the facts of fundamental physics. We were understandably mistaken in our belief that these constraints held at the level of laws.

The moderate position is not, unfortunately, able to explain some features of the asymmetric dependence of the special scientific laws on the laws of physics. First, the laws of the special sciences have exceptions; a traditional way to deal with these exceptions is to claim that these laws have unstated ceteris paribus conditions. This move, though popular, has been the subject of a sustained attack (see Woodward (2003), section 6, and Cartwright (1980)). Whether or not the laws of the special sciences have built-in ceteris paribus conditions, specifying the situations in which they do not hold frequently requires us to take on board concepts which are not a part of the special science in question. The predictions of economics can be trusted provided an asteroid does not strike the market. More subtly, explaining for which species the Hardy-Weinberg law holds can only be done by discussing properties of DNA; explaining which (highly unlikely) scenarios are entropy-decreasing, and so violate thermodynamics' second law, can be done only by citing the momenta of the particles underlying the system. But meteor impacts are not describable in the conceptual scheme of economics (we have astrophysics for that), describing DNA proteins requires chemical, and not merely biological, concepts, and discussing the (non-aggregate) features of the particles which make up a gas is outside the conceptual sphere of classical thermodynamics. Though the anarchic view may be able to explain the force of the special scientific laws, it is unable to explain why their exceptions are often outside the conceptual scope of the science in which they feature.

All versions of anarchism face the conspiracy problem (see Callender and Cohen 2010 for a discussion). If the laws of physics and the laws of the special sciences are independent, how is it that they conspire together to produce a unified world? That is, why is it that the laws of physics somehow 'know' not to push elementary particles around in a way which violates the laws of the special sciences? And how do the special scientific laws, like those of psychology, fail to license violations of the laws of physics? The conspiracy problem is a challenge to the anarchist solution to mutual dependence; the anarchist claims that the sciences describe the same world; but if she is radical and holds that their laws are metaphysically independent, how do they combine to create a coherent world?

A distinct challenge for both anarchist views, but especially for the moderate anarchist, lies in explaining the counterfactual robustness of the special
sciences. We gave anarchists a strong pass on this earlier: their explanation of special scientific lawhood is, presumably, the same as their account of fundamental scientific lawhood.
Together, these problems create a dilemma for anarchist views. For the moderate anarchist: if the fundamental laws govern the fundamental facts, and the fundamental facts explain the special scientific facts, what is left for the laws of the special sciences to do? For the radical anarchist: if the laws all independently determine the facts, how do they manage to produce a consistent world? The more radical an anarchist is, the less she can explain mutual constraint. The more moderate she is, the less she can account for the counterfactual robustness of the special scientific laws.
The anarchist response is to claim that the special scientific laws explain in a way which is not reducible to the laws of physics. But note that this requires us to (a) take lawhood to be deeply tied to explanation, rather than to governing, and (b) accept an overdetermination of explanation. While many philosophers, especially those of the Humean strain, will not find either of these especially troubling, philosophers who take lawhood to be connected with governing, and who take explanation to be similarly tied to causation (rather than unification), will find this especially troubling.
3 The Democratic View
I've argued that a successful solution to the coordination problem cannot take the lawhood of the special sciences to be dependent on the laws of physics. And I've further held that the laws of each science cannot be made laws entirely by facts within the domain of that science. Both views leave at least one of our explananda (methodological independence, counterfactual robustness, mutual constraint, and asymmetry) unaccounted for. How, then, can these desiderata be met?
In this section, I'll present a view according to which the sciences work together to generate a unified body of knowledge. The generalizations in any science are laws, not because of their explanatory capacity given the facts of that science, or because of their relation to more fundamental generalizations, but because of their contribution to the informativeness of the total set of scientific laws. The mutual constraint the laws exercise on one another is a result of the fact that this informativeness is evaluated holistically: the laws of all sciences taken together contribute to the informativeness of our system. So they need to produce an internally consistent and mutually reinforcing set of predictions. And the independence of the various sciences is also accounted for: each science contributes laws to the overall system independently.
3.1 The Best System Account of Laws
The view I'll advocate is a development of the Best System Account of laws offered by Lewis (1983). Lewis held that the laws of nature are those generalizations which maximally balance strength, simplicity, and fit. Strength is a measure of the informativeness of the laws. A system L is stronger than another L' if and only if L rules out more possibilities than L'. The simplicity of a system is measured by its syntactic simplicity when it's phrased in a language whose predicates correspond to the natural, or fundamental, properties of the world. A system L is simpler than a system L' if and only if L is syntactically shorter than L' when phrased in this natural language. Finally, systems which contain probabilistic or statistical laws can fit the world better or worse than one another: the fit of a system measures how well the statistical predictions of the system match the frequencies of the world. A system L fits the world better than a system L' if and only if L assigns a higher probability to the actual sequence of events than L' does.
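The comparative definition of fit can be illustrated with a toy computation. The coin-flip "systems" and their probabilities below are invented purely for illustration; nothing in the text commits Lewis to this example.

```python
# Illustrative sketch (my example, not Lewis's): comparing two law systems
# on "fit", understood as the probability each assigns to the actual
# sequence of events.
from math import prod

actual_sequence = ["heads", "heads", "tails", "heads"]

# Represent each system by the per-event chances it assigns.
def fit(system_probs, events):
    """Probability the system assigns to the whole actual sequence."""
    return prod(system_probs[e] for e in events)

L  = {"heads": 0.7, "tails": 0.3}   # system L: biased-coin chances
L2 = {"heads": 0.5, "tails": 0.5}   # system L': fair-coin chances

# L fits the world better than L' iff it assigns the actual history
# a higher probability.
print(fit(L, actual_sequence) > fit(L2, actual_sequence))  # True
```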
The motivation for the Best System Account is simple: we are interested in generalizations which can be used by us and which give us a lot of information about the world. The virtues identified (strength, simplicity, and fit) are justified either because they measure how usable by us the set of generalizations is (this is what simplicity does) or because they measure how much information, either binary or probabilistic, the laws convey.
Unfortunately, there is a widespread consensus that the story I've just told cannot be right. Lewis's account mismatches our interests in discovering scientific laws, and it fails to identify the virtues scientists are actually interested in. This is something Humeans have recognized for some time; Hoefer (2007), Loewer (2007), Hall (MS), and Callender and Cohen (2009, 2010) each hedge their bets on Lewis's notions of simplicity and strength.[10]
The complaints against Lewis's system are multifarious. Against strength: Lewis's notion of strength only allows us to compare two systems if one of them implies a subset of that which the other implies. Further, scientists seem to care more about how information is packaged than Lewis did: we want dynamic laws which provide a great deal of information without relying heavily on initial conditions (see Hall (MS) for this criticism). And we're typically interested in information, not about the world as a whole, but about how experiments we could carry out will turn out.
Against simplicity: the notion of simplicity which is elucidated by confirmation theory bears little resemblance to the one Lewis describes, both in form and in function. In form, the simplicity which features in solutions to the curve-fitting problem or in discussions of inference to the best explanation is not syntactic, in terms of sentence length; instead, it involves either counting the number of free parameters in a theory or counting the number of auxiliary hypotheses to which the theory appeals.[11] In each case, the relevant measure of simplicity is chosen either because it will lead to more accurate predictions (as in the curve-fitting problem) or because theories with fewer free parameters or auxiliary hypotheses will be better confirmed by evidence than their competitors.

[10] Their strategies for doing this vary; Hoefer says that simplicity should be understood not syntactically but as a measure of inductive accessibility; Loewer and Hall each defer to the virtues recognized by idealized physicists; and Callender and Cohen argue that there are no domain-independent notions of simplicity and strength, but that instead each scientific discipline has wide leeway in determining how various virtues are measured against one another.
The important thing to note here is that, if either the literature on inference to the best explanation or that on the curve-fitting problem is heading in something like the right direction, our interest in simplicity arises not because we want our final law system to be more manageable (as Lewis suggests), but because we believe that employing a strategy which favors simpler theories will get us to the true laws faster and with fewer intervening errors. This also explains why scientists never seem to sacrifice inferential strength for simplicity when choosing between systems of laws;[12] rather, they adopt new systems when the new system exceeds its predecessor in strength, and evaluate competitors at this later stage by their simplicity or unificatory power.
We've seen two descendants of Lewis's view already: first, Loewer's imperialist view, according to which the laws include PROB in addition to the laws of fundamental physics; and second, Callender and Cohen's anarchist view, which eschews talk of fundamental or natural properties and instead holds that each science formulates laws in its own independent vocabulary. Both Loewer and Callender and Cohen are motivated by a drive to find measures of simplicity and strength that do not appeal to natural or fundamental properties. But neither of these views addresses the deeper problems with taking these virtues to be constitutive of lawhood. As we'll see, finding a better measure of the informativeness of a law system is necessary if we are to understand the relationship between the special sciences and physics.
3.2 The Democratic Best System
What have we learned from the foregoing discussion? First, we are interested in finding the most informative lawbook we can. But we prefer that this information come in the form of widely applicable dynamic laws: laws which operate as functions from states of the world at one time to states of the world at another. Finally, while scientists employ simplicity considerations in theory choice, they do so because simpler laws are better evidentially supported and lead to more accurate predictions.
Taking these lessons in hand, we should recognize that the aim of science is not to formulate simple and informative laws in the sense Lewis described. Rather, science aims to formulate strong and well-supported laws. Our laws need to be as informative as they can be, compatibly with their being discovered and confirmed through repeatable experiments. The laws are those most informative generalizations which we can formulate by repeated observation. Our preference for dynamic laws is explained by their repeatability: dynamic generalizations, but not initial conditions, can be observed in action over and over. Similarly, we prefer simpler laws because they can be more quickly supported evidentially and because they provide more accurate predictions.

[11] For a discussion of the debate about the former, see Forster and Sober (1994) or Kukla (1995); for the latter, see Lipton (1999) and Rosenkrantz (1989).

[12] This point is made forcefully by Woodward (2013).
To see how this bears on the coordination problem, let's return to our FISA and consider her interests in formulating lawful generalizations. She is interested in discovering the most informative set of conditional probabilities F(P|B), where P is a prediction and B is a set of boundary conditions. Her lawbook can be strong in two ways. First, it can be strong by being accurate: the conditional probabilities can be such that F(P|B) ≈ 1 for situations in which P and B, and F(P|B) ≈ 0 when B and ~P. But her lawbook can also be strong by being more applicable: that is, it can give her predictions for a wider range of situations, represented by the boundary conditions B. Call the first variety of strength accuracy, and the second comprehensiveness.
Accuracy and comprehensiveness trade off against one another: a lawbook can gain comprehensiveness by applying to situations with less uniform phenomena, although by doing so it will be unable to provide as accurate predictions of their behavior. Maximizing the combination of these virtues is hindered by the fact that the laws need to be in some sense repeatable: they must be formulated in such a way that multiple distinct situations have, according to the laws, the same boundary conditions, and the laws must yield the same predictions in situations with the same boundary conditions. This is a requirement if the laws are to be discovered and evidentially supported by induction. The laws are generalizations which we can learn in one context and apply to another.

Thinking of strength in this way combines the notions of strength and fit: although accuracy is a rough analog of fit, and comprehensiveness is a rough analog of strength, neither precisely maps onto the Lewisian notion. Repeatability plays part of the role, in trading off against comprehensiveness, that simplicity does in the traditional best system, but it is not a perfect match; and for our laws, accuracy trades off against comprehensiveness and repeatability together. We can have more accurate probabilities that are tailored to each experimental situation, but they will not be repeatable; we can have a probability function which is highly accurate, but only by excluding some situations, so it will not be comprehensive; and we can have a repeatable, comprehensive probability function that moves further away from 1 for some true predictions and further away from zero for some false ones.
Fundamental physics is extremely accurate. But it is not comprehensive. For any maximally fine-grained propositions B and P, a deterministic physics will assign zero to P if and only if P is false, and assign one to P if P is true. But physics is silent about more coarse-grained propositions: let B be the proposition that the temperature of a gas is T, its volume is V, and its pressure is P. Our agent's information about the boundary conditions of a system need not be maximally fine-grained. But if she conditionalizes on the proposition that some system's pressure and volume increase, what does fundamental physics say about the gas's temperature? Unfortunately, nothing. For even adding a posteriori identities relating physical properties to thermodynamic properties, we still will not arrive at a prediction for the temperature of the gas: there are physical states compatible with the boundary conditions which are temperature-increasing, and physical states compatible with the boundary conditions which are not. So while a set of laws in terms of maximally fine-grained propositions may be accurate, it will not be comprehensive.
To increase the comprehensiveness of the laws, we may add laws which take us from coarse-grained states (like temperature and pressure) to other coarse-grained states. This is the project of thermodynamics. Or we can add a probability function over the fine-grained states which is invariant under the fine-grained dynamics. This is the project of statistical mechanics. In either case, our predictions will diverge from perfect accuracy, so we will lose some accuracy in the overall system. But we will gain comprehensiveness. I claim that each scientific discipline increases the comprehensiveness of the overall lawbook. By adding more laws at a higher level, we increase the comprehensiveness of the overall system with some moderate sacrifice of its accuracy. The view here builds on the work of Handfield and Wilson (2013).[13]
Let's see how our FISA will behave on this way of understanding the laws. She will begin with a set of fundamental laws; she'll work out the consequences of these laws, and generate a probability function F0(P|B). This probability function will be incomplete; it will only be defined for maximally fine-grained propositions P and B. So FISA will see if there is a set of more coarse-grained variables in which she can formulate fairly accurate and repeatable laws. She'll add these to her lawbook, and work out the consequences, arriving at an extended credal function F1(P|B). But this credal function still won't be defined over propositions at all levels of grain, either because these coarse-grained laws don't imply more coarse-grained laws or because these implications are cognitively intractable for FISA (recall that FISA is not logically omniscient; she, like us, finds some inferences too complex to complete). So she will find another set of repeatable yet accurate generalizations at a coarser level of grain and add these to her lawbook, generating F2(P|B). When does this stop? Whenever either FISA's credal function is defined over all propositions (unlikely) or she's unable to find laws that have an acceptable degree of both repeatability and accuracy.
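FISA's procedure of layering successively coarser credal functions can be sketched as a simple lookup. This is a minimal toy of my own: the levels, propositions, and probability values are invented for illustration, not drawn from the text.

```python
# Toy sketch (illustrative, not from the paper) of FISA's layered lawbook:
# each extension F0, F1, F2 adds conditional probabilities at a coarser
# level of grain; FISA answers from the finest-grained function that is
# defined for her boundary conditions.

F0 = {("micro-state s'", "micro-state s"): 1.0}          # fundamental dynamics
F1 = {("temperature rises", "P and V increase"): 0.97}    # thermodynamics-like laws
F2 = {("population crashes", "resources collapse"): 0.8}  # ecology-like laws

lawbook = [F0, F1, F2]  # successively coarser levels of grain

def credence(prediction, boundary):
    """Return (level, F_n(prediction | boundary)) from the first level defined."""
    for level, F in enumerate(lawbook):
        if (prediction, boundary) in F:
            return level, F[(prediction, boundary)]
    return None  # credal function undefined: no law covers this situation

print(credence("temperature rises", "P and V increase"))   # (1, 0.97)
print(credence("stock market falls", "asteroid strikes"))  # None
```

The extension process stops, as in the text, when either the function is defined everywhere relevant or no further level offers acceptably repeatable and accurate laws.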
Let us make this more precise. We require our lawbook to be formulated in terms of a series of variables. Setting all of these variables determines a state of the world. The variables thereby partition the space of nomically possible worlds, with each cell of the partition corresponding to a unique state of the world.[14] This requirement is not motivated by considerations of fundamentality (as is Lewis's naturalness constraint). Rather, to be repeatable and comprehensive our lawbook must identify some situations as identical with respect to the quantities about which it yields predictions, and its predictions must be functions of those quantities.

[13] Handfield and Wilson deliver an apparatus for combining distinct objective probability functions at various levels of grain without generating the sort of contradictions described in Meacham (2013), but they do not offer a metaphysical view of probability to motivate their hierarchy. The view described below provides a motivation for the sort of hierarchical view described by Handfield and Wilson and extends the account to deterministic laws.

[14] Thus far, nothing prevents two worlds from differing without differing with respect to the quantitative properties the laws concern. We should take this to be a benign consequence: for if physics is complete, each cell in the partition induced by the variables of physics contains exactly one world. But if it is not, some worlds differ without differing physically, and so have the same physical state despite being distinct. Whether or not physics (or any science) is complete in this sense should be expressible by our theory of laws but not determined by it.
But our information about boundary conditions can vary in its degree of precision; less precise information is a coarse-graining of more precise information. The fundamental physical laws, together with boundary conditions specifying the heat and volume of a gas, yield no predictions about the gas's future state. This gives us reason to include both coarse- and fine-grained variables in a complete lawbook. Given a set of fine-grained variables V, we can expand to include a more coarse-grained set by adding variables V' which are such that, for each state S identified by setting the variables of V, there is some state S' defined by the variables of V' such that S ⊢ S' but not vice versa. If this is the case, then each cell of the partition induced by the variables of V' will be a disjunction of cells induced by the variables of V. We can call the union of these two variable sets V+.
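The relation between V and V' can be illustrated with a toy coarse-graining. The fine-grained values and the 'low'/'high' coarse cells below are my own invented example of the structure the text describes.

```python
# Illustrative sketch (my notation): coarse-graining a variable set.
# Fine-grained states are cells of a partition; each coarse cell induced
# by V' is a union (disjunction) of cells induced by V.

# Fine-grained variable V: toy discrete states.
fine_cells = [0, 1, 2, 3, 4, 5]

# Coarse-graining map: every fine state determines a coarse state
# ("S ⊢ S'"), but not vice versa.
def coarse_of(fine_state):
    return "low" if fine_state <= 2 else "high"

# Each cell of the coarse partition is a disjunction of fine cells.
coarse_cells = {}
for s in fine_cells:
    coarse_cells.setdefault(coarse_of(s), set()).add(s)

print(coarse_cells)  # {'low': {0, 1, 2}, 'high': {3, 4, 5}}

# Asymmetry: fixing the fine state settles the coarse state, but fixing
# the coarse state 'low' leaves three fine-grained realizers open.
print(len(coarse_cells["low"]))  # 3
```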
These variables will either represent the fundamental quantitative properties of the world or be coarse-grainings of variables that represent the fundamental quantitative structure of the world.[15] We can now evaluate law-systems which include information at different levels of grain.
We will evaluate the informativeness of a lawbook by evaluating the probability function it generates. But to do so, we need a recipe for generating a probability function from the laws. We will do so as follows: if A → B is derivable from our lawbook, then P(A|B) = 1; if P(A) = f(x) is derivable from our lawbook, where f(x) is some function of our variables, then P(A|B) = f(B), where B is a proposition giving the values of the variables in x.
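This recipe can be given a minimal sketch in code. The encoding, the example conditional, and the function standing in for f are all hypothetical, invented to illustrate the two clauses of the recipe.

```python
# Minimal sketch (invented example) of the recipe in the text: derivable
# conditionals yield probability 1; laws giving P(A) as a function f of
# the variables yield P(A|B) = f(B), where B sets the variables' values.

# Derivable conditionals in the toy lawbook.
conditionals = {("gas expands", "heated at constant pressure")}

# Derivable functional laws: P(A) as a function of variable settings.
functional_laws = {
    "gas expands": lambda B: min(1.0, 0.25 * B["temperature_rise"]),  # hypothetical f
}

def P(A, B_name, B_values=None):
    """P(A|B) as generated by the toy lawbook; None where underivable."""
    if (A, B_name) in conditionals:
        return 1.0
    if B_values is not None and A in functional_laws:
        return functional_laws[A](B_values)
    return None

print(P("gas expands", "heated at constant pressure"))          # 1.0
print(P("gas expands", "lab run", {"temperature_rise": 3}))     # 0.75
```

Returning None where nothing is derivable models the incompleteness discussed below: the generated probability function is partial, not total.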
The notion of derivability here is importantly weaker than implication; for austere lawbooks will imply many facts which are cognitively inaccessible to agents like us because of the computational complexity involved in deriving them. In such a case, we can and ought to add higher-level laws by hand, even if these laws decrease the overall accuracy of the lawbook. Derivability, rather, is something closer to cognitive accessibility: a generalization is derivable in the sense here specified if sufficiently idealized scientists could derive it. This adds a parameter to our theory of laws: if our scientists are very idealized, then they will be able to generate a more informative probability function from fewer laws; if they are more like us, they will need to manually increase the informativeness of the lawbook by adding more laws. Call this parameter accessibility, where a lawbook is more accessible when its laws are derivable by less ideal scientists.[16]

[15] As it stands, this requirement on our variable space looks similar to the naturalness constraint of Lewis (1983). But it is just suspicion about this constraint that led Callender and Cohen to reject the traditional best system! Appealing to it here is suspicious at the least. But we can drop the requirement that the variables be metaphysically fundamental; instead, we can hold that the laws identify some set of variables as fundamental, and that all laws are coarse-grainings of these (nomically, but perhaps not metaphysically) fundamental variables. We can then require that our lawbook contain a set of macroscopic variables in which we are particularly interested as a coarse-graining of its fundamental variables, and evaluate its informativeness in tandem with its terms using a method similar to that of Loewer's (2007) Package Deal Account.
Because not all of the implications of our laws are accessible, our probability function will be incomplete: it will not be defined for boundary conditions which, when taken as inputs to the equations of our laws, yield equations too complex to be solved by our less-than-ideal scientists.
The accuracy of the lawbook is evaluated as follows: each lawbook is given an accuracy score using some scoring rule function.[17] The probability function the lawbook generates is a conditional one; to evaluate its accuracy, we look at situations which have the boundary conditions described by the laws. We update the conditional probabilities on those boundary conditions, and see what probability the laws assign to the actual outcome of the situation. The closer the probability assigned by the laws is to the actual outcome, the higher the laws score on accuracy.
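One concrete choice of scoring rule is the quadratic (Brier) rule. The sketch below is illustrative only: the situations and probability values are invented, and nothing in the text mandates this particular rule.

```python
# Hedged sketch: scoring a lawbook's accuracy with a quadratic (Brier)
# scoring rule. Each record pairs the probability the lawbook assigned
# to a prediction, after updating on the situation's boundary conditions,
# with whether the predicted outcome actually occurred.
situations = [
    (0.9, True),   # laws gave 0.9; the prediction came true
    (0.8, True),
    (0.3, False),  # laws gave 0.3; the prediction came false
]

def brier_accuracy(records):
    """Mean quadratic score: probabilities closer to outcomes score higher."""
    penalties = [(p - (1.0 if occurred else 0.0)) ** 2
                 for p, occurred in records]
    return 1.0 - sum(penalties) / len(penalties)

print(round(brier_accuracy(situations), 3))  # 0.953
```

A lawbook whose conditional probabilities sit nearer 1 for actual outcomes and nearer 0 for non-outcomes earns a higher score, matching the informal gloss in the text.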
The comprehensiveness of the laws is determined as follows: given the lawbook's accessibility, over how many actually instantiated propositions is it defined? Recall that it need not be defined for all actual situations, nor need it be defined at all levels of grain. The more situations and levels of grain over which it is defined, the more comprehensive it is.
This is the democratic view: each science represents a distinct level of grain, at which we must balance accuracy and repeatability to formulate laws. But the justification for adding new sciences is to improve the overall score of FISA's credal state in terms of accuracy, repeatability, and comprehensiveness. How, then, do we satisfy our four requirements?
Methodological Independence: The laws of each science are added to the lawbook because they individually increase the informativeness of the lawbook. Determining which generalizations will fill this role at some level of grain is the job of each special science.
Counterfactual Robustness: The laws of each science are laws for the same reason: they increase the comprehensiveness of our system of laws without weakening its accuracy or repeatability. They support counterfactuals for the same reason the fundamental laws do. Of course, for a Humean, this story is complicated; the short version says that the laws are counterfactually robust because they ground counterfactuals by fiat; the longer version justifies this stipulation by the pragmatic utility of holding these particularly informative and supported generalizations fixed while evaluating counterfactuals.

[16] Rather than being a disadvantage of the view, the addition of this parameter allows us to generate a hierarchy of lawlikeness. For some laws will only feature in the most accessible lawbooks; these laws are more approximate than those which feature in the least accessible, most austere lawbooks. Take, as an example, the laws of classical mechanics. These laws are only approximately true, but they somehow manage to support counterfactuals, appear in explanations, and underwrite predictions. Nonetheless they are in some way less deserving of the name 'laws' than the laws of quantum mechanics. On my view, this is because the laws of classical mechanics are unnecessary for agents with access to the laws of relativistic quantum field theory and unlimited cognitive capabilities, but extremely useful and informative for agents more like us, who find the equations of quantum field theory impossible to solve exactly except in very simple situations.

[17] For more detail on scoring rules, see Joyce (1998), (2009), and Leitgeb and Pettigrew (2010). Although some features of scoring rule functions are agreed upon (and these are features of quadratic scoring functions), there is no single agreed-upon function.
Mutual Constraint: Because the accuracy, comprehensiveness, and repeatability of a law system are evaluated holistically, we can expect the laws not to contradict one another (if they did, the accuracy of the lawbook would be obviously compromised), and we should expect the various sciences to inform one another. Discovering connections between sciences allows us to ensure the mutual consistency of our overall belief structure.
Asymmetry: The facts at each level are a coarse-graining of facts at some lower level. If B is a coarse-graining of A, then setting the value of A determines the value of B (but not vice versa). So more fine-grained information screens off more coarse-grained information, and the facts of the higher-level science are implied by the facts of the lower-level sciences.[18]
I conclude that the democratic view neatly explains all four features of the coordination problem: methodological independence, counterfactual robustness, mutual constraint, and asymmetry. Its explanations of methodological independence and counterfactual robustness are reminiscent of the radical anarchist's; its explanations of mutual constraint and asymmetry are close to those of the imperialist and the moderate anarchist, respectively. In this way it poaches the best features of each of the views I've discussed.
3.3 Further Advantages of Democracy
It's worth noting here that even in the most austere, inaccessible lawbook, the laws of fundamental physics will not be wholly comprehensive. For while they will be defined over all fine-grained propositions, they will not have any defined probabilities conditional on coarse-grained information. For a coarse-grained proposition is a collection of infinitely many finely delineated microphysical states; there are infinitely many arrangements of fundamental particles corresponding to the proposition that the heat of this gas is forty Kelvin, for example, and nothing about that proposition gives us reason to take any of its microphysical underlyers to be more likely than any others.
Albert (2000) and Loewer (2009) argue on the basis of this sort of considaration that the lawbook must contain PROB, a law specifying a probability distribution over initial states. Such a distribution will yeild conditional probabilities
conditional on any macroscopic proposition compatible with microphysics. I've
argued previously that Loewer and Albert do not go far enough because they
cannot account for the lawfulness of special science generalizations; the view I
defend here justies the inclusion of the laws of the special sciences by appeal
to the cognitive intractability of deriving conditional probabilities from PROB
for most special science generalizations.
Interestingly, though, on my view a
creature twofold: rst, Lewis' notion of strength doesn't allow with unlimited
18 It is just this asymmetry that requires us to add special scientific laws to our system: though the fine-grained information settles the coarse-grained states, coarse-grained boundary conditions tell us nearly nothing about their fine-grained realizers. So we need to add higher-level laws to make predictions given coarse-grained information.
cognitive capacities would be interested in including PROB in her lawbook, but
no other special science laws. So on my view PROB has a special status.
This view of laws neatly accounts for several other features of special scientific laws which have been recognized by various authors (Mitchell (2000), Woodward (2003, 2013)): first, the laws of special sciences have exceptions, but these exceptions cannot be captured in ceteris paribus clauses using the concepts of the special science (Woodward (2003), Cartwright (1997)). On the view sketched, special scientific generalizations are lawful if and only if they feature in a system which acceptably balances accuracy, comprehensiveness, and repeatability. Laws which have exceptions can lack perfect accuracy but, by being repeatable and extending the comprehensiveness of the system, be worthwhile additions to the lawbook. Their inclusion does not require their exceptionlessness, nor does it require that there be formulable or nonredundant ceteris paribus conditions limiting their scope.
Secondly, the worthiness of special science vocabulary is not dependent on its definability in fundamental terms. On Lewis' view, whether a term is eligible for use in a special science depends on its degree of naturalness; degree of naturalness depends, for Lewis, solely on the length of its definition in perfectly natural terms. This means, among other things, that the predicate 'electrino', which we stipulate to refer to electrons created before 2015 and neutrinos created after 2015, is more eligible to feature in a special scientific law than is the term 'mammal', which presumably has an extremely complex and disjunctive definition in perfectly natural terms. 'Electrino' is not more natural than 'mammal', and independently of our view of special science vocabulary we should recognize that mammals are more similar to one another than electrons are to neutrinos, whenever they are created. On the view here offered, the eligibility of a term instead depends on whether the comprehensiveness of a set of laws can be sufficiently increased by adding laws in those terms to our complete lawbook.19
While it is a requirement of the view that the higher-level terms force a partition of worlds which is a coarse-graining of those offered by the lower-level terms, this minimal constraint does not make the relative eligibility of coarse-grainings dependent on anything other than the informativeness of the laws so phrased, as measured by accuracy, comprehensiveness, and repeatability.
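The partition requirement just mentioned can be stated precisely (the notation is mine, introduced only for illustration): where $\{f_i\}$ is the partition of worlds induced by the lower-level vocabulary and $\{c_j\}$ the partition induced by the higher-level vocabulary, the constraint is simply that every higher-level cell be a union of lower-level cells:

```latex
\text{for each } j, \text{ there is some index set } I_j \text{ such that } \quad
c_j \;=\; \bigcup_{i \in I_j} f_i .
```

Nothing in this condition privileges one admissible coarse-graining over another; that ranking is left entirely to the informativeness of the resulting laws.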
Thirdly, laws come with various degrees of lawfulness. Some laws are less modally robust: they hold in fewer situations and they are less stable than others. There is a continuum of laws, starting with the laws of physics, which are exceptionless and maximally modally robust, moving through the central principles of special sciences, like the principle of natural selection or the thermal relaxation time of a certain sort of liquid, and culminating in mere accidental generalizations. The view sketched, unlike the dispositional account of laws, has the capacity to account for this. For there is more than one way to weight the three virtues this view rests on: if perfect (or near-perfect) accuracy is given maximal weight, then only the laws of physics are included in the lawbook. By
19 For a more in-depth discussion of the difficulties involved in tying the Lewisian notion of naturalness to our account of laws, see Loewer (2007) and Eddon and Meacham (2013); for a discussion of this problem focusing on special scientific laws, see Callender and Cohen (2009).
varying our permissiveness for accuracy, we will vary the generalizations which are permitted in the lawbook. Those which count as laws on weightings that demand more accuracy occupy a more privileged place on this continuum than those which do not.
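One crude way to picture this continuum (a toy model of my own, not a formalism from the text): score a candidate system $S$ as a weighted balance of the three virtues, with weights $w_a$, $w_c$, $w_r$ for accuracy, comprehensiveness, and repeatability:

```latex
V(S) \;=\; w_a \cdot \mathrm{Acc}(S) \;+\; w_c \cdot \mathrm{Comp}(S) \;+\; w_r \cdot \mathrm{Rep}(S).
```

As $w_a$ dominates, only the exceptionless laws of physics earn a place in the best system; as the weight on accuracy is relaxed relative to comprehensiveness and repeatability, special science generalizations are admitted in turn, ordered by how accurate they are. The continuum of lawfulness then just is this ordering.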
This hierarchy can be tied to the counterfactual robustness of the laws. For the view sketched is Humean, according to which counterfactuals are made true by the laws. Plausibly,20 the strength of counterfactual support varies with the accuracy of the laws.21 So counterfactuals which are made true by the laws of physics, our most accurate set, override those made true by biology. And within a science, the counterfactuals made true by more accurate laws trump those made true by less accurate laws; so the counterfactuals made true by quantum mechanics trump those made true by classical mechanics.
I've claimed that a particular form of the Best Systems Account of lawhood can explain the relevant features of the relationship between laws in various scientific disciplines. Can a more metaphysically robust view do this? I am doubtful. For a key feature of the view is taking the informativeness of the lawbook, measured in a particular way, to be partially constitutive of lawhood. Anti-Humeans reject the claim that the laws are, by their nature, informative. So no way of measuring the informativeness of the lawbook will suffice to make some higher-level generalization a law.
Nonetheless, proponents of metaphysically robust views who hold that only the fundamental laws are backed by modally robust fundamental facts can appeal to the view I've defended to distinguish between accidents and laws at a higher level. Many philosophers who doubt that a fully Humean story can be told about the fundamental structure of the world are subject to the criticisms laid at the feet of the imperialist in section 2.1. Although the view that results will have a different explanation of the counterfactual robustness of the special science laws from that of the laws of physics, it will inherit the other advantages of the democratic view.
4 Conclusion
Extant views describing the relationship between distinct scientific disciplines leave key features of this relationship unexplained. This failure manifests itself in philosophical views about the lawhood of special scientific laws; these views, no matter their metaphysical commitments, fail either to account for the autonomy
20 Or by stipulation!
21 Woodward (2003), 6.12, argues against Sandra Mitchell's (2000) notion of stability and Brian Skyrms's (1995) notion of resiliency on the basis that these nonmodal notions cannot capture what we are really interested in in discovering laws, viz., their counterfactual stability (this point is also made by Lange (2009)). On the view offered, as in other Humean views, counterfactual stability is grounded in occurrent facts, in this case, a sort of stability across situations in a similar vein to that described by Mitchell and Skyrms. So it would be a mistake to criticize this view for missing the counterfactuals: they are true because of the occurrent facts described. Of course, the proof is in the pudding: does the sort of stability here described generate the right counterfactuals? I hold that it does.
of the special sciences or for the mutual dependence of scientific disciplines. A Humean view, which takes the informativeness of the laws to be partially constitutive of their lawhood, measures informativeness by the accuracy of predictions made by the laws on the basis of repeatable boundary conditions, and evaluates the informativeness of all sciences together, is uniquely able to capture these features of the relationship between laws. I then point out additional advantages of the view: it accounts for the degrees of lawfulness of special scientific laws, the fact that special scientific laws have exceptions, and the fact that explaining these exceptions often requires concepts that are not a part of the science in which the law is formulated.
References
[1] Albert, David (2000). Time and Chance. Cambridge, MA: Harvard University Press.
[2] Armstrong, David (1983). What Is a Law of Nature? Cambridge: Cambridge University Press.
[3] Armstrong, David (1997). A World of States of Affairs. Cambridge: Cambridge University Press.
[4] Beatty, John (1995). The Evolutionary Contingency Thesis. In Wolters, Gereon, and James G. Lennox, eds, Concepts, Theories, and Rationality in the Biological Sciences. Pittsburgh: University of Pittsburgh Press.
[5] Beebee, Helen (2000). The Non-Governing Conception of Laws of Nature. Philosophy and Phenomenological Research 61: 571-594.
[6] Beebee, Helen (2006). Does Anything Hold the Universe Together? Synthese 149 (3): 509-533.
[7] Beebee, Helen (2011). Necessary Connections and the Problem of Induction. Nous 45 (3): 504-527.
[8] Bird, Alexander (2007). Nature's Metaphysics: Laws and Properties. Oxford: Oxford University Press.
[9] Callender, Craig, and Jonathan Cohen (2009). A Better Best System Account of Lawhood. Philosophical Studies 145 (1): 1-34.
[10] Callender, Craig, and Jonathan Cohen (2010). Special Science, Conspiracy, and the Better Best System Account of Lawhood. Erkenntnis 73: 427-447.
[11] Cartwright, Nancy (1983). How The Laws of Physics Lie. Oxford: Oxford University Press.
[12] Cartwright, Nancy (1999). The Dappled World: A Study at the Boundaries of Science. Cambridge: Cambridge University Press.
[13] Cross, Troy (2012). Goodbye Humean Supervenience. Oxford Studies in Metaphysics 7: 129-153.
[14] Demarest, Heather (forthcoming). Powerful Properties, Powerless Laws. In Jacobs, Jonathan (ed.), Putting Powers to Work: Causal Powers in Contemporary Metaphysics. Oxford: Oxford University Press.
[15] Dretske, Fred (1977). What Is a Law of Nature? Philosophy of Science 44: 248-268.
[16] Earman, John, and Roberts, John (2005a). Contact with the Nomic: A Challenge for Deniers of Humean Supervenience about Laws of Nature (Part I). Philosophy and Phenomenological Research 71 (1): 1-22.
[17] Earman, John, and Roberts, John (2005b). Contact with the Nomic: A Challenge for Deniers of Humean Supervenience about Laws of Nature (Part II). Philosophy and Phenomenological Research 71 (2): 253-286.
[18] Eddon, Maya, and Chris Meacham (2013). No Work for a Theory of Universals.
[19] Elga, Adam (2004).
[20] Forster, Malcolm, and Elliott Sober (1994). How to Tell When Simpler, More Unified, or Less ad hoc Theories Will Provide More Accurate Predictions. British Journal for the Philosophy of Science 45 (1): 1-35.
[21] Goodman, Nelson (1955). Fact, Fiction, Forecast. New York: Harvard University Press.
[22] Hall, Ned (2004).
[23] Hall, Ned (2004). Two Mistakes About Credence and Chance.
[24] Hall, Ned (MS). Humean Reductionism about Laws of Nature. <philpapers.org/rec/HALHRA> accessed September 13, 2013.
[25] Hicks, Michael, and Schaffer, Jonathan (manuscript). Derivative Properties in Fundamental Laws.
[26] Kukla, Andre (1995). Forster and Sober on the Curve Fitting Problem. British Journal for the Philosophy of Science 46 (2): 248-252.
[27] Lange, Marc (2009). Laws and Lawmakers: Science, Metaphysics, and the Laws of Nature. Oxford: Oxford University Press.
[28] Lewis, David (1980). A Subjectivist's Guide to Objective Chance. In Richard C. Jeffrey, ed, Studies in Inductive Logic and Probability. University of California Press.
[29] Lewis, David (1983). New Work for a Theory of Universals. In Papers in Metaphysics and Epistemology.
[30] Lewis, David (1994). Humean Supervenience Debugged. Mind 103 (412): 473-490.
[31] Loewer, Barry (1996). Humean Supervenience. Philosophical Topics 24 (1): 101-127.
[32] Loewer, Barry (2007). Laws and Natural Properties. Philosophical Topics 35 (1/2): 313-328.
[33] Loewer, Barry (2008). Why There Is Anything Except Physics. In Jakob Hohwy and Jesper Kallestrup (eds.), Being Reduced: New Essays on Reduction, Explanation, and Causation. Oxford: Oxford University Press.
[34] Loewer, Barry (2009). Why Is There Anything Except Physics? Synthese 170 (2): 217-233.
[35] Loewer, Barry (2012). Two Accounts of Laws and Time. Philosophical Studies 160 (1): 115-137.
[36] Maudlin, Tim (2007). The Metaphysics Within Physics. Oxford: Oxford University Press.
[37] McKenzie, Kerry (forthcoming). No Categorical Terms: A Sketch for an Alternative Route to Humeanism about Fundamental Laws. In M. C. Galavotti, S. Hartmann, M. Weber, W. Gonzalez, D. Dieks, and T. Uebel (eds.), New Directions in the Philosophy of Science. New York: Springer.
[38] Mill, John Stuart (1843). A System of Logic, Ratiocinative and Inductive. Reprint, New York: Echo Library, 2009.
[39] Ramsey, Frank Plimpton (1927). Laws and Causality. Reprint in D. H. Mellor (ed.), F. P. Ramsey: Philosophical Papers. Cambridge: Cambridge University Press, 1990.
[40] Rosenkrantz, Roger (1977). Inference, Method, and Decision: Towards a Bayesian Philosophy of Science. New York: Reidel.
[41] Woodward, James (2013). Simplicity in the Best Systems Account of Laws of Nature. British Journal for the Philosophy of Science 0: 1-33.
[42] Woodward, James (2013). Laws, Causes, and Invariance. In Mumford, Stephen, and Matthew Tugby, eds, Metaphysics and Science, 48-72. Oxford: Oxford University Press.