ISAAC LEVI
CONTRACTING FROM EPISTEMIC HELL IS ROUTINE
ABSTRACT. I respond to Erik Olsson’s critique of my account of contraction from inconsistent belief states by admitting that such contraction cannot be rationalized as a deliberate
decision problem. It can, however, be rationalized as a routine designed prior to inadvertent
expansion into inconsistency when the deliberating agent embraces a consistent point of
view.
1. INADVERTENT EXPANSION INTO INCONSISTENCY AND COERCED
CONTRACTION
Erik Olsson has recently mounted a vigorous and intelligent critique of the
account of inadvertent expansion into inconsistency and coerced contraction that I first proposed in Levi (1980) and restated in modified form in
Levi (1991). He has also suggested a remedy for the difficulties he finds
with my proposals. The difficulties he poses call for a better response than
the anticipations of them to which I gestured in Levi (1980a) or Levi
(1991). But the proposals I made in Levi (1991) are not threatened as
seriously as Olsson claims. What I shall undertake to do here is first offer
a brief sketch of the account of routine expansion, inadvertent expansion
into inconsistency and coerced contraction I have proposed. Following that
I shall summarize Olsson’s objections and how he proposes to avoid them.
I shall close by elaborating my own response to Olsson’s objections and
my preferred remedy.
I have argued that there are three ways in which expansions from K to K+h (the deductive closure of K and h) may take place legitimately:
(1) Deliberate or Inferential Expansion: A potential answer to a question
in a given context of inquiry is a potential expansion of K. The inquirer X recognizes a given roster of potential answers. Each potential
answer is evaluated with respect to the risk of error its adoption incurs
(given the truth of K) and the value of the information its endorsement
affords. The inquirer chooses the potential answer that offers the best
trade-off between the concern to minimize risk of error and to maximize the value of the information added to K. Such deliberate expansion
is used in choosing between rival theoretical conjectures, in parameter
estimation and other forms of statistical inference, and more generally
in what may be called inductive or ampliative inference.
(2) Expansion by Choice: Decision maker Y presupposes, in advance of making a choice among the options available in the context of a decision problem (whether it is practical or cognitive), that no matter which of the available options Y chooses, the option chosen will be implemented. When Y chooses a given option,
Y becomes committed to full belief that Y has so chosen and, hence,
that the option thus chosen will be implemented.
(3) Routine Expansion: The inquirer Z is committed prior to expansion
to adding new information to Z’s initial commitments K in conformity with a program for forming new beliefs in response to signals
from Z’s environment that are specified in advance of implementation.
Routine expansion takes place in one of at least three ways:
(a) When Z adds new information to K in response to sensory inputs.
(b) When Z comes to full belief that h because witness or expert W
on whom Z relies for information on some particular topic has
testified that h is true.
(c) When Z makes an interval estimate of a statistical parameter in
conformity with the procedures of confidence interval estimation
or rejects a null hypothesis at a given level of significance by using
a most powerful test.
Unlike the other forms of expansion, routine expansion is implemented
by responding to inputs (sensory stimulation or testimony of witnesses)
according to some program rather than being inferred from information
believed to be true about such occurrences. Even though the implementation of such a program is not expected by the inquirer to produce a belief
inconsistent with the initial belief state, such conflict can occur. Indeed,
the inquirer X might rule out incurring such inconsistency as a serious possibility relative to X’s point of view prior to implementing the program for routine expansion. Nonetheless, on the belief contravening assumption that such expansion into inconsistency does occur, X should acknowledge that the expansion might have occurred in conformity with the program for routine expansion that X has undertaken to implement. This should not be so for
either deliberate expansion or expansion by choice.
On the account of routine expansion I offered, expansion into inconsistency in this fashion is “inadvertent”. The rational agent ought to take steps
to contract the inconsistent belief state so as to avoid this untenable condition. The contraction is “coerced” in order to retreat from inconsistency
(Levi 1991, ch. 4; see also Levi 1980a).
Coerced though it may be, I contended that the contraction is determined via a two-stage decision problem. The inquirer expanded into inconsistency by adding information h by routine expansion to the inquirer’s initial belief state K where K entails ∼h. The expansion K+h = K⊥.
To extricate him or herself from inconsistency, the inquirer should contract the inconsistent belief state K⊥ . Observe, however, that K⊥ can be
contracted in many different ways. On the view I have taken, the inquirer
should seek to minimize loss of valuable information among all the options
for contraction available to him or her. But what constitute the options
available for contraction?
My contention has been that in retreating from inconsistency, the
inquirer’s options should reduce to three:
I. Contracting from K⊥ to K∗h. K∗h is the AGM revision of K by adding h. The AGM revision of K by adding h is the expansion by adding h to K−∼h, the contraction of K by removing ∼h from K with a minimum loss of valuable information. The inquirer ends up giving up information in K and retaining the information obtained via routine expansion.
II. Contracting from K⊥ by throwing out the information obtained via
routine expansion. This process may be tantamount to returning to
the status quo ex ante. Normally, however, it will be somewhat more
complicated. In the first place, the information that a conflicting report
was made will be retained while the assumption in K as to the reliability of the program for routine expansion may be called into question.
This net result is a “residual shift” from K in the terminology of Levi
(1980). In any event, the proposition in K that entailed ∼ h is retained
in the net modification of K. h is removed. Whether the reliability of
the program for routine expansion is called into question or not may
depend on the losses in informational value to be incurred from doing
so.
III. Contracting from K⊥ to the join of I and II. The inquirer throws out
the information obtained via routine expansion and the background
information challenged by it as well.
In the case of option I, a decision between rival contraction strategies
removing ∼ h from K is required before adding h to form K∗h . In the case
of option II, a decision whether to call the reliability of the program for
routine expansion into question is required. If it is called into question, a
further decision involving a choice between rival contraction strategies is
in order.
Often the losses incurred by strategies I and II will be equal or noncomparable. In such cases, III should be endorsed. Occasions can arise,
however, where options I or II should be favored.
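To make the rule just stated concrete, here is a minimal Python sketch, my own schematic illustration rather than Levi’s formalism; the numeric losses are hypothetical placeholders, and noncomparability is crudely modeled by None:

```python
from typing import Optional

def choose_contraction(loss_I: Optional[float], loss_II: Optional[float]) -> str:
    """Choose among the contraction options described above: endorse III
    (the join of I and II) when the losses of valuable information incurred
    by I and II are equal or noncomparable; otherwise favor the cheaper one."""
    if loss_I is None or loss_II is None or loss_I == loss_II:
        return "III"
    return "I" if loss_I < loss_II else "II"

print(choose_contraction(0.2, 0.5))   # -> I   (strictly smaller loss)
print(choose_contraction(0.3, 0.3))   # -> III (equal losses)
print(choose_contraction(None, 0.4))  # -> III (noncomparable losses)
```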
Erik Olsson has challenged this account of coerced contraction that I
urged in Levi (1991). I shall begin my response by reviewing some features
of routine expansion in closer detail.
2. INPUT AND EVIDENCE
The new information obtained by routine expansion (regardless of which
type it is) is not “inferred” from data taken to be premisses of an argument. The new belief is acquired in response to input from the inquirer’s
environment in conformity with a program specified in advance of implementation. In implementing the procedure, the data are not evidence
but input. The new information is acquired directly in that it is not inferred from or otherwise justified on the basis of prior information. The
new information is not believed “as a result of immediately perceiving
it” as Olsson suggests (p. 2). That is to say, the acquisition of the new
information according to the program of routine expansion presupposes
the reliability of that program prior to its implementation. In many contexts, the presupposition includes a rather sophisticated theory of the way
the program for routine expansion works. Thus, routine expansion is the
sole method of non-inferential or “direct” acquisition of new information.
However, routine expansion is also “theory laden”. Routine expansion is
direct (that is, non-inferential) but is mediated (by background beliefs and
theories).1
In confidence interval estimation, the program for routine expansion
(or more generally for routine decision making) is chosen deliberately
with some definite objective in mind. In the other two types of routine
expansion, the program is often the product of nature or nurture. Such a program may indeed be modified when a difficulty calling for critical reflection requires it. But it would be misleading to suggest that expansion
that appeals to the testimony of the senses or of witnesses conforms to a
deliberately chosen program in all or even most contexts.
In confidence interval estimation, the interval estimate is selected in
response to the data (that serve as input and not as premisses) according to
the chosen program. A simple example derived from an illustration of Ian
Hacking’s for another purpose will help explain how this works.
Suppose X is certain that a given urn has either 99 black balls and one
white (HB ); or 99 white balls and one black (HW ). X is also convinced
that X will observe the color of a ball selected at random from the urn.
The problem is to make a prediction as to the color of the ball.
X might begin by considering X’s state of credal probability judgment prior to observing the outcome of the random selection. Suppose
it is numerically determinate assigning HB the “prior” probability x and
HW 1 − x. If X responds to observation of the outcome of the draw
via a routine expansion of type (a) by expanding X’s initial state K by
adding the information that the ball drawn is black (B), the proposition
can then be used as evidence to update X’s state of credal probability.
Conditionalization via Bayes’ theorem yields the result that Q(HB/B) = xQ(B/HB)/[xQ(B/HB) + (1 − x)Q(B/HW)] = 0.99x/[0.99x + 0.01(1 − x)].
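A minimal sketch of this conditionalization step, with the likelihoods Q(B/HB) = 0.99 and Q(B/HW) = 0.01 fixed by the urn example and the prior x left free (the sample values of x in the loop are merely illustrative):

```python
def posterior_HB_given_B(x, q_B_given_HB=0.99, q_B_given_HW=0.01):
    """Bayes' theorem: Q(HB/B) = x*Q(B/HB) / [x*Q(B/HB) + (1-x)*Q(B/HW)]."""
    numerator = x * q_B_given_HB
    return numerator / (numerator + (1 - x) * q_B_given_HW)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"prior x = {x:.2f}  ->  Q(HB/B) = {posterior_HB_given_B(x):.4f}")
```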
One can then decide whether to expand deliberately by adding HB , HW
or to remain in suspense. Such a decision is controlled in part by the aims
of the inquirer in adding new information to the inquirer’s belief state.
According to the view I have taken of this matter since Levi (1967a, b), the
aim should be to obtain new error free information.
According to the proposal in Levi (1967b), the informational value
accruing from adding HB to the initial state K is 1 − M(HB ) and from
adding HW is 1 − M(HW ). M is a probability measure such that M(HB ) =
1 − M(HW ). The epistemic utility accruing from adding HB when HB is
true is 1 − qM(HB ). When HB is false, it is −qM(HB ). The expected
epistemic utility accruing from adding HB prior to finding out the result
of sampling from the urn is Q(HB ) − qM(HB ). Here q is the index of
boldness and takes positive values less than or equal to 1. After sampling,
the expected epistemic utility is Q(HB/B) − qM(HB).2

Credal probability
should not, however, be numerically determinate in all contexts. Indeed, in
our toy example and the more serious examples of parameter estimation
that it simplifies, prior credal probability should go indeterminate in typical cases. Classical statisticians, in effect, took for granted that such prior
probabilities should be maximally indeterminate. The permissible values
of x range in that case from 0 to 1 inclusive. Applying conditionalization via Bayes’ theorem to each permissible value yields a posterior y for HB that also ranges from 0 to 1 inclusive. In the situation envisaged, nothing
can be learned about the truth or falsity of HB from using the data (the
information about the outcome of the experiment) as evidence.
In particular, if one seeks to choose via deliberate expansion whether to
add HB , HW or to remain in suspense before sampling and after sampling,
the set of permissible expected utility assessments remains the same.
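The following sketch makes this point concrete under the illustrative assumptions q = 0.5 and M(HB) = M(HW) = 0.5: when x is maximally indeterminate, the permissible expected epistemic utilities for adding HB span the same interval whether computed from the prior or from the posterior.

```python
q = 0.5   # index of boldness (illustrative choice)
M = 0.5   # M(HB) = M(HW), the informational-value-determining probability

def expected_utility_add(Q_H):
    # Expected epistemic utility of adding a hypothesis with credal
    # probability Q_H: Q(H) - q*M(H).
    return Q_H - q * M

# Before sampling Q(HB) is the prior x; after sampling it is Q(HB/B).
# With maximal indeterminacy both range over [0, 1], so the permissible
# expected utilities form the same interval in either case.
lo, hi = expected_utility_add(0.0), expected_utility_add(1.0)
print(f"permissible expected utilities for adding HB: [{lo}, {hi}]")
```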
When several options are E-admissible, one might invoke additional
value commitments to choose among the E-admissible ones. There is no
principle of rationality to invoke to mandate appeal to a standard secondary criterion. However, one commonly invoked principle is the injunction
to maximize minimum expectation among the expectations conditional
on HB and conditional on HW . In deliberate expansion, the minimum is
−qM(HB) for adding HB and −qM(HW) for HW whether the
calculations are made before or after the color of the ball selected is
ascertained.
To address the problem of the uselessness of observational data as evidence when priors go maximally indeterminate, Peirce and later Neyman,
Pearson and Wald proposed another way to use the data – using it as input
into a program.3 In the most general case, it is input into a program for
routine decision making (“inductive behavior” as Neyman and Pearson
originally called it). When the “output” is the formation of a full belief
or an estimate, I call it “routine expansion”.
Such a program specifies, in advance of implementation, a function from data inputs to belief outputs. Table I specifies a list of functions that could be considered:

TABLE I

     f1   f2   f3   f4   f5     f6     f7     f8     f9
B    HB   HW   HB   HW   HB∨W   HB     HW     HB∨W   HB∨W
W    HB   HW   HW   HB   HB∨W   HB∨W   HB∨W   HB     HW

In addition, one might consider mixtures of these programs. We shall suppose that these are not options here.
The expected epistemic utility of f1 conditional on HB is 1 − qM(HB); conditional on HW it is −qM(HB). The corresponding numbers for f2 are −qM(HW) and 1 − qM(HW). For f3, they are 0.99[1 − qM(HB)] − 0.01qM(HW) and 0.99[1 − qM(HW)] − 0.01qM(HB). For f4, they are 0.01[1 − qM(HB)] − 0.99qM(HW) and 0.01[1 − qM(HW)] − 0.99qM(HB). For programs that have the payoff HB∨W (complete suspense), 1 − q is the appropriate epistemic utility value to use in the calculations.
Let M(HB) = M(HW) = 0.5. Given this assumption, the appropriate conditional expectations for the nine “pure” programs are given in Table II.

TABLE II

     conditional on HB   conditional on HW
f1   1 − 0.5q            −0.5q
f2   −0.5q               1 − 0.5q
f3   0.99 − 0.5q         0.99 − 0.5q
f4   0.01 − 0.5q         0.01 − 0.5q
f5   1 − q               1 − q
f6   1 − 0.505q          0.99 − 0.995q
f7   0.01 − 0.505q       1 − 0.995q
f8   1 − 0.995q          0.01 − 0.505q
f9   0.99 − 0.995q       1 − 0.505q
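As a cross-check, a short sketch recomputing Table II from the definitions in the text: each pure program maps a report (B or W) to one of the outputs HB, HW, HB∨W; utilities are 1 − qM(h) for a true verdict, −qM(h) for a false one, and 1 − q for complete suspense; the chance of report B is 0.99 given HB and 0.01 given HW (and conversely for W). Since q is left symbolic, utilities are carried as pairs (a, b) standing for a + bq.

```python
M = 0.5  # M(HB) = M(HW)

CHANCE = {"HB": {"B": 0.99, "W": 0.01},   # chance of each report given the state
          "HW": {"B": 0.01, "W": 0.99}}

def utility(output, state):
    """Epistemic utility of adding `output` when `state` is true, as (a, b) = a + b*q."""
    if output == "HBvW":          # complete suspense
        return (1.0, -1.0)        # 1 - q
    if output == state:           # true verdict
        return (1.0, -M)          # 1 - q*M(h)
    return (0.0, -M)              # false verdict: -q*M(h)

# The nine pure programs of Table I: (output on report B, output on report W).
PROGRAMS = {"f1": ("HB", "HB"),   "f2": ("HW", "HW"),     "f3": ("HB", "HW"),
            "f4": ("HW", "HB"),   "f5": ("HBvW", "HBvW"), "f6": ("HB", "HBvW"),
            "f7": ("HW", "HBvW"), "f8": ("HBvW", "HB"),   "f9": ("HBvW", "HW")}

def conditional_expectation(name, state):
    a = b = 0.0
    for report, output in zip(("B", "W"), PROGRAMS[name]):
        ua, ub = utility(output, state)
        a += CHANCE[state][report] * ua
        b += CHANCE[state][report] * ub
    return a, b

for f in PROGRAMS:
    aB, bB = conditional_expectation(f, "HB")
    aW, bW = conditional_expectation(f, "HW")
    print(f"{f}: given HB: {aB:.3g} + ({bB:.3g})q   given HW: {aW:.3g} + ({bW:.3g})q")
```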
When the prior credal probability of HB is numerically determinate,
the expected epistemic utilities of all these programs are determinable.
The program of routine expansion recommended before the outcome of sampling is ascertained prescribes the same beliefs in response to the outcome of sampling as deliberate expansion maximizing expected utility on the total evidence available after finding out the results of sampling.
However, when prior credal probability goes maximally indeterminate
and q is greater than 0.02, f1 , f2 , f3 , f6 and f9 are E-admissible programs for routine expansion. If maximizing minimum expectation among
E-admissible programs for routine expansion is deployed, f3 is recommended. By using the program f3 , the inquirer will end up coming to
judgment HB or HW depending on the outcome of observation but the
belief formed by the inquirer as to that outcome will not have served as
evidence for the decision in favor of HB or HW .
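A small continuation of the sketch: taking from the text the E-admissible programs for q greater than 0.02 with maximally indeterminate priors, and their minimum conditional expectations read off Table II, maximin singles out f3.

```python
# Minimum of the two conditional expectations in Table II for each of the
# E-admissible programs (the set {f1, f2, f3, f6, f9} is taken from the text).
E_ADMISSIBLE_MIN = {
    "f1": lambda q: -0.5 * q,
    "f2": lambda q: -0.5 * q,
    "f3": lambda q: 0.99 - 0.5 * q,
    "f6": lambda q: 0.99 - 0.995 * q,   # min(1 - 0.505q, 0.99 - 0.995q)
    "f9": lambda q: 0.99 - 0.995 * q,
}

for q in (0.05, 0.5, 1.0):
    best = max(E_ADMISSIBLE_MIN, key=lambda f: E_ADMISSIBLE_MIN[f](q))
    print(f"q = {q}: maximin recommends {best}")   # f3 at every admissible q
```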
The inquirer X need not actually make the observation. X may use an
automaton or some other reliable observer to make the observation and to
calculate the prescription as to what X should come to believe according to the program X has chosen. Using an automaton or other stooge in implementing the program for routine expansion is one way of depriving oneself of the option of reneging on the implementation of the program in the light of the information obtained by observation.
Even if X does not delegate implementation of the process to a stooge
or automaton, X may have precommitted him or herself in a manner that
deprives X of the opportunity to renege before the process of routine expansion is completed. X is to obey the instructions X set for him or herself
beforehand. Like Ulysses, X binds him or herself to the mast and sails
between Scylla and Charybdis. To the extent that X can precommit in this
fashion, X may use data as input.
The use of data as input in the context of statistical estimation problems is based on a rationale for precommitment to a policy undertaken
before information obtained from observation is acquired. Response to the
testimony of the senses and the testimony of experts and witnesses can
be understood in the setting of precommitment policies or programs for
routine decision making as well (Levi 1980a).
Normally such rationalization is unnecessary in the case of empirical
observation. The programs for routine expansion via observation are not
as a rule chosen deliberately. They are rather the product of training and
natural endowment. Agent X responds to stimuli that X need not have any
independent way of characterizing. The response may be a fit of doxastic
conviction or a disposition to have such fits. The response carries content
when it is redescribable as X’s undertaking a doxastic commitment to
believe some proposition such as that a given liquid is red.
However, programs for expansion via observation are not written in
stone. Just as an inquirer’s full beliefs should be changed when there is
good reason for doing so, programs for routine expansion ought to be
subjected to criticism, modification and improvement when there is good
reason for doing so. Modification can happen through retraining. It can also
happen with the aid of prosthetic devices such as telescopes, microscopes,
hearing devices and the like.
We may think of an inquirer X as committed by nature, nurture or prior
inquiry to a roster of programs for routine expansion at a given time. Such
programs are, in the given context, default programs in the sense that X
is committed to using such programs in routine expansion via observation
unless good reasons emerge justifying their modification or replacement.
Such good reasons could arise because doubts legitimately arise concerning the reliability of the programs in operation. Or the default programs do
not yield information sufficiently precise for the problems under investigation. But in the absence of such challenges, the default programs remain
in operation.
In the default cases, X is not using data identifiable propositionally as
input that could in some other setting be used as evidence. The input is
the stimulation from the environment. But when a modification is sought,
a distinction is introduced between the commitment that would have been
taken according to the default program and the commitment undertaken
according to the modified program. X might look at the stick in the water,
become convinced that it is straight while acknowledging that it appears
bent. The appearance need not be some episodic propositional attitude intermediate between the signal source and the conviction. Rather it alludes
to what X would have come to believe had X not been trained into a different program for routine expansion. Insofar as “The stick appears bent to
X” ascribes an attitude to X, the attitude supports a conditional judgment
that X would have come to believe that the stick is bent had X followed the
default program for routine expansion rather than the modified program X
is currently endorsing. We may call this attitude an “observation report”.
But the content of such a report is parasitic on the attitude that would have
been formed if the default program had been implemented.
When witnesses and experts are consulted in forming expansions, their
testimony, which carries propositional content, is, nonetheless, so I contend, used as input into a program for routine expansion. The content
of the testimony is used to determine the belief formed according to the
program. Thus the program might stipulate that if the expert says that p,
add information that p to the new belief state. Even so, the belief that is
added is not inferred from the premise that the expert said that p. The
expert’s saying that p is taken to be an input into a program for routine
expansion even though the saying that p is regarded as an assertion by the
expert expressing the expert’s opinion.
Observation reports have been likened by Hume and others to the
testimony of witnesses. In confidence interval estimation and hypothesis
testing, expansion via observation does use observation reports. It does
not, however, use such reports as evidence or premisses from which inferences are drawn but as intermediate inputs into a program whose eventual
output is the estimate. In our toy example, sensory stimulation produces
the belief forming response B or W . The program specifies for each of
these inputs what “estimate” of the contents of the urn to come to believe.
The testimony of the senses is used as input rather than output.
Even though the data are not used as premisses, the programs for
routine expansion utilized are tacitly or explicitly (as the case may be)
presupposed to be reliable procedures for acquiring valuable information.
Such assumptions of reliability can be called into question. When such
challenges are serious, there is a call for reconsidering the program for
routine expansion.
In observational routines, the notion of data as an intermediate stage
need not strictly speaking be assumed. The data reports are outputs of
what is taken to be the default program for routine expansion. But for the
purpose of discussing modifications of such programs, it is convenient to
be able to formulate them by means of functions from data reports to belief
acquisitions as in the case of confidence interval estimation. As long as this
manner of speaking is not taken to imply any privileged observation base
or testimony of the senses, the conceit is a useful one.
Routine and deliberate expansion superficially correspond to the foundationalist distinction between acquiring information directly and noninferentially, on the one hand, and acquiring information through inference, on the other.
But there is nothing foundationalist about the distinction as understood
here. The information obtained by routine expansion can sometimes be
obtained by deliberate expansion and vice versa. And information obtained
by routine expansion is just as susceptible to revision and is no more
certain than information obtained by inference.
We need to use programs for routine expansion because we seek information that may not be available via inference at a specific stage of
inquiry.
That is why empirical and social dimensions of inquiry are important.
We do not appeal to empirical data because information obtained from
such data is more certain and incorrigible than information obtained by
inference. To the contrary, inquirers should normally regard themselves
as incurring a risk of error in using programs for routine expansion using
the testimony of the senses. But whether they are absolutely certain that
the programs for routine expansion they use produce error free beliefs
or recognize that there is some positive probability of error, following a
program for routine expansion can lead to expansion into inconsistency.
Or so I maintain. The threat of expansion into inconsistency is tolerated
because of the opportunity to obtain new information afforded by routine
expansion via the testimony of the senses.
The significance of the social aspect of inquiry manifested in reliance
on the testimony of witnesses and experts should be seen in much the same
way. The acquisition of new information in this way is just as direct (that is
to say, noninferential) as is the acquisition of information via the testimony
of the senses.4 Yet the information obtained lacks the immunity to revision
demanded by foundationalists. And the testimony of witnesses and experts
often contradicts initially held convictions. As in the case of the testimony
of the senses, we put up with the conflict injecting capacity of relying on
the testimony of witnesses and experts because of the information such
testimony promises.
3. INCONSISTENCY IS HELLISH
Olsson points out that my principles entail that inconsistency is hell (just as
Gärdenfors maintains) in the sense that it should never be used as a standard for serious possibility. I have insisted that X’s state of full belief at t is
the standard for distinguishing serious possibilities from impossibilities at
that time. The distinction between serious possibility and impossibility endorsed by X at t constitutes the framework relative to which X’s judgments
of probability and of value at time t are formed.
The salient feature of the inconsistent state K⊥ is its uselessness as a
standard for serious possibility. Every proposition is possible and every
proposition is impossible. Every proposition is true and is also false. My
claim has been that routine expansion can yield inadvertent expansion
into inconsistency resulting in the belief state K⊥ . For X to deliberately
decide to contract from K⊥ , X should, according to my approach, deliberate from the vantage point of K⊥ . Olsson is right to complain. I do
not think, however, that the remedy is to guarantee against expansion into
inconsistency.
4. ROUTINE EXPANSION IS INEVITABLY EXPOSED TO EPISTEMIC HELL
Olsson suggests that inadvertent expansion into inconsistency be avoided. We should, instead, allow routine expansion to yield anomaly, which, as I pointed out in Levi (1991), can happen if certain programs are adopted. This can give the results of routine expansion a critical bite similar to that of the results of inadvertent expansion into inconsistency.
For the sake of simplicity of exposition, generality will be sacrificed by
resorting to the example where X deploys a program for routine expansion
such as the “matching colors” program discussed for illustrative purposes
in Levi (1991, 3.4). I shall, however, consider situations where the default
program is not called into question and situations where it is.
X knows that the outcome of observation of the color of a given liquid
is one of three responses: R, W , B. These three responses are coming
to be certain that the liquid is red, white and blue respectively. In effect,
they are three possible expansions of X’s initial state of full belief initiated in reaction to sensory stimulation. We need not make any specific
assumptions as to how X came to be subjected to responding to sensory
stimulation in that way. Presumably some mix of visual processing and
training is involved. Perhaps, the program has been modified in previous
deliberations. In the context under consideration, however, the program is
a mode of information acquisition that is a default program for routine
expansion concerning colors.
X may sometimes deliberately modify these responses according to
some plan adopted before making observation. The result is a new program
for routine expansion. Sometimes these modifications may be represented by describing the new, modified responses as adjustments of the old
responses according to some rule. When that happens, the three old responses R, W , and B are no longer instances of coming to be certain or
fully believing. They are reportings that the color is red, white and blue
respectively or appearances of red, white and blue. The new responses to
sensory stimulation are adjustments or modifications of the old where the
adjustments are representable (at least on some occasions) as functions of
the old responses (the reportings) to the new responses that are expansions
of X’s initial state. So, for example, X might lose confidence in X’s ability
to distinguish white from blue and respond to both W and B by suspending
judgment as to whether the color of the liquid is white or blue. Yet, W and
B might be recognized as distinct kinds of reports. (See note 2.)
Suppose (for the sake of the argument) that X is able to adjust the
default so as to come to be certain that the color is red, white, blue or
any Boolean combination of these. This kind of adjustment, so we shall
suppose, may be made in response to any one of the three reports or responses of the default program. For example, if X responds to observation
by becoming certain that the color of the liquid is red (R), X might readjust and become certain that the liquid is blue or might suspend judgment
between red and blue. Such readjustments may be decided upon prior to
making an observation according to a program.
Whatever the reasons might be for reconsidering a default program,
when the reasons warrant considering adjustments, X faces a choice
between many programs. For each of the three observational responses
in our toy example, there are eight possible expansion strategies. Hence,
there are 24 functions representing programs for routine expansion.
Using the default program is an expression of conviction that the program is a “reliable” source of new information. Presumably, the chances
of coming to believe that the color of the liquid is not red, given that it is, are tolerably low. The same is true for the other colors. In challenging the
program, X may question these assumptions of reliability. Alternatively X
may consider whether even if the chances are correct, there is not some
way of improving the program.
In this discussion, I shall pretend that the information available to X
is very precise. In real life, the information about chances will tend to be
much less precise; but I hope realism will not be compromised excessively.
I shall suppose that X is certain that conditional on the liquid being red, the chances of R, W and B are 0.99, 0.009 and 0.001 respectively; conditional on the liquid being white, 0.001, 0.99 and 0.009; and conditional on the liquid being blue, 0.009, 0.001 and 0.99.
In addition to having some such view of the reliability of the default
program for routine expansion (that is to say, the chance distribution over
the distinct reports), X also has, prior to observation, some view of the
color of the liquid. In Levi (1991), I first considered a case (I shall call it
Case 1) where X is in suspense as to whether the color of the liquid is red,
white or blue.
X is in no doubt that the color of the liquid is exactly one of the three
colors mentioned and that X’s response to making an observation will be
R, W or B. X is absolutely certain that the chances are as specified given
HR (liquid is red), HW (liquid is white) and HB (the liquid is blue).
So much is clear explicitly or implicitly in Levi (1991, 3.4). I did not
make clear the importance of the difference between the version of case 1
where X is committed to using the default matching color program without
question and the version of case 1 where that is subject to challenge. It is
only in the latter case that evaluating the default program against the 23
rival programs could become relevant.
So now let us consider a situation where X is committed to using the
default program without challenge. From X’s point of view, there is no
serious possibility that X will expand into inconsistency provided that X
implements the default program correctly. In any particular context where
X does adopt the program, X does take for granted that the default program
will be implemented. So in the setting where X initiates the running of the
program, say by looking at the liquid, X rules out the prospect of importing
inconsistency as a serious possibility.
There is a logical possibility that this can happen even if X makes
no mistakes in implementing the default program. By way of contrast, in
deliberate expansion, expansion into inconsistency cannot happen as long
as X has started from a consistent belief state, has coherent probabilities, made no mistakes in calculations required when computing expected
utility and, more generally, has proceeded rationally. It is in this sense
that routine expansion is potentially conflict (i.e., inconsistency) injecting
whereas deliberate expansion is not.
Nonetheless, from X’s point of view in the context at time t when the
program is to be implemented, there is no serious possibility that an inconsistency will result from using the program on that occasion. Why then
should X be concerned by a mere logical possibility that is not a serious
possibility? At that time, indeed, X should not be concerned.
Consider, however, some time t ∗ well before the context where X looks
at the color of the liquid. The default program for routine expansion is one
that X can use in looking at the color of other liquids or, indeed, other
objects. If X were to consider whether the default program can inject conflict (i.e., inconsistency), X would acknowledge commitment to the view
that this is not a serious possibility conditional on proper implementation
of the program. However, X is not committed to ruling out the prospect
of improper implementation in all future applications. In general, X will
countenance the serious possibility of incorrect application of the program
occurring. This is so, even though X is convinced at t ∗ that at the time t
(later than t ∗ ) of the application X will be certain that X is implementing
the program correctly.
At t ∗ , X may recognize as a serious possibility, therefore, that X will
come to believe at t that the liquid is green and expand into inconsistency.
Or at t ∗ , X may recognize as a serious possibility that X will respond by
suspending judgment concerning the color. That is to say, X might fail to
report R, W or B. Both eventualities lead to inconsistency in X’s state of full belief at t. They are serious possibilities when assessed at t∗ although not when assessed at t, the time at which X elects to make the observations.
The shift in the assessment of serious possibility between t ∗ and t is
due to X’s conviction at t that X is in control of the implementation of
the program. If X is in any doubt about this, X cannot run the program.
X may give up on attempting to find out about the color of the liquid at
t by routine expansion or X may persist in trying to do so. Such trying is
itself the implementation of a different program for routine expansion over
which X takes for granted X has control. And the remarks I have just made
can then be reapplied to that program.
Of course, the prospect of expanding into inconsistency is not a matter
concerning which X should be concerned at t given that it is not, according
to X, a serious possibility. It is only so at t ∗ . At t, it is merely a logical
possibility.
According to Olsson, if X at any point prior to implementing the
program for routine expansion recognizes as a serious possibility that it
will lead to inconsistency, X should avoid implementing the program so
designed. If I am right, prior to implementing the program for routine
expansion, the program will be regarded as potentially conflict injecting.
The conclusion seems to be that routine expansion ought to be prohibited.
Neither Olsson nor I are prepared to endorse this conclusion.
The logical possibility of such expansion into inconsistency is present
even in the other version of case 1 where the default program has been
challenged and X is contemplating a choice of one of the 24 programs.
That there are only 24 programs available presupposes that X will either
report R, W or B and that the potential answers to the question under
consideration are Boolean combinations of HR , HW and HB .
Before proceeding further, let us consider how a choice between the 24
programs would be evaluated according to the account of the goals of inquiry
I favor.
If expansion is adding h and forming the deductive closure, the epistemic utility is 1 − qM(h) if the color of the liquid entails the truth of h
and is −qM(h) if the color entails the falsity of h. (Here the M-function
is a probability distribution over the strongest consistent potential answers
to the question under investigation. The index of boldness q reflects the
relative importance of avoidance of error and the acquisition of valuable
information. It may take any nonnegative real value less than or equal to
1.)
Given the probabilities assigned to R, W and B conditional on the color
of the liquid, an expected epistemic utility for implementing a program
conditional on the color of the liquid may be computed.
We shall suppose that if h is any one of the three hypotheses as to the color of the liquid, M(h) = 1/3. Given the initial belief state, there are 5 other distinct hypotheses relevant to the issue under investigation (the color of the liquid), including expansion into inconsistency. The M value for expansion into inconsistency is 0; for complete suspense it is 1; and for each of the remaining 3, the value is 2/3.
The expected epistemic utility given that the color is red (white, blue)
is thus well defined.
Assuming that the three hypotheses about the color of the liquid carry
equal informational value 1 − M(h) = 2/3, the expected epistemic utility
of implementing the program conditional on the color being red (white,
blue) is equal to 0.99 − q/3.
If an unconditional credal probability distribution over the hypotheses
that the color of the liquid is red, white or blue is available, the expected
epistemic utility of running the matching color program for routine expansion could be evaluated and compared with rival programs. However,
if the program for routine expansion to be adopted is to be convincing
to a broad range of potential and actual inquirers, the program should be
endorsable without presupposing a determinate prior probability. In most
contexts where the program is to be improved, the credal state should be
indeterminate. There may be no way to identify a best among the rival
programs in the sense in which a best program maximizes expected epistemic utility according to all prior distributions that are “permissible” in
the sense that they are not ruled out of consideration. Some programs can
be weeded out because they are never best according to any permissible
determinate credal probability distribution. But this will leave many alternative programs. One way to choose between these alternative E-admissible
programs is to consider the minimum expected epistemic utility conditional on the color being red, white or blue. The suggestion is to maximize
the minimum expected epistemic utility (conditional on the true “state”)
among the E-admissible programs.
Thus, the minimum expected utility for the default matching color program is 0.99 − q/3. This program will be chosen over recommending
suspense between all three hypotheses (that carries minimum expected
utility of 1 − q) as long as q is greater than 0.015. Suspending judgment
between red and white if R, white and blue if W, and blue and red if B
carries minimum expected utility of 0.999 − 2q/3. The matching color
program wins as long as q is greater than 0.027.
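A sketch of the first of these comparisons, assuming the chance matrix given earlier (each true color yields the matching report with chance 0.99) and M(h) = 1/3 for each color hypothesis; it reproduces the 0.99 − q/3 minimum for the default program and the q > 0.015 threshold against full suspense:

```python
CHANCES = {  # chance of report R, W, B given the true color (from the example)
    "red":   {"R": 0.99,  "W": 0.009, "B": 0.001},
    "white": {"R": 0.001, "W": 0.99,  "B": 0.009},
    "blue":  {"R": 0.009, "W": 0.001, "B": 0.99},
}
MATCH = {"R": "red", "W": "white", "B": "blue"}  # the default matching program

def min_expectation_matching(q):
    """Minimum over the three colors of the program's expected epistemic utility."""
    return min(
        sum(p * ((1 - q / 3) if MATCH[r] == color else -q / 3)
            for r, p in row.items())
        for color, row in CHANCES.items()
    )

def min_expectation_suspense(q):
    return 1 - q   # recommending suspense among all three colors, come what may

for q in (0.01, 0.015, 0.02, 0.1):
    m, s = min_expectation_matching(q), min_expectation_suspense(q)
    verdict = "matching" if m > s else "suspense (or tie)"
    print(f"q = {q}: matching {m:.4f} vs suspense {s:.4f} -> {verdict}")
```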
So let q be greater than 0.027. There are still some situations where
the default matching color program could be rejected according to maximin among the E-admissible options. Suppose the program recommends
expanding by adding the hypothesis that the color of the liquid is red
regardless of the sensory response. The minimum expectation of this program is −q/3. However, if the unconditional probability that the color is
red is very close to 1, the unconditional expected utility of this program
approximates 1−q/3 and, hence, could exceed the unconditional expected
utility (0.99 − q/3) of the matching color program for this case.
If, however, we had a numerically determinate unconditional probability distribution over the three hypotheses about the color of the liquid
so decisively skewed in favor of red (or, for white or for blue), deliberate
expansion by adding the hypothesis that the color is red would be justified.
Routine expansion would become unnecessary. The situations we should
be considering are those where either the unconditional probability distribution over the three hypotheses is quite indeterminate or not skewed.
In those cases, a sufficiently bold inquirer could readily warrant using the
matching colors program for routine expansion even in those cases where
there is some call for reconsideration.
In sum, in a wide variety of case 1 predicaments, the default matching colors program best promotes the aims of the inquirer even if one
contemplates all 24 programs. Nonetheless, it is logically possible that
inadvertent expansion into inconsistency occurs. From the inquirer X’s
point of view, there is no serious possibility that this can happen when
the default matching color program is used. So X need take no step to
prevent it. Yet, if X were to consider the running of the program at t from
a vantage point at t ∗ well before that moment when X may coherently be
in some doubt as to whether X will successfully implement the program, X
should countenance expansion into inconsistency as a serious possibility.
Olsson suggests that X should avoid inadvertent expansion into inconsistency. He thinks inconsistency is too hellish. But if from X’s point of
view at t there is no serious possibility of expanding into inconsistency,
X has no reason to take steps to avoid expansion into inconsistency. To
explain why Olsson thinks otherwise, let us turn to case 2.
In some collateral inquiry, X expands the belief state of case 1 by
adding the proposition that the color of the liquid is not blue. Conditional
on the color being red, the chance of B is 0.001 and conditional on white,
the chance of B is 0.009. In such a predicament, X becomes convinced that
conforming to the default program will lead to error when implementation
yields B.
This does not mean that X should reconsider the default program when
confronting case 2. In that event, it is clear that from X’s point of view
there is a serious possibility that use of the default program can lead to
inadvertent expansion into inconsistency.
Olsson thinks that the presence of such a serious possibility is a
sufficient reason for taking steps to change the program. I disagree.
Here is the case for X’s concluding that X should retain the default program. (This argument is advanced in less elaborate form in Levi
(1991).)
X could contemplate a deliberate inductive expansion seeking to predict whether the report to be made will be R, W , or B. So if the
informational values of R, W and B are all 2/3, the hypothesis that a
report B will occur upon making an observation will be rejected as long
as q > 0.027. We are supposing that X is bolder than that if X adopts the
matching color program in case 1. So, prior to implementing a program for
routine expansion at t, X may be taken to have deliberately expanded so
as to rule out B.
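A small arithmetic check of where the 0.027 figure comes from, on my reconstruction: with blue ruled out, Q(B) is at most 0.009 (its chance given white, per the chance matrix assumed earlier), and the report B is rejected for every permissible prior when Q(B) < qM(B) with M(B) = 1/3.

```python
M_B = 1.0 / 3.0    # informational-value-determining probability of report B
Q_B_max = 0.009    # chance of B given white; given red it is only 0.001

# Reject B for every permissible prior once Q_B_max < q * M_B, i.e. once
# q exceeds the threshold below.
q_threshold = Q_B_max / M_B
print(f"q threshold = {q_threshold:.3f}")   # -> 0.027, the figure in the text
```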
After the deliberate expansion, the default matching color program
continues to beat all others with the following exceptions: In the belief
contravening case where B occurs, modified program 1 recommends that
X suspend judgment between the color of the liquid being red and being
white, i.e., fail to expand at all. Modified program 2 recommends that X
become certain that the color is red were B to occur and modified program
3 recommends that X become certain that the color is white were B to
occur.
Keep in mind, however, that from X’s point of view once the deliberate
expansion ruling out a B report has been implemented, there is no serious
possibility that B will occur. Consequently, none of the modified matching
colors programs are better than the unmodified program according to X’s
point of view after deliberate expansion ruling out B.
As in case 1, there is no serious possibility according to X that implementing the default matching colors program will lead to inconsistency.
So once more there is no reason for X to worry about expanding into
inconsistency. Yet there is a logical possibility that X will do so. And prior
to the deliberate expansion ruling out B, there is a serious possibility.
Olsson points out, however, that from X’s point of view prior to deliberately expanding by ruling out a B report as a serious possibility, the use
of the default rule after such expansion might lead to inconsistency. Olsson
concludes from this that X should never have engaged in the deliberate expansion ruling out the prospect of B as a serious possibility. The deliberate
expansion should be avoided in order to avoid inadvertent expansion into
inconsistency not at the next change but at least two steps away.
Suppose that X follows Olsson’s advice. Then the default program
should be modified in order to prevent expansion into inconsistency at the
next step. In Levi (1991), I pointed out that under the conditions required
by Olsson, version 1 of the modified program yields a higher expected
epistemic utility than the default program regardless of whether the color
of the liquid is red or white. If the version 1 modified program is adopted,
inadvertent expansion into inconsistency is avoided.
In the version 1 modified program, the erstwhile expansion into inconsistency is replaced by responding to B by remaining in suspense between
the hypothesis that the color of the liquid is red and that it is white. X
should be very surprised that no new information is extracted upon making
an observation of the color. Such surprise is a mark of a possible anomaly.
If there is an explanation of the anomaly that X currently rules out as impossible but would judge to be a good one were it true, the inquirer X may
have good reason to contract so as to convert the conjectured explanation into
a serious possibility. In Levi (1980, 1991), I argued that seeking explanations that remove anomalies is a good reason for contraction alternative to
retreating from inadvertent expansion into inconsistency.
Thus, using the version 1 modified program does not avoid the pressure on the investigator to recognize the proposition that the liquid is
blue as a serious possibility. That hypothesis is a potential explanation of
the anomaly. Olsson thinks avoiding epistemic hell in this way is a huge
benefit.
I failed to point out in 1991, however, that version 3, like version 1, is also E-admissible. Moreover, it weakly dominates version 1. This
does not decisively support version 3 over version 1 but it may persuade
some to choose version 3 given that the prospect that B occurs is not ruled
out.5 This program recommends becoming certain that the color is white
if B occurs. Will X be able to tell whether B has occurred? As long as X
can recognize version 3 as a modification of the default program, I do not
see why he should not. X could, therefore, regard the occurrence of B as
anomalous. As in the case where the version 1 modification is deployed,
there is an incentive to contract by recognizing the proposition that the
liquid is blue as a serious possibility. Again from Olsson’s point of view,
this is a major advantage.
I remain unclear as to what the advantage could be. Whether X adopts
the version 3 or the version 1 modification, there still remains a logical
possibility of expansion into inconsistency just as in the use of the default
program even if no deliberate expansion is adopted. Suppose that none
of the responses countenanced as seriously possible occurs. Suppose, in
particular, that X reports the color of the liquid to be orange. X will expand
into inconsistency.
This prospect is not a serious possibility according to X at t when the
modified program is to be implemented. But it is a serious possibility at t ∗ .
Since Olsson thinks that X at t ∗ should take steps to avoid such expansion
into inconsistency, X should avoid using the modified program whether it
is version 1 or version 3.
Olsson might respond that X should be in a belief state where the orange response is a serious possibility along with all the others. Indeed X
should embrace the matching color program for all colors in the spectrum
and refuse to rule out the possibility of any response even though X believes the color to be red, white or blue. Case 1 is then seen to be treatable
as a version of case 2. X should not deliberately expand by ruling out an
orange response or any other response beforehand because to do so will
yield expansion into inconsistency two steps down the line.
But even this will not remove expansion into inconsistency as a logical
possibility or as a serious possibility at some point t ∗ when failure to implement the program properly is judged to be a serious possibility. It is a
preprior serious possibility that X runs the comprehensive matching color
program and in response forms a belief that a bell is ringing.
There is no way to avoid this conclusion short of refusing to countenance implementation of any coherent routine. If useful modes of routine
expansion are to be allowed, inquirers cannot hope to rule out the preprior
possibility of expanding into inconsistency in this fashion even when they
implement the programs with perfect rationality.
The trouble is that perfect rationality does not preclude the logical
possibility that one of perfectly rational X’s background assumptions
guaranteeing the impossibility from X’s point of view of expansion into
inconsistency is false. More relevantly such a logical possibility is a serious
possibility from X’s point of view at some earlier time. And this is enough,
according to Olsson, to raise the question whether there is anything that X
can do to extricate him or herself from the predicament.
If routine expansion is legitimate, no program for routine expansion can guarantee against the preprior serious possibility of expansion into inconsistency according to X at some
earlier time t ∗ . Olsson is concerned to guarantee against the serious possibility of expanding into inconsistency via routine expansion not merely
from X’s point of view immediately prior to implementation but one or
more steps before that. Olsson seeks to do this by devising routines that
preclude such preprior serious possibility of expansion into inconsistency.
I have been arguing that this program cannot be implemented. Yet, Olsson
is right to insist that epistemic hell is hell. So what is to be done?
5. FINESSING HELL
The point I have emphasized in the previous section is that insisting, as
Olsson does, that X should take steps to avoid expanding into inconsistency as long as it is a serious possibility at some time t ∗ prior to the
moment t is futile. Routine expansion would have to be avoided.
Instead, however, of avoiding expansion into inconsistency, one might
consider a requirement on all routine expansion imposed prior to implementation as to how retreat from inconsistency is to proceed. Any
deliberation involved in characterizing how this is to be done could be
based on the use of any consistent standard for serious possibility, that is, on any consistent state of full belief.
As the example of routine expansion illustrates, it is possible for X to
precommit to changing X’s point of view as the outcome of some process
over which X has no control once the process has been initiated. X may
have surrendered control to an automaton, stooge or random process. X
may have bound him or herself by the bonds of commitment.
Peter Gärdenfors advocated AGM revision as a way of finessing epistemic hell. The only way I understand the finesse is by treating the revision as the result of a program in which the input was some prompt to add information incompatible with the initial belief state and the output retreats from inconsistency into an AGM revision. That is to say, I interpret Gärdenfors’s idea as a species of routinizing retreat from inconsistency.
Olsson, in collaboration with S. O. Hansson, saw the same difficulty with Gärdenfors’s suggestion that I myself had seen independently – to wit, that
it gave priority to the new item of information when on some occasions the
new “datum” should be thrown out. Hansson and Olsson had suggested the
idea of “prioritized” revision where priority may be given on some occasions to
the new datum and on other occasions to the initial information. I had
favored including the option of suspense between the two moves.
As I understood the situation previously, Gärdenfors’s proposal was
unsatisfactory because it did not allow for the possibility that the datum
is thrown out or that X moves to suspense between removing background
information and removing the new datum. It also seemed to me that once
one abandons the single output model, the routinization of the response to
epistemic hell is unsatisfactory. One needs a way to consider and deliberate
between several alternatives. But a deliberate decision theoretic approach
is precisely the one that paves the road to epistemic hell. In this paper,
I am suggesting that perhaps the idea I attributed to Gärdenfors (namely,
routinizing the response to epistemic hell) can work even if there is more
than one possible alternative to consider. To reexplore this line of thinking
entails a partial retrenchment from my earlier thinking.
X can recognize the serious possibility of expanding into inconsistency prior to the time of implementation because X at that time can with
good sense recognize it to be a serious possibility that the routine will
not be properly implemented later on. Yet at the time of implementation, X takes for granted that proper implementation is subject to X’s
control so from that point of view there is no serious possibility of implementation failure. Moreover, X at the earlier time should assume that
X at the later time will judge implementation failure to be epistemically
(seriously) impossible. Consequently, at the earlier time, X will recognize
X’s subsequent inadvertent expansion into inconsistency to be a serious
possibility.
Deliberate extrication from epistemic hell is hopeless as Olsson reminds us. However, we cannot sanitize the use of routine expansion against
the spectre of epistemic hell as Olsson seems to think. A strong case can be
made, therefore, for regarding the planning of a precommitment strategy
as requiring a game plan for contracting from inconsistency to be added to
every program for routine expansion.
Return to the case 2 predicament. Suppose X contemplates deliberate
expansion whose recommended conclusion is ruling out the response B.
As Olsson points out, prior to the deliberate expansion, it is a serious
possibility according to X that such expansion will be followed by routine
expansion into inconsistency by making the report B and coming to believe that the color of the liquid is Blue (as the default program mandates).
At that point X can precommit to a procedure for extricating X from the
inconsistency should it arise. Retreat from inconsistency can in this sense
be routinized.
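An illustrative sketch, my own schematic rather than anything in Levi’s text, of what such routinization might look like: a program for routine expansion is bundled in advance with a contraction recipe that fires if the expansion turns out to be inconsistent.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

State = FrozenSet[str]  # a belief state crudely represented as a set of sentences

@dataclass(frozen=True)
class RoutineProgram:
    expand: Callable[[State, str], State]   # report -> expanded belief state
    retreat: Callable[[State], State]       # precommitted contraction recipe

def matching_expand(state: State, report: str) -> State:
    # The "matching colors" default: believe the color just reported.
    return state | {f"the liquid is {report}"}

def suspense_retreat(state: State) -> State:
    # Option III of Section 1 as the default game plan: give up both the new
    # datum and the background conviction it contradicts (here, crudely,
    # every belief about the liquid's color).
    return frozenset(s for s in state if not s.startswith("the liquid is"))

program = RoutineProgram(matching_expand, suspense_retreat)

K: State = frozenset({"the liquid is red or white"})  # background conviction
K_plus = program.expand(K, "blue")                    # inadvertently inconsistent
inconsistent = {"the liquid is blue", "the liquid is red or white"} <= K_plus
K_next = program.retreat(K_plus) if inconsistent else K_plus
print(K_next)  # -> frozenset(): suspense about the color, per the precommitted plan
```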
In point of fact, X can precommit to such a procedure even when X
does not countenance expansion into inconsistency as a serious possibility.
No matter what X’s point of view prior to routine expansion, X can coher-
ently formulate a policy for retreating from an inconsistent expansion even
if the supposition that X will expand into inconsistency is belief contravening and, hence, not a serious possibility. Moreover, the instructions for
such a procedure can conform to the recommendations I have suggested
for retreat from inadvertent expansion into inconsistency.
As I have argued in connection with my discussion of the Peircean
Messianic Realism, X should not be concerned to avoid error except at
the next change. So the precommitment program should not be concerned
with whether the retreat from inconsistency would or would not avoid error
as assessed. From the point of view prior to implementing the program,
X can coherently identify the relevant contractions from inconsistency
as I have proposed them. And the methods for assessing loss of damped
informational value can be formulated as I have suggested.
The procedure for retreating from inconsistency will, of course, vary
depending upon how the inquiring agent expanded into inconsistency. And
since such inconsistency will not even be anticipated as a serious possibility on many occasions, we cannot plausibly suppose that the inquirer has
prepared a program for contracting from inconsistency in advance. Consequently, it may be objected that X has no good reason for preparing a
program for retreating from inconsistency.
Nonetheless, insistence on precommitment to a game plan for retreat
from inconsistency can be defended as a general policy. From any consistent point of view prior to the point of view when routine expansion is
contemplated or initiated, adopting a program for routine expansion in a
way that leads to inconsistency is a serious possibility. This is due to the
serious possibility from this preprior point of view that implementation
will not be perfect.
To be sure, at most one can prescribe a general recipe for contracting
from inconsistency when the occasion arises. Perhaps, the recipe is the one
I offered above or some variant thereof. Moreover, the details of the recipe
in the case of a specific routine could have been formulated from the point
of view at t when the use of the program for routine expansion is contemplated and implemented. X may be committed to some such specific
routine even if X fails to explicitly acknowledge it.
Such considerations suggest that we can require the inquirer to contract from inconsistency according to the recipe I have proposed or some
improved variant on it depending on how informational value is assessed and on the source of the conflict inducing the inadvertent expansion into inconsistency. Inquirer X is in this sense precommitted.
This brings us to the issue of implementing this precommitment policy
for retreating from inconsistency. The need for implementation arises only
when X has inadvertently expanded into inconsistency.
At that stage, X is in epistemic hell already. Olsson contends that on my
view X may legitimately recover from inconsistency only if in doing so he
fulfills all his commitments. He is right. But the critical question is which
commitments? In implementing a precommitment policy, the commitments
are those undertaken prior to expansion into inconsistency. This is so even
if implementation occurs during the inquirer’s sojourn in epistemic hell
(where X may use some stooge or automaton as an aid in implementation).
Olsson’s objections to my discussion of retreat from inconsistency are
predicated on my former insistence that contraction from inconsistency is
a deliberate decision problem. If it is, then Olsson’s complaints are entirely
legitimate. Contraction from inconsistency would have to be legitimated
incoherently from the inconsistent point of view.
Once it is appreciated that contraction from inadvertent expansion into
inconsistency is routine, there is no need to invoke K⊥ as a standard for
serious possibility to justify a contraction strategy. I do not see any necessity to modify in any further substantial way the account of contraction
from epistemic hell I have favored.
NOTES
1 Olsson fails to understand (p. 13) why I do not presuppose that the inquirer has direct cognitive access to observation reports. I deny direct “cognitive access” to anything
unmediated by background assumptions. This view I share with the classical American
Pragmatists. Routine expansion is intended to model direct but mediated acquisition of
information. I do agree that routine expansion is non-inferential acquisition of new beliefs
upon implementation of a program. The execution of such a program involves the making
of responses to some sort of “input”. But neither the initial input nor any intermediate
step need (though it may) involve any further episodic propositional attitude occurring
additional to the expansion. The introduction of the distinction between appearance and
reality becomes relevant when programs for routine expansion fail and are modified. I
discovered in my thirties that I was mildly color blind and learned to be more cautious
in making discriminations between dark blue, grey, green and black. A pair of pants that
appeared dark blue, I believed to be dark blue, grey, black, or green. By this I meant that in
cases where I would have initially responded by forming the belief that the pants are black, I now suspend judgment between these four colors.
2 The account of epistemic utility in Levi (1967a) was modified in Levi (1967b) and the
latter remains my favored account for cases where epistemic utility is determinate. In Levi
(1980), I allowed for indeterminate epistemic utilities and made adjustments in my account
of inductive expansion to accommodate them.
3 See Levi (1980b) for a discussion of this matter.
4 “Direct” is to be understood in the epistemic sense just indicated. Information acquired
directly via the senses or the testimony of witnesses is not necessarily linked with the signal
by a “direct” causal link. I am not sure whether there are such things. But if there are, I
am not requiring any such direct causal dependence. An inquirer may trust a program for
routine expansion on the basis of conviction that the program is reliable without a good
explanation or understanding of its reliability. Of course, the quest for good explanations
of the reliability of a program can result in improvements not only of our understanding
but of the programs for routine expansion themselves.
5 Olsson considers a program P4 according to which the belief that the color is Red is adopted in response to B. This program does not maximize minimum expectation. The
version I am suggesting does.
REFERENCES
Gärdenfors, P.: 1988, Knowledge in Flux. Modeling the Dynamics of Epistemic States, MIT
Press.
Levi, I.: 1967a, Gambling with Truth, An Essay on Induction and the Aims of Science,
Knopf. Reprinted by MIT in 1973 in paperback.
Levi, I.: 1967b, ‘Information and Inference’, reprinted in Levi (1983).
Levi, I.: 1980a, The Enterprise of Knowledge, MIT Press.
Levi, I.: 1980b, ‘Induction as Self Correcting According to Peirce’, in D. H. Mellor (ed.),
Science, Belief and Behaviour: Essays in Honour of R. B. Braithwaite, Cambridge
University Press.
Levi, I.: 1983, Decisions and Revisions, Cambridge University Press.
Levi, I.: 1991, The Fixation of Belief and Its Undoing. Changing Beliefs Through Inquiry,
Cambridge University Press.
Olsson, E. J.: 2003, ‘Avoiding Epistemic Hell: Levi on Testimony and Consistency’,
Synthese 135, 119–140.
I. Levi
Department of Philosophy
Columbia University
700 Philosophy Hall
New York, NY 10027-6900
USA