Bridging Rationality and Accuracy
Miriam Schoenfield
(Draft of 4/13/14)

1. Introduction

Rationality and accuracy have to be connected somehow. It’s not a coincidence that rational people tend to do a better job navigating the world than irrational people. But how are they connected? The aim of this paper is to explore this question. The paper is structured around three sub-questions:

Preference Question #1: Why should we generally prefer rational credences to irrational credences?

Preference Question #2: Why should we generally prefer credences based on our total evidence to credences based on a proper subset of our evidence?

Constitutive Question: What constitutive connection is there, if any, between rationality and accuracy?

For the sake of simplicity I will make the following two assumptions: First, precision – the claim that (rational) agents’ doxastic states are representable by precise credence functions, and second, uniqueness – the claim that there exists a unique function from evidence to doxastic states which assigns to each body of evidence the doxastic state that is rational given that evidence. Although I think that (a version of) the arguments I develop in this paper are available to those who reject precision and uniqueness, I will not argue for this here.

One of the main takeaway messages from the arguments in this paper will be that what we say about how rationality and accuracy are connected will depend heavily on what we think about the claim that agents are rationally required to be certain of all of the facts about rationality. This claim (which will be made more precise later) I call Rational Omniscience.

The paper is divided into three parts. In Part I, I introduce some terminology and make the preference questions more precise. Part II explores what can be said about the three questions assuming the truth of Rational Omniscience, and Part III examines how these questions can be answered given popular views about higher order evidence that are inconsistent with Rational Omniscience.

Part I – Setting Up The Questions

2. Terminology

Before proceeding to the arguments, I need to introduce some terminology.

(a) Epistemic Value (Accuracy)

An epistemic value function (or scoring rule), A, is a function that takes as input a credence function, c, and a state of the world, s, and outputs a number, A(c, s), representing how accurate the credence function is in that state. Intuitively, we can think of the epistemic value of a credence function as a measure of its “distance” from the truth. Assigning higher credences to truths and lower credences to falsehoods makes for greater accuracy.

(b) Expected Epistemic Value (Expected Accuracy)

The expected accuracy of an arbitrary credence function, c, is calculated relative to an epistemic value function A and the agent’s credence function p as follows (S is the set of possible states s):

EA_{A,p}(c) = Σ_{s ∈ S} p(s)A(c, s)

In other words, the expected accuracy that an agent assigns to some credence function c is the sum of the values c would have in states s, weighted by the agent’s credence that those states will obtain.
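To fix ideas, here is a minimal numerical sketch of the two definitions above. It uses Python and the (negative) Brier score purely for illustration; nothing in what follows depends on this particular choice of epistemic value function, and all of the numbers are invented.

```python
# A minimal sketch of definitions (a) and (b), using the (negative) Brier
# score as the epistemic value function A. The scoring rule and the numbers
# are illustrative only; only the credence in RAIN is scored here.

states = ["rain", "no_rain"]

def truth_value(proposition_true_in, state):
    return 1.0 if state in proposition_true_in else 0.0

def brier_accuracy(c, state):
    """A(c, s): negative squared distance of c's credence in RAIN from RAIN's
    truth value in s (higher = more accurate)."""
    v = truth_value({"rain"}, state)
    return -(c["rain"] - v) ** 2

def expected_accuracy(p, c):
    """EA_{A,p}(c) = sum over states s of p(s) * A(c, s)."""
    return sum(p[s] * brier_accuracy(c, s) for s in states)

p = {"rain": 0.7, "no_rain": 0.3}   # the agent's own credences
c = {"rain": 0.4, "no_rain": 0.6}   # some other credence function

print(expected_accuracy(p, p))  # ≈ -0.21 : p's expected accuracy by its own lights
print(expected_accuracy(p, c))  # ≈ -0.30 : c does worse by p's lights
```

By p’s own lights its expected accuracy is about −0.21, while c’s is about −0.30; subsection (d) below explains why, for strictly proper value functions, this ordering is no accident.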
(c) Credence Functions Under Descriptions

Sometimes we’re interested in the expected accuracy of a credence function under some description. For example, I might want to calculate the expected accuracy of Jane’s credence function, or the credence function I will have tomorrow, even if I don’t know which credence functions satisfy these descriptions. The expected accuracy of a credence function under description D is:

EA_{A,p}(D) = Σ_{s ∈ S_D} p(s)A(D(s), s)

where S_D is the set of states s in which some credence function satisfies D, and D(s) is the credence function that satisfies D in s.

In what follows the distinction between the expected accuracy of credence functions specified as such and credence functions specified under a description will be important. When I talk about the expected accuracy of a credence function specified as such I am assuming that the agent is aware of the identity of the credence function in question. (That is, the agent is aware that the credence function in question assigns, for example, credence 0.1 to p, credence 0.72 to q, and so forth.) Lower case italicized letters are used as stand-ins for credence functions specified as such. When I talk about the expected accuracy of a credence function under a description, I leave it open whether or not the agent is aware of the identity of the credence function in question.

(d) Strictly Proper Value Functions

There are many different epistemic value functions, and my arguments do not rely on the use of any particular one. All I assume is that the epistemic value function is strictly proper. Strictly proper value functions are ones according to which every probability function regards itself as more expectedly accurate than any other probability function specified as such. This means that any probability function, f, will assign greater expected accuracy to itself than to a distinct probability function g. However, it may regard as more expectedly accurate some other function specified under a description such as “the probability function that assigns 1 to all truths and 0 to all falsehoods.”[1]

[1] See Wallace (2006), Gibbard (2008), Moss (2011) and Horowitz (forthcoming) for discussion of the motivation for using strictly proper value functions.
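The contrast between a credence function specified as such and one specified under a description can be made vivid numerically. The following sketch (again Python with the Brier score, for illustration only; the choice of rule and numbers are not part of the paper’s official machinery) checks that a probability function beats every distinct credence specified as such by its own lights, while the function specified under the description “assigns 1 to all truths and 0 to all falsehoods” beats it.

```python
# A sketch of what strict propriety amounts to, using the Brier score for the
# single proposition RAIN. Illustrative only.

def brier_accuracy(cred_in_rain, rain_truth_value):
    return -(cred_in_rain - rain_truth_value) ** 2

def expected_accuracy(p_rain, cred_in_rain):
    # EA relative to an agent whose credence in RAIN is p_rain.
    return (p_rain * brier_accuracy(cred_in_rain, 1.0)
            + (1 - p_rain) * brier_accuracy(cred_in_rain, 0.0))

p = 0.7
# Specified as such: p beats every other credence x in RAIN by its own lights.
assert all(expected_accuracy(p, p) > expected_accuracy(p, x)
           for x in [i / 100 for i in range(101)] if x != p)

# Under a description: "the function that assigns 1 to truths and 0 to falsehoods"
# assigns 1 to RAIN in the rain-state and 0 in the no-rain state.
ea_omniscient = p * brier_accuracy(1.0, 1.0) + (1 - p) * brier_accuracy(0.0, 0.0)
print(ea_omniscient)              # -0.0 : perfect accuracy in every state
print(expected_accuracy(p, p))    # ≈ -0.21
```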
3. Refining the Preference Questions

The preference questions ask why we should, in general, prefer rational credences to irrational ones, and credences based on our total evidence to credences based on some proper subset of that evidence. In this section, I describe two principles which are meant to capture the sense in which we should prefer the rational to the irrational, and the credences based on the total evidence to the credences based on a proper subset. The preference questions then become questions about why the two principles I describe are true (if, in fact, they are).

3.1. Rational Trumps Irrational

The first principle concerns the way in which we should prefer rational credences to irrational ones. I will set the stage by describing the case of Bill, a competitor on a game show.

GAME SHOW 1
Bill is on a game show in which contestants make bets about tomorrow’s weather. Bill hasn’t been doing so well, so the host offers him a special deal: “In this round, you will be betting on whether or not it will rain tomorrow. I will give you a choice between two pills. If you take the rationality pill, you will have the credence in RAIN that is rational given your total evidence.[2] Alternatively, you can choose any number between 0 and 1 that you know is an irrational credence to have in RAIN given your evidence. I will then give you a pill that will make you have the credence corresponding to your choice.”

[2] Here, and throughout, I am talking about rationality in the sense of propositional rather than doxastic justification, so there is no problem with gaining a rational credence, in this sense, by taking a pill.

Which of these two options should Bill choose? Plausibly, if Bill is trying to do well on the game show, he should pick the rationality pill.

I will use “R(E)”, throughout, as a stand-in for the description “the rational credences given E”. It is important to remember that, since R(E) is standing in for a description, when I talk about an agent’s attitudes towards R(E), I do not assume that the agent knows the identity of R(E).

The judgment about Bill suggests:

RatPref (preference): If E is your total evidence, you should prefer R(E) to any function g which you know differs from R(E).

Prefer in what sense? Roughly, the sense that is relevant to your interest in doing well on game shows (or, in the world, more generally). More precisely, I will follow Allan Gibbard (2008) in using “the guidance value” of a credence function to refer to the value of that credence function as a guide to choice in pursuit of other values. Gibbard describes a result from Schervish (1989) which shows that the credences with the highest guidance value are the credences that maximize expected accuracy according to a strictly proper value function.[3] So we can also think of RatPref as the claim that you should assign greater expected accuracy to R(E) than to g when you know that g differs from R(E):

RatPref (expected accuracy): Suppose that R(E) = r. Then EA_{A,r}(R(E)) > EA_{A,r}(g) whenever r(R(E) = g) = 0.

[3] This is not to say that it can never be in your best interest to form an irrational belief. Sometimes, having a belief with a certain content will promote your aims, regardless of its accuracy. The guidance value of a credence function is determined by the value of that credence function as a guide to other pursuits, insofar as your success depends on the accuracy of your doxastic states, and not their content.

RatPref (expected accuracy) says that if the rational credence function given E is r, then r will assign greater expected accuracy to the credence function under the description “the rational credence given E” than to any particular credence function which r is certain is irrational. Since the preference version and the expected accuracy version are so tightly connected I will simply talk about RatPref, sometimes in terms of preferences and sometimes in terms of expected accuracy.[4]

[4] If you think that you can know p without assigning credence 1 to p, the expected accuracy version of the principle will be weaker than the one stated in terms of knowledge. This potential difference between the two principles won’t matter for what follows.

RatPref can explain why Bill should choose the rationality pill: Bill knows that by taking the rationality pill, he will end up with the credence in RAIN (whatever it may be) that is rational, rather than some credence which he knows is irrational. If RatPref is true then Bill should assign greater expected accuracy to R(E) than to any particular credence he is certain is irrational. Furthermore, since the credences with the highest expected accuracy are the ones with the highest guidance value, if he’s concerned about doing well on the show he should choose the rationality pill.

Doing well on game shows is certainly not the only reason we care about being accurate. Many people, at least sometimes, have an intrinsic interest in truth.
Whether or not my arguments will succeed in showing that an interest in accuracy for its own sake warrants a preference for rational credences will depend, in part, on whether the way in which these truth seekers care about accuracy can be represented by a strictly proper value function. If so, my argument for RatPref will apply to the intrinsic truth valuers as well. Gibbard (2008) is skeptical that an intrinsic interest in truth will involve valuing accuracy in a “strictly proper” way. If he is right, the motivation I will provide for RatPref will not motivate the claim that seekers of truth for its own sake should prefer rational credences to irrational ones. That is okay. My aim is only to show that a concern about accuracy in the “strictly proper” way warrants a preference for rational credences. When we care about accuracy for the purposes of making bets, choices, and navigating the world, we care about accuracy in the strictly proper way. Whether other sorts of interest in accuracy will take this form is not a question that I will be addressing in this paper.

Before moving on to a discussion of total evidence, let me emphasize that RatPref is by no means trivial. I will show in the following sections that it can be derived from Rational Omniscience, but if it can be rational to be uncertain what credence is rational, it is a very substantive claim that one should always assign greater expected accuracy to the credence function under the description “the rational credence given E,” where E is one’s evidence, than to any credence function that one is certain is irrational.

To see this, consider, for a moment, a different description. For example, let “ROLF” stand in for the description “Rolf’s favorite credences.” A rational agent may surely be uncertain what Rolf’s favorite credences are. And crucially, even if, as a matter of fact, Rolf’s favorite credences are the rational credences given one’s evidence, there’s no reason to expect that

EA_{A,r}(ROLF) > EA_{A,r}(g)

where g is a credence that r is certain is irrational. For suppose that r(p) = 0.9, and r is certain that credence 1 in p is irrational, but r is quite confident that Rolf’s favorite credence in p is 0.2. Then, r might assign greater expected accuracy to credence 1 than to “Rolf’s favorite credence” despite the fact that, in actuality, Rolf’s favorite credence is the rational credence (0.9). Now, if one can be uncertain about what credence is rational, just like one can be uncertain what credence is Rolf’s favorite, there is, as yet, no reason to think that one should always assign greater expected accuracy to credences under the description “rational” than to credences one is certain are irrational. If RatPref is true, it is a substantive claim that requires an argument.
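The arithmetic behind the Rolf example can be checked directly. In the sketch below the Brier score is again used only for illustration, and the particular way r spreads its confidence over the hypotheses about Rolf’s favorite credence (0.75 to 0.2 and 0.25 to 0.9), as well as the assumption that this is independent of whether p is true, are invented assumptions; the text above says only that r is “quite confident” the favorite credence is 0.2.

```python
# A numerical check of the Rolf example under the Brier score. The 0.75/0.25
# split and the independence assumption are illustrative additions.

def ea(p_true, cred):
    """Expected (negative Brier) accuracy of a credence in p, relative to an
    agent who assigns probability p_true to p."""
    return p_true * -(cred - 1) ** 2 + (1 - p_true) * -(cred - 0) ** 2

r_in_p = 0.9            # r's (rational) credence in p
# r's uncertainty about which credence satisfies the description "Rolf's favorite":
rolf_hypotheses = {0.2: 0.75, 0.9: 0.25}   # in fact, Rolf's favorite is 0.9

ea_rolf = sum(prob * ea(r_in_p, cred) for cred, prob in rolf_hypotheses.items())
ea_one = ea(r_in_p, 1.0)    # a credence r is certain is irrational

print(round(ea_rolf, 4))   # -0.4575
print(round(ea_one, 4))    # -0.1 : higher expected accuracy than "Rolf's favorite"
```

So even though Rolf’s actual favorite credence (0.9) would score better than credence 1 by r’s lights, r’s uncertainty about which credence satisfies the description drags the expected accuracy of “Rolf’s favorite credence” below that of a credence r is certain is irrational.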
One might think that the argument for RatPref will be simple. After all, you might think, it is simply constitutive of the notion of epistemic rationality that it is, in some sense, truth conducive. If so, it’s no great mystery that, if you care about truth, you should prefer the truth conducive credences to the non-truth conducive ones. Is that all that RatPref amounts to? Not quite. Part of the aim of this paper is to give an account, in more precise terms, of the exact sense in which epistemic rationality is truth conducive. For note that there are many senses of truth conduciveness such that you shouldn’t always regard the output of the truth conducive process as more expectedly accurate than the non-truth conducive one. For example, suppose that you have a generally reliable thermometer but you think that there’s something about your current circumstances which makes it likely to be less accurate than a thermometer you know is much less reliable. In such a case you will assign greater expected accuracy to the output of the unreliable thermometer than to the output of the reliable thermometer, even though there’s a sense in which the reliable thermometer is more truth conducive. Similarly, if you thought that the epistemically rational credences were, generally (though not always), reliable indicators of the truth, why couldn’t you think that, in some particular case, the evidence is likely to be misleading? (Just as you might think that, in some particular case, the reliable thermometer is likely to give the wrong answer.) And if it can be rational to think that, in some cases, the evidence is likely misleading, why shouldn’t you ever think that your case is one in which the evidence is likely misleading? And if you thought your evidence was likely misleading, why couldn’t you assign greater expected accuracy to a credence you know is irrational than to the credence satisfying the description “rational”?

If RatPref is true, whatever sense of truth conduciveness is constitutive of the notion of epistemic rationality is a rather strong one. It’s one that makes it irrational to ever think that some particular credence that doesn’t satisfy the truth conducive condition is more expectedly accurate than the one that does, despite the knowledge that evidence can sometimes be misleading. I find a principle like RatPref attractive, and closely connected to whatever constitutive connection exists between rationality and accuracy. But what I’m interested in here is what exactly it is about rationality that explains why it must be truth conducive in the rather strong sense that would be required to justify a principle like RatPref.

3.2. More Evidence Trumps Less

GAME SHOW 2
On an advanced round of the game show, Bill is offered a slightly different deal. As before, he can choose a pill which will give him the rational credence in RAIN, given his total evidence. Alternatively, he can choose a particular subset of his evidence, which he knows supports a credence that differs from the credence that’s rational given his total evidence. The host will then give him a pill which will cause him to adopt the rational credence given his chosen subset.

Once again, it seems like Bill should choose the pill which will give him the credence rationalized by his total evidence. This suggests:

TotPref (preference): Let E be your total evidence and let E- be a particular set of propositions which form a proper subset of E. You should prefer R(E) to R(E-), when you know that R(E) differs from R(E-).[5]

[5] Note that since E- refers to a particular proper subset (or a proper subset of evidence specified as such), E- can’t be, for example, “the proper subset of my evidence that excludes all misleading evidence.”

Once again, the relevant sense of “prefer” is the sense that is relevant to making good choices in pursuit of other values. So we can also think of the principle as follows:

TotPref (expected accuracy): Suppose that R(E) = r. Then EA_{A,r}(R(E)) > EA_{A,r}(R(E-)) whenever E- is a particular proper subset of E and r(R(E) = R(E-)) = 0.

Note that TotPref does not follow from RatPref.
RatPref says that you should prefer a credence function under the description “the rational credence given E” to any credence function specified as such (that you’re certain is irrational). TotPref says that you should prefer a credence function under the description “the rational credence given E” to a credence function under a different description: “the rational credence given E-” (when you are certain that they differ). To see how these principles can come apart, note that you shouldn’t prefer R(E) to the credence function satisfying the description “assigns 1 to all truths and 0 to all falsehoods.” This is so even if you’re certain that the credence function satisfying the above description is irrational. Furthermore, RatPref does not entail the false claim that you should prefer the rational credence to the one satisfying the above description. This is because RatPref only says that you should prefer the rational credence to any credence function specified as such that you know is irrational. So it doesn’t follow from the fact that you should prefer R(E) to a credence function specified as such, that you know is irrational, that you should prefer R(E) to a credence function under some description (such as R(E-)) when you know that the function satisfying the description is irrational.

RatPref and TotPref make explicit the way in which agents should prefer rational credences to irrational ones, and credences supported by their total evidence to credences supported by a particular proper subset. Before proceeding, I want to be clear about what these principles are not saying. First, RatPref and TotPref are not meant to be addressing the question: “Why be rational?” or “Why care about rationality?”[6] RatPref and TotPref are principles about what you should prefer, and the “should” here is a rational should. The question is, why is it rational to prefer, for the purposes of doing well on the game show, the credence that is rational? If you’re skeptical about the value of rationality, the claim that it’s rational to prefer the rational credences may not have much force.

[6] These questions are discussed by Kolodny (2005) and Broome (2007).

But the preference questions are by no means trivial either. I am not asking why it’s rational to be rational. The fact that one should have the credences that are rational doesn’t mean that one should prefer to have the credences that are rational. For example, it’s rational for Bill to prefer to have the credence function that assigns 1 to the truth about RAIN even though it’s not rational for Bill to have a credence that assigns 1 to the truth about RAIN. So the question of whether one should always prefer, for accuracy purposes, to have the credences that one, in fact, should have is a substantive question.

In the next part of the paper I will show how the preference questions and the constitutive question can be answered if we assume that an agent is rationally required to be certain of all the facts about rationality.

Part II – Bridging Using Rational Omniscience

Here is the claim that will be assumed for the purpose of this part of the paper:

Rational Omniscience: For all bodies of evidence E, rational credence functions r, and credence functions f: r(R(E) = f) = 1 if and only if R(E) = f.

Michael Titelbaum (forthcoming) has recently defended Rational Omniscience, but the principle is highly controversial. You might think that even if there is some default justification for certainty in facts about rationality, this justification can be defeated.
But, at this point in the paper, I do not want to evaluate the truth of Rational Omniscience. Rather, I want to see what can be said about rationality and accuracy if it is true.

4. Deriving RatPref and TotPref

It is easy to derive RatPref and TotPref if we assume Rational Omniscience. Let’s start with RatPref. The argument for RatPref below is based on an argument from Sophie Horowitz (2013):

Let R(E) = r. By Rational Omniscience,

(1) r(R(E) = r) = 1

By the fact that we’re using strictly proper scoring rules, it follows that, for every g that is not equal to r,

(2) EA_{A,r}(r) > EA_{A,r}(g)

Now whenever an agent is certain that some credence function f satisfies a description D, the agent will have to assign equal expected accuracy to f and D. Thus, since r is certain that r satisfies the description “R(E)”,

(3) EA_{A,r}(r) = EA_{A,r}(R(E))

Using the equality in (3), we can substitute into (2) to get:

(4) EA_{A,r}(R(E)) > EA_{A,r}(g)

Here’s the main idea: if Rational Omniscience is true, then a rational agent will assign greater expected accuracy to R(E) than to any credence she knows to be irrational. Why? Because she’ll assign greater expected accuracy to her own credence than to any other credences, and since she’s certain that her credences satisfy R(E) she must also assign greater expected accuracy to R(E) than to any irrational credences.[7]

[7] Interestingly, RatPref can also be derived from Rational Reflection, which Elga (2012) states as follows: if P is “the credence function of a possible rational subject S… P(H|P’ is ideal) = P’(H)” (130). (Thanks to [omitted] for this suggestion.) However, since, as Elga shows, Rational Reflection actually entails Rational Omniscience, I prefer the derivation that only appeals to Rational Omniscience. Indeed Rational Reflection is a rather strong claim (which leads to certain puzzles, like that of the “unmarked clock” (see Elga)). RatPref is much weaker (it is much closer to the very weak principle Elga calls “Certain”).

Here is the derivation of TotPref:

Let R(E) = r and R(E-) = g. By Rational Omniscience,

(1) r(R(E) = r) = 1 and
(2) r(R(E-) = g) = 1

By strict propriety,

(3) EA_{A,r}(r) > EA_{A,r}(g)

Because of (1) and (2) we know that

(4) EA_{A,r}(R(E)) = EA_{A,r}(r)
(5) EA_{A,r}(R(E-)) = EA_{A,r}(g)

And thus, by substitution into (3) we get

(6) EA_{A,r}(R(E)) > EA_{A,r}(R(E-))[8]

[8] Good (1967) gives an argument for a principle like TotPref. Good’s argument aims to justify, using expected accuracy considerations, the principle of total evidence. However, Good’s argument only successfully establishes that you should assign greater expected accuracy to the credences based on your total evidence before getting the evidence. It doesn’t justify the claim that, once you get the evidence, you should assign greater expected accuracy to the credence that’s rational given the total evidence than to the credence that’s rational given a proper subset of this evidence. For this, you need Rational Omniscience. Good does suggest a reply to a similar worry, but it is unsatisfactory. Diagnosing the flaw in Good’s response would take us beyond the scope of this paper, but it’s also not necessary. For we’ll see later that without Rational Omniscience we can get straightforward counterexamples to TotPref.
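Footnote 8’s point about Good (1967) can be illustrated with a toy calculation: before the evidence comes in, a strictly proper rule makes the plan of conditionalizing on whatever one learns look more accurate, in expectation, than ignoring the evidence. The numbers below and the use of the Brier score are invented for illustration; this is not Good’s own example, and it only captures the “before getting the evidence” perspective that the footnote flags.

```python
# A toy illustration of the point credited to Good (1967) in footnote 8,
# with invented numbers and the Brier score for the credence in RAIN.

# Prior over four states: (is it raining?, which evidence will I receive?)
prior = {("rain", "e"): 0.4, ("rain", "not_e"): 0.1,
         ("no_rain", "e"): 0.1, ("no_rain", "not_e"): 0.4}

p_rain = sum(v for (w, _), v in prior.items() if w == "rain")      # 0.5
p_e = sum(v for (_, ev), v in prior.items() if ev == "e")          # 0.5
p_rain_given_e = prior[("rain", "e")] / p_e                        # 0.8
p_rain_given_not_e = prior[("rain", "not_e")] / (1 - p_e)          # 0.2

def accuracy(cred, world):
    return -(cred - (1.0 if world == "rain" else 0.0)) ** 2

# Plan A: adopt the credence rational given the *total* evidence (conditionalize).
ea_conditionalize = sum(
    v * accuracy(p_rain_given_e if ev == "e" else p_rain_given_not_e, w)
    for (w, ev), v in prior.items())

# Plan B: stick with the credence rational given a proper subset (here: no evidence).
ea_ignore = sum(v * accuracy(p_rain, w) for (w, ev), v in prior.items())

print(ea_conditionalize, ea_ignore)   # ≈ -0.16 vs -0.25
```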
We’re not quite finished. For you might have the following worry: when we judged that Bill should choose the rationality pill, or the total evidence pill, on the game show we weren’t thinking to ourselves that Bill was certain what the rational credences were, and that they were his own, and that that’s why he should prefer them to any other. We thought, even if Bill doesn’t know which credences are rational, he should prefer the rationality pill. And even if Bill doesn’t know which credences some proper subset of his evidence rationalizes, he should prefer the credence that’s rational given his total evidence. Can we give an explanation for these judgments?

If Rational Omniscience is true, then if we ask what Bill should do when he’s not certain what credences are rational then we’re asking how someone who is already irrational should proceed. That can be a problematic question at times, but if Rational Omniscience is true, it describes a very idealized notion of rationality. And if rationality is so idealized, it had better turn out that we can make some true judgments about what agents who aren’t rationally omniscient should do and believe. Otherwise, all of our judgments about the rationality of actual human beings would be false! So it seems, at the very least, that when a non-ideal agent knows what choice is rationally required, we can correctly judge that it is rational for her to make that choice. This is a very weak constraint on the correctness of rationality judgments about non-ideal agents.

What does this mean for Bill? Even if Bill isn’t always certain what credences are rational, so long as he knows that a perfectly rational agent would choose the rationality pill over the irrationality pill, it follows from the weak constraint above that Bill should choose the rationality pill as well. But can Bill know that this is what an ideally rational agent would choose? Yes. For as long as Bill knows that Rational Omniscience is true, he can go through the reasoning above to convince himself that an ideally rational agent would choose the rationality pill (and the total evidence pill).

You may not find this explanation of why non-rationally-omniscient Bill should choose the rationality pill or the total evidence pill completely satisfying. For you might think that our judgment that Bill should take these pills wasn’t based on the assumption that Bill should believe the highly controversial claim that Rational Omniscience is true! Surely Bill should choose the rationality pill or the total evidence pill even if he’s not rationally omniscient, and even if he doesn’t believe that Rational Omniscience is true! As I will illustrate later, this thought is, perhaps surprisingly, misguided. If you don’t think that Rational Omniscience is true it is by no means obvious that you should choose the rationality or the total evidence pill. In fact, as we’ll see, popular views in the literature about higher order evidence have the consequence that Bill shouldn’t always choose these pills. Our inclination to judge that it is rational to prefer, for accuracy purposes, the credences that our total evidence rationalizes is, in fact, based on an implicit commitment to the thought that a rational agent is always certain about what is rational. This will be argued for in Part III.

5. A Potential Answer to the Constitutive Question

The constitutive question asks: what constitutive connection is there, if any, between rationality and accuracy?
In this section I will describe a possible answer to that question which someone who accepts Rational Omniscience can appeal to. But first, I need to do a little setup.

Call a function from potential bodies of evidence (including the set with no evidence) to doxastic states an “epistemic plan.” We can think of a doxastic state as a set of conditional and unconditional credences. So, for example, an epistemic plan might map a certain body of evidence E to a doxastic state which includes having a 0.9 credence in the proposition that it will rain in London on February 15, 2014, and a 0.98 credence that it will rain, conditional on the weather reporter claiming that it will.[9]

[9] It’s consistent with the definition of an epistemic plan that one’s conditional credences given different bodies of evidence will differ. But it might be that no such epistemic plan is rational.

Imagine that a perfectly rational agent with no empirical information (I’ll call her Pria (perfectly rational ignorant agent)) is contemplating what plan she would “program” into a robot if her goal was to maximize the expected accuracy of the credences the robot will have upon entering the world. (By “program” I mean that Pria is choosing a function which will fully determine the conditional and unconditional credences the robot will have, as a function of the evidence that it receives.) More precisely, Pria is looking for an epistemic plan, A, such that, for all bodies of evidence E, subjects S, propositions p, and plans B that differ from A in the credence assigned to p given E,

EA(the credence in p that S would have if S had E and followed A | S has E) > EA(the credence in p that S would have if S had E and followed B | S has E).

EA here is calculated relative to Pria’s credence function. The expected accuracy of a credence function conditional on some proposition X, relative to a probability function Pr, is just the (unconditional) expected accuracy of the credence function relative to Pr*, where Pr* is the function that would result from Pr by conditionalizing on X. Call such a plan the accuracy-optimizing epistemic plan, or A for short.[10]

[10] Two notes: First, if p or E are centered propositions, they should be understood as centered on S. Second, although, for simplicity, I defined the accuracy optimizing plan as one that optimizes the expected accuracy of the agent’s (unconditional) credence in p for any body of evidence E, the accuracy optimizing plan should also optimize the expected accuracy of the agent’s conditional credences. I have not, in this paper, defined the notion of accuracy or expected accuracy for conditional credences. Brian Weatherson has a note on this here: http://brian.weatherson.org/ConditionalAccuracyNote.pdf.

We can also talk about the function from evidence to belief states that assigns to each body of evidence the doxastic state that is rational given that evidence. Let’s call that the rational epistemic plan, or R for short. Here is a possible answer to the constitutive question:

Accuracy-Optimizing is Rational (R=A): The rational epistemic plan is the accuracy optimizing epistemic plan.

In other words, the rational epistemic plan is the one that a perfectly rational agent would, a priori, program into a robot if the goal was to maximize the expected accuracy of the robot’s credences in response to whatever evidence it receives.

A more detailed discussion of R=A will appear later in the paper, but I will note here that R=A is prima facie plausible. For note that if R=A were false, and if Pria knew this, she could think to herself as follows: “Here are two plans, the rational one, and the accuracy optimizing one. They are different.
So I hope I don’t go into the world and form credences rationally! If I want to be accurate, I’d rather adopt these particular credences (which I know are irrational) in response to E, than adopt these other particular credences (which I know are rational) in response to E!” It would certainly be a surprising result if this was a rational attitude to adopt a priori.[11]

[11] You might think – not only is R=A prima facie plausible – it follows trivially from some very minimal assumptions! Indeed, R=A does follow from some commonly accepted assumptions, namely the claim that (a) it’s rational to conditionalize and (b) conditionalizing maximizes expected accuracy (a claim argued for by Greaves and Wallace (2006)). But as we’ll see, if we reject Rational Omniscience because of higher order evidence considerations, one of these two assumptions will have to be abandoned, and R=A, far from being trivial, will turn out false. (The argument from (a) and (b) to R=A goes as follows: Suppose that an agent will always regard conditionalizing on her credences as maximizing expected accuracy and call Pria’s current credences r. Pria will thus regard the epistemic plan that involves conditionalizing on r to be more expectedly accurate than any alternative epistemic plan. If conditionalizing on rational credences is rational, it follows that Pria will regard the rational plan as more expectedly accurate than any other. Thus the accuracy optimizing plan will be the rational plan. I explain why conciliatory views about higher order evidence have to reject one of (a) or (b) in my [omitted].)

Let’s take stock. I started out asking three questions: Why should we prefer rational credences to irrational ones? Why should we prefer credences rationalized by our total evidence to credences rationalized by a proper subset of evidence? What is the constitutive connection between rationality and accuracy? I made the first two questions more precise by positing two principles, RatPref and TotPref, and then asking why those principles are true. In an attempt to answer the three questions, I started out by assuming Rational Omniscience. I then showed that we could use Rational Omniscience to derive RatPref and TotPref. I also proposed a particular answer to the constitutive question, namely, the claim that the rational plan is the accuracy optimizing plan (R=A).

In Part III of the paper I will show that if we accept certain plausible views about higher order evidence, views that involve the rejection of Rational Omniscience, the picture sketched in Part II of how rationality and accuracy are connected is not available. R=A is false. TotPref is false. And RatPref is false or unmotivated, depending on how one spells out the details of the view. These views, then, leave us with a puzzle about how rationality and accuracy are connected. At the end of Part III I will describe an alternative proposal, according to which the principles of rationality aren’t the ones such that Pria would regard following them as maximizing expected accuracy. Rather, they are the ones such that she would regard trying to follow them as maximizing expected accuracy.
I will end by showing that we may be able to reconcile some of the tensions that arise when thinking about how rationality and accuracy are connected by distinguishing two notions of epistemic rationality, corresponding to different ways in which we care about accuracy.

Part III – Bridging while Calibrating

The purpose of this part of the paper is to explore how one might engage in the bridging project given certain views about higher order evidence that require rejecting Rational Omniscience.

6. Higher Order Evidence

Consider:

HYPOXIA
Imagine that you’re flying an airplane. You have just done some calculations to determine whether or not you have enough fuel to make it to Hawaii. You then get a message from ground control: “Dear pilot, you are flying at an altitude which makes you susceptible to hypoxia, a condition that impairs your ability to reason properly. The judgments[12] people at your altitude make concerning how far they can fly are correct only 50% of the time.” (This is a variant of a case from Elga (ms.).)

[12] The agent’s judgment is just the proposition that the agent is (or would be) more confident in than not on the basis of the first order evidence alone. (I borrow this term from Horowitz and Sliwa (ms.) and Weatherson (ms.).)

How confident should you be that you have enough fuel to make it to Hawaii? Plausibly, you should be only 50% confident.[13] I will call the view that motivates this sort of thought “calibrationism.” Call the likelihood that your judgment will be correct your “expected degree of reliability.”[14]

[13] Defenders of the view include Feldman (2009), Christensen (2010), Horowitz and Sliwa (ms.), Elga (ms.) and Vavova (ms.).

[14] As the authors above have pointed out, to get the intuitive verdicts in the cases they are interested in, this likelihood needs to be determined in a way that is independent of the reasoning in question.

Calibrationism: If your expected degree of reliability concerning whether p, given E, is 0.5, you should be 0.5 confident in p.

Two notes: First, calibrationism is by no means uncontroversial,[15] and my purpose here isn’t to defend the view. All I am claiming is that the view has some intuitive plausibility. Since, as I will argue, the view is in tension with the answers to the three questions we started out with, we will, eventually, have to make a choice about which principles to keep. Second, the version of calibrationism I presented above is simpler than some, and weaker than some, but the various bells and whistles that one might add (for example, to avoid skeptical conclusions) need not concern us here. I’ll be applying calibrationism only to the most uncontroversial cases in which it is most plausible for such a principle to apply.

[15] For opponents see Lasonen-Aarnio (2014), Kelly (2005, 2011) and Weatherson (ms.).

7. Calibrationism is Inconsistent with…

The connection between rationality and accuracy that I described in Part II, along with the various principles it supported, is undermined by calibrationism.

7.1. Rational Omniscience

The first thing that is important to note is that calibrationists must reject Rational Omniscience. To see why, recall that the calibrationist thinks that, in the hypoxia case, one should suspend judgment. The calibrationist thinks this because she thinks that the announcement gives the pilot a reason to doubt that her judgments are rationally supported by her first order evidence.
It is because the calibrationist thinks that the pilot should become uncertain about what her first order evidence supports (in other words, she should be uncertain of a rationality fact), that she should come to doubt whether she has enough fuel.

7.2. R=A

To see why calibrationists have to reject the claim that the rational credences are the accuracy optimizing ones, recall that Pria is looking for an epistemic plan which optimizes the expected accuracy of the credences an agent following this plan would adopt. Now, put yourself in Pria’s shoes. Suppose you’re trying to decide how you want an agent to respond to some ordinary first order evidence about the weather, E. You determine that the optimal attitude for someone to have given E is to believe that it will rain (R). You now consider the evidence which consists of the same first order evidence, but also higher order evidence suggesting that the agent is unable to correctly evaluate the first order evidence. If you want to maximize the expected accuracy of this agent’s credences you should still recommend that the agent believe R. For you shouldn’t think that it is any less likely to rain in circumstances in which the agent has E in addition to evidence that her cognitive capacities are impaired, than in the circumstances in which the agent has E. But the calibrationist thinks that what is rational given these two bodies of evidence does differ. Thus, according to calibrationism, the plan which assigns to each body of evidence the belief state that is rational given that evidence is not the same as the plan that a perfectly rational agent would regard, a priori, as accuracy optimizing.

Can calibrationists provide a deeper explanation of why the rational plan isn’t the accuracy optimizing one, one that doesn’t appeal to calibrationism? R=A seemed like a promising proposal about the connection that rationality bears to accuracy. If it is to be rejected, it would be nice if the calibrationist could both explain away its original appeal, and provide an alternative answer to the constitutive question.

Here is one potential explanation: One might claim that we should never have been tempted by the thought that the accuracy optimizing plan is the rational plan because A is not a plan that we can expect to successfully follow. A tells us to have the attitudes supported by our first order evidence even in cases in which our higher order evidence tells us that we will likely fail at determining what those attitudes are! Since, in those circumstances, we won’t be able to do what A recommends, it can’t be true that we are required to follow A.

This strikes me as an unpromising route. Suppose my evidence includes information to the effect that I am bad at evaluating evidence about the future outcomes of sports matches that I care about a great deal. Say that, for any given credence I adopt on such matters, 50% of the time it turns out that I am a bit overconfident due to wishful thinking, and 50% of the time I am a bit underconfident due to fear of disappointment. I just can’t manage to get it right. If my evidence includes this information, then I can’t expect to successfully adopt the credences that are supported by my sports related evidence. But that doesn’t mean that I’m not rationally required to! Beliefs that are unsupported by the evidence due to wishful thinking or fear of disappointment are irrational even if the wishful thinkers or fearers can’t help themselves and know that they can’t.
So it can’t be that the reason A isn’t the rational plan is that we can’t expect to always succeed at following A.

Here is a second proposal: perhaps the problem with A is that it is the plan a perfectly rational agent would regard as accuracy optimizing. But we’re not perfectly rational agents! So why should we have to follow the plan that an agent, so different from ourselves, would regard as accuracy optimizing? Perhaps the rational plan is the one that a rational person (with various limitations) would regard, a priori, as accuracy optimizing.

This proposal will not help. For even the accuracy optimizing plan that a rational person would come up with is inconsistent with calibrationism. Consider a case of a simple entailment (p → q) which an agent with the capacities of an average rational person could recognize as an entailment. Such a person would think that the accuracy optimizing attitude for an agent that has p as part of her evidence would contain a belief that q, even if that person had higher order evidence suggesting that she was incapable of detecting such entailments. But this is inconsistent with calibrationism for the same reasons that the accuracy optimizing plan that Pria would come up with is inconsistent with calibrationism. So the calibrationist will have to reject R=A, but so far, we don’t have a way of explaining away the initial temptation.

There’s also a puzzle about what the calibrationist can say about the connection between rationality and accuracy if she rejects R=A. Let me put to one side what I take to be an unsatisfactory answer to the question about the connection that rationality bears to accuracy: the credence function that is, in fact, rational given E, maximizes expected accuracy relative to itself. The reason the answer is unsatisfying is that every probability function maximizes expected accuracy relative to itself. So saying that the rational function maximizes expected accuracy relative to itself doesn’t tell us what’s special about being rational. We will come back to the question of how the calibrationist should respond to these worries in the next section. But, for now, let’s move on.

If calibrationism is true, and so Rational Omniscience is false, the arguments I presented for RatPref and TotPref will be unsound (recall that they both appealed to Rational Omniscience). So what should the calibrationist say about these principles? Are they true? And if so, how should we motivate them? I will begin with TotPref.

7.3. TotPref

The calibrationist will have to reject TotPref. In other words, the calibrationist will think that sometimes we should prefer having credences supported by a proper subset of our evidence to the credences supported by our total evidence. To see why, suppose that you’re flying the plane in the hypoxia case when you get higher order evidence suggesting that you’re impaired. Before you received this evidence, you were confident that you had enough fuel, but now you have reduced your confidence to 0.5. Nonetheless, you should still regard the expected accuracy of the credences that satisfy the description “the credences that are in fact rational given my first order evidence” as greater than the expected accuracy of your current credences. (Note that I’m not claiming that you should think that the credences you would have on the basis of the first order evidence alone are more expectedly accurate than your current credences.)
If this doesn’t seem intuitive to you, think about what credences you’d prefer to have your actions guided by (and recall that this is the sense of “prefer” that is relevant in this paper). If your actions are guided by the credences that your total evidence supports, you’re going to play it safe and go home. This would be a very sad way to commence your much anticipated vacation! On the other hand, if you could choose to have your actions guided by the credences that are in fact rational given only the first order evidence, you would probably only return home if, in fact, you didn’t have enough fuel. If you did have enough, you would probably keep flying to Hawaii and be sipping a Pina Colada on the beach within a few hours. The expected value of either having a shot at arriving safely in Hawaii or returning home is greater than the expected value of a certain return home. So you should prefer the credences rationalized by a subset of your evidence to the credences rationalized by the total. This is inconsistent with TotPref.
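A toy decision problem makes the comparison concrete. All of the numbers below (the utilities, the 0.5 credence, and the idealizing assumption that the credences in fact rational given the first order evidence alone would get the fuel question right) are invented; the sketch only illustrates the expected value comparison drawn in the text.

```python
# A toy expected-value version of the hypoxia argument above. The utilities
# and the idealization about the first-order-evidence credences are invented
# purely for illustration.

p_enough_fuel = 0.5          # your credence, given your *total* evidence
u_hawaii, u_home, u_crash = 100.0, 0.0, -1000.0

# Guided by the total-evidence credence (0.5), flying looks too risky:
ev_fly_on_total = p_enough_fuel * u_hawaii + (1 - p_enough_fuel) * u_crash  # -450
ev_act_on_total_evidence = max(ev_fly_on_total, u_home)                     # 0: go home

# Guided by the credences in fact rational given the first order evidence alone,
# you fly just in case the fuel really is sufficient (the idealization in the text):
ev_act_on_first_order = p_enough_fuel * u_hawaii + (1 - p_enough_fuel) * u_home  # 50

print(ev_act_on_total_evidence, ev_act_on_first_order)   # 0.0 < 50.0
```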
7.4. RatPref

Whether or not the calibrationist will have to reject RatPref depends on what kind of calibrationist one is.[16] But arguing for this will take us beyond the scope of this paper. All I will say about RatPref here is that calibrationism leaves the principle unmotivated. So far, we don’t have an account of what it is about rationality that could explain why, for accuracy purposes, we should always prefer rational credences to irrational ones. If we do reject the picture sketched in Part II, what can we say about the connection between rationality and accuracy? In the next section I will sketch an alternative proposal.

[16] For example, the conciliatory view discussed in (ref. omitted) turns out to be inconsistent with RatPref.

8. The Trying Account

The aim of this section is to describe an alternative connection between rationality and accuracy (that is, an alternative answer to the constitutive question) that is consistent with (and, as a bonus, motivates!) calibrationism. For reasons of space, I will not be able to do justice to the account’s benefits or its problems. The aim here is just to put the proposal on the table and suggest that something like it lies at the heart of calibrationism.

The Trying Account: The principles of rationality are the principles that a perfectly rational agent would choose, a priori, if she were aiming to maximize the expected accuracy of the credences an agent would have as a result of trying to follow these principles.

Let’s begin by considering what this account has going for it.

(a) Motivating Calibrationism

Why should the pilot suspend judgment rather than have the attitude supported by her first order evidence? To see how the Trying Account can explain this judgment, let’s contrast the calibrationist principle with an alternative, one that says “if you think that your judgments are only 50% reliable, have the credence that your first order evidence actually supports.” We can expect such a principle to be one that is good to follow, but bad to try to follow. Since you know that you’ll be impaired in the relevant cases, you can predict that one of two things will happen as a result of trying to follow such a principle. One possibility is that you’ll throw up your hands and think: “I have no idea what the principle recommends in this case!” If that is the result of attempting to follow the principle, we can’t make any reliable predictions about the credences you will adopt. Alternatively, your attempt to follow the principle might involve believing the propositions that you think the first order evidence supports, and these, we can predict, will be wrong 50% of the time. (Practically, this means a 50% chance of death in the hypoxia case!) So the expected accuracy (and the expected value) of the results of trying to follow calibrationism is greater than the expected accuracy of the results of trying to have the attitude that the first order evidence supports.[17]

[17] I argue for this in greater detail in my [omitted]. Lasonen-Aarnio (2010) makes a similar point. She suggests that there may be a benefit to having the dispositions that accord with calibrationism.

(b) Trying and Epistemic Guidance

The Trying Account sits nicely with the idea that the principles of rationality should be guidance giving. For there is good reason to think that the norms that govern guidance giving, in general, have more to do with the guidance that’s best to try to follow, than the guidance that’s best to follow. For example, I might know that the route to the train station that it would be best to follow involves lots of twists and turns, back alleys and shortcuts. It’s a beautiful route, and if you follow it you’ll get to the train station in ten minutes. But if I’m giving directions to a confused stranger, directing her to follow this route would be bad advice. I should give the stranger a more direct and less scenic route, not because that’s the route it would be best for the stranger to follow, but rather, because it’s the route that would be best for the stranger to try to follow. Since good guidance, generally, seems to require giving advice that’s good to try to follow, if the principles of rationality are meant to be guidance giving, we shouldn’t be surprised if they turn out to be the principles that are best to try to follow.[18]

[18] Note that nothing I say here is inconsistent with Williamson’s (2000) claim that a piece of advice can be good even if an agent isn’t always in a position to follow it. All I am claiming is that, even if she isn’t always in a position to follow the advice, we need to think that, at least in some instances, some good will come of her trying to follow it.

Despite its benefits, the Trying Account faces some challenges. First, it may be unclear how a perfectly rational agent could evaluate the results of trying to follow certain principles a priori. This is a large issue which I will just briefly touch on. What Pria has to go on, when considering which principles to recommend, is whatever information about the agent is in the body of evidence she is considering. For example, Pria’s principles won’t recommend that agents with evidence like mine assign credence 1 to the truth about whether the millionth digit of pi is even. Pria didn’t have to know anything about me to get this result. Contemplating what would be good for agents with my evidence (which includes evidence about my capacities) is all that she would need to realize that a principle requiring agents in my situation to be certain of the truth would lead to worse results than a principle recommending suspension of judgment.

There is, however, another worry for the Trying Account. I suspect that if rationality is what the Trying Account says it is, rationality will not be able to serve many of the purposes we may have wanted it to serve. This is because we don’t only use the notion of rationality in the context of first personal deliberation.
We also use it in third personal evaluations of others. For example, Ichikawa and Jarvis (2013) argue that two important roles that the notion of rationality plays involve making sense of epistemic improvement, and explaining the success of rational agents. Their arguments suggest that these roles will not be well served by a notion that satisfies the Trying Account. Additionally, it is not obvious how we can answer the preference questions on the Trying Account. Even if the calibrationist must reject RatPref and TotPref, there is still a question about why it usually makes sense to prefer rational credences based on our total evidence.

I want to end by suggesting that we may be able to have our cake and eat it too. We can imagine Pria, up in rationality heaven, creating two epistemic plans. She creates the one that she regards as best for agents to follow, and she creates the one that she regards as best for agents to try to follow. Perhaps the word “rational” is ambiguous between these two plans, and which plan we’re interested in, in some particular instance, depends on our purpose. If we’re in the business of giving epistemic guidance that is useful from a first personal perspective, we are going to be interested in the plan that she regards as best to try to follow. But if we’re making third person evaluations (perhaps to determine whether we should defer to another’s opinion), what is relevant may be the extent to which the person is conforming to the epistemic plan that Pria regards as best to follow. Additionally, RatPref and TotPref are true when we’re thinking about rationality as the plan that is best to follow but not the plan that’s best to try to follow. On this approach, there is no single answer to the constitutive question. There are two important ways in which an epistemic plan can be connected to accuracy, and two notions of rationality corresponding to our interest in each of these plans.

9. Conclusion

I began by posing three questions: why should we prefer rational credences to irrational ones, why should we prefer the credences rationalized by our total evidence to those rationalized by a proper subset of it, and what, if anything, can be said about the connection that rationality bears to accuracy? I argued that prima facie plausible answers to all three of these questions can be given if we assume Rational Omniscience – the claim that rationality requires certainty in all the facts about rationality. I then showed that calibrationism is inconsistent with this assumption and with some of the claims it motivated. This left us with a puzzle: how is the calibrationist thinking about the connection between rationality and accuracy? I suggested an account on which what’s important isn’t the expected accuracy of the result of following a principle. Instead, what we’re interested in is the expected accuracy of the results of trying to follow a principle. Although the Trying Account has some attractive features, I claimed that it will not do everything we want the notion of rationality to do for us. So rather than abandoning the original claim about the connection between rationality and accuracy, we might instead recognize that there are two interesting epistemic notions that serve two distinct roles: there is the plan that one would regard a priori as best to follow, and the plan that one would regard a priori as best to try to follow. I believe that the difference between these two kinds of plans, and their importance, lies at the heart of the debate about higher order evidence.
References

Broome, J. (2007). “Wide or Narrow Scope.” Mind 116(462): 359-370.

Christensen, D. (2010). “Higher Order Evidence.” Philosophy and Phenomenological Research 81(1): 185-215.

Elga, A. (ms.). “Lucky to be Rational.”

Elga, A. (2012). “The Puzzle of the Unmarked Clock and the New Rational Reflection Principle.” Philosophical Studies 164(1): 127-139.

Feldman, R. (2009). “Evidentialism, Higher Order Evidence and Disagreement.” Episteme 6(3): 294-312.

Gibbard, A. (2008). “Rational Credence and the Value of Truth.” Oxford Studies in Epistemology Volume 2. Oxford University Press.

Good, I.J. (1967). “On the Principle of Total Evidence.” British Journal for the Philosophy of Science 17(3): 319-321.

Greaves, H. and Wallace, D. (2006). “Justifying conditionalisation: conditionalisation maximizes expected epistemic utility.” Mind 115(459): 607-632.

Horowitz, S. (2013). “Immoderately Rational.” Philosophical Studies 167(1): 1-16.

Horowitz, S. and Sliwa, P. (ms.). “Respecting All The Evidence.”

Ichikawa, J.J. and Jarvis, B.W. (2013). The Rules of Thought. Oxford University Press.

Kelly, T. (2005). “The Epistemic Significance of Disagreement.” In T. Szabo Gendler and J. Hawthorne (eds.) Oxford Studies in Epistemology Vol. 1. Oxford University Press.

Kelly, T. (2011). “Peer Disagreement and Higher Order Evidence.” In Alvin Goldman and Dennis Whitcomb (eds.) Social Epistemology: Essential Readings. Oxford University Press.

Kolodny, N. (2005). “Why be Rational?” Mind 114(455): 509-563.

Lasonen-Aarnio, M. (2010). “Unreasonable Knowledge.” Philosophical Perspectives 24(1): 1-21.

Lasonen-Aarnio, M. (2014). “Higher Order Evidence and Limits of Defeat.” Philosophy and Phenomenological Research 88(2): 314-345.

Moss, S. (2011). “Scoring Rules and Epistemic Compromise.” Mind 120(480): 1053-1069.

Schervish, M. (1989). “A General Method for Comparing Probability Assessors.” Annals of Statistics 17(4): 1856-1879.

Titelbaum, M. (forthcoming). “Rationality’s Fixed Point.” Oxford Studies in Epistemology.

Vavova, K. (ms.). “Irrelevant Influences.”

Weatherson, B. (ms.). “Do Judgments Screen Evidence?”

Williamson, T. (2000). Knowledge and its Limits. Oxford University Press.