
About this Study Guide

Each section will open with the main questions Dr. Aberdein has associated with that topic. These are sourced from the syllabus. It will then provide an introduction or overview of the topic, which comes from the Curd and Cover book. Then it will include summaries of approximately one page for each of the readings; these come from presentation notes. The summaries cover just the reading and its main points. Placing them in the framework of answering the main topic questions is a difficult and idiosyncratic job, and probably what Dr. Aberdein will want on the test. After the reading summaries, there will be a short conclusion discussing connections between the readings, sourced again from the Curd and Cover book. The final part of each section will include a list of questions that I'm keeping in mind for an exam. I have no way of knowing if they're similar to exam questions, but I assume it will be something similarly focused. Remember to add anything you think will be helpful. Best of luck!

Science and Pseudoscience

Main Questions

What is science? How does science differ from other forms of knowledge? What are scientific theories, and how may they be distinguished from pseudoscience?

Introduction

Science is conferred with a lot of trust. If science says something, we generally believe it. But there are groups of people out there doing what looks like science but is not; this is called pseudoscience. These groups gain plausibility because they look like science. Many groups (creationists, psychoanalysts, astrologers, etc.) have been called out as pseudoscientists. But defining why these things are pseudoscience is a lot harder than claiming that they are. Philosophers have proposed demarcation criteria, characteristics that any discipline must possess in order to qualify as genuine science, to help define what is and isn't science. Exactly what these demarcation criteria are is what philosophers disagree on.

Popper – Science: Conjectures and Refutations

Karl Popper claims that the criterion which demarcates science from pseudoscience is falsifiability. He reaches this conclusion by comparing four different theories: Einstein's theory of relativity, Marx's theory of history, Freud's psychoanalysis, and Alfred Adler's individual psychology. Popper identifies constant confirmation as a defining feature of the pseudoscientific theories. It seemed that no matter the situation, these theories could be made to fit; moreover, the same situation could be viewed just as acceptably using any of these theories. Pseudoscientific theories were always confirmed, no matter the situation. Einstein's theory, on the other hand, could be disproved. This leads Popper to several conclusions. Because confirmations of a theory are easily found when sought out, the only confirmations accepted in science must be those that are the result of risky predictions. Einstein's theory was risky because the result would have been unexpected without the theory to predict it. Popper also claims that scientific theories forbid certain events; these events become the tests used to refute the theory. Popper's final conclusion states that any attempt to reinterpret a theory around a failed test lowers its scientific value. Popper's demarcation principles call for a scientific method that revolves around skepticism and falsifiability. There are problems with this, however. First, refutations (or the confirming evidence) could be difficult to come by. Popper also ties riskiness to unexpectedness, and not all science is quite as exciting as Einstein's theory of relativity. Popper also advocates discarding theories as soon as they fail a single test, which is unnecessarily harsh. While continued restructuring on a case-by-case basis may be evidence of a fanatic clinging to his pseudoscience, it may also be the fine-tuning of a scientific theory until the necessary conditions are fully understood.
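For reference, the deductive asymmetry behind Popper's criterion can be put in one line (this is a standard textbook reconstruction, not notation from the chapter itself). If a theory T entails a prediction P, then by modus tollens

\[ T \rightarrow P, \quad \neg P \;\;\vdash\;\; \neg T \]

a single failed prediction refutes the theory outright, while the confirming direction (inferring T from T → P together with P) is the fallacy of affirming the consequent. That asymmetry is why Popper counts forbidden events, not accumulated confirmations, as the real tests of a theory.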
Lakatos – Science and Pseudoscience

Boudry – Loki's Wager and Laudan's Error

Maarten Boudry focuses on the demarcation problem itself rather than the actual demarcation principles it attempts to define. He discusses whether or not the demarcation problem is dead (as Laudan claims). Boudry defines two different types of problem: territorial demarcation and normative demarcation. The first problem deals with separating epistemic endeavors like philosophy, history, metaphysics, and even everyday reasoning from science. The second problem, normative demarcation, separates science from pseudoscience. The normative demarcation problem is the one of interest to Boudry. He claims that the territorial demarcation problem is philosophically sterile, since it is practically impossible to separate reasoning from science. He adds that to give the Popperian demarcation (normative) problem any teeth, we need to require that a theory both be falsifiable and survive repeated attempts at falsification. Scientists do not abandon their ideas the minute a theory fails a single apparent test, but they do after repeated failures. This demarcation problem is of interest to philosophers and should continue to be. The title of this article gets its name from the story Boudry uses as a metaphor for the demarcation problem. "In Norse mythology, the trickster god Loki once made a bet with the dwarfs, on the condition that, should he lose, the dwarfs would cut off his head. Sure enough, Loki lost his bet, and the dwarfs came to collect his precious head. But Loki protested that, while they had every right to take his head, the dwarfs should not touch any part of his neck. All the parties involved discussed the matter: some parts obviously belonged to the neck, and others were clearly part of Loki's head, but still other parts were disputable. Agreement was never reached, and Loki ended up keeping both head and neck. In argumentation theory, Loki's Wager is known as the unreasonable insistence that some term cannot be defined and therefore cannot be subject of discussion."

Kitzmiller v Dover Decision

In October 2004, the Dover Area School District of York County, Pennsylvania, changed its biology teaching curriculum to require that
intelligent design be presented as an alternative to evolution theory, and that Of Pandas and People, a textbook advocating intelligent design, was to be used as a reference book. Eleven parents of students in Dover, York County, Pennsylvania, near the city of York, sued the Dover Area School District over the school board requirement that a statement presenting intelligent design as "an explanation of the origin of life that differs from Darwin's view" was to be read aloud in ninth-grade science classes when evolution was taught. The plaintiffs (all parents of students enrolled in the Dover Area School District) successfully argued that intelligent design is a form of creationism, and that the school board policy violated the Establishment Clause of the First Amendment to the United States Constitution. The judge's decision sparked considerable response from both supporters and critics. Page 64 of the PDF, which can be found online at Aberdein.pbworks.com, discusses why the judge ruled that intelligent design is not science. The paragraph below is copied from the document and succinctly explains why/how intelligent design fails to be a science. "After a searching review of the record and applicable caselaw, we find that while ID arguments may be true, a proposition on which the Court takes no position, ID is not science. We find that ID fails on three different levels, any one of which is sufficient to preclude a determination that ID is science. They are: (1) ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation; (2) the argument of irreducible complexity, central to ID, employs the same flawed and illogical contrived dualism that doomed creation science in the 1980's; and (3) ID's negative attacks on evolution have been refuted by the scientific community. As we will discuss in more detail below, it is additionally important to note that ID has failed to gain acceptance in the scientific community, it has not generated peer-reviewed publications, nor has it been the subject of testing and research."

Conclusion

Throughout these articles, we've seen different ways of defining
demarcation principles. They all agree that there are demarcation principles; they just don't agree on what they actually consist of. Popper's view is that a scientific theory must be open to refutation – science must be falsifiable. Lakatos disagrees with Popper's views on falsifiability and replaces it with research programmes. Laudan, and others, have claimed that the demarcation problem is not of philosophical interest. Boudry refutes this claim and defines two different types of demarcation. The Kitzmiller decision is important because it shows us how a non-scientific source (the judge) defines science.

Questions to Keep in Mind

Is the demarcation problem of interest to philosophers of science? What kinds of interactions does demarcation have with philosophy, science, and the public? Why is defining science so difficult?

Induction

Main Questions

What is inductive inference? Can it provide a sound basis for scientific knowledge?

Introduction
Deductive Inference is an inference such that the conclusion must follow from the premises – the conclusion can be deduced from the premises. Here is an example. Premisses: All beans in this bag are white. This bean is from this bag. Conclusion: This bean is white. Inductive Inference, on the other hand, is ampliative. It takes the premises and amplifies this knowledge into a generalized conclusion. Here is an example. Premisses: These beans are from this bag. These beans are white. Conclusion: All beans from this bag are white. Scientific laws and theories are generalizations, which are ampliative. It's accepted by philosophers that some sort of non-deductive inference must be used, and induction is often chosen to fill this role. Traditionally, induction fills two roles: creative inference (the logic of discovery), which leads from evidence to the formulation of new theories, and confirmation (the logic of justification), which connects evidence to theories after they have been formulated. Induction as a logic of discovery has largely been abandoned. Most modern philosophers view induction as a logic of justification. Despite its universality in science, inductive inference is philosophically controversial.

Popper – The Problem of Induction

Lipton – Induction

Goodman – The New Riddle of Induction

Nelson Goodman discussed the viability of the Theory of Confirmation (that confirming evidence makes an inductive theory stronger). He proposed that current definitions exclude nothing (any given evidence could be used to support any given theory). He claims that in order to refine the theory, we must distinguish between lawlike statements (for which confirming evidence would be relevant) and accidental ones (for which confirming evidence would be irrelevant). Similarity to other confirmed and accepted hypotheses makes a statement more lawlike. Example: "All pieces of copper are conductive" is similar to "all pieces of gold are conductive" and "all pieces of the same substance have the same conductivity". However, this measure of lawlikeness is too permissive, since an example like "all things on my desk are conductive" is also a similar statement. One could instead define lawlike statements as having no spatial or temporal terms. For example, "All grass is green" covers all times and places, while "all things on my desk are conductive" is limited to my desk. However, if one considers logically equivalent statements, some lawlike statements have logical equivalents with temporal or spatial terms. For example, "All grass is green in London and elsewhere" contains spatial terms but is logically equivalent to "All grass is green". (This point follows from logical rules. If you're not familiar with truth tables and the AND/OR operators you might need a quick refresher. Think Boolean logic, computer science, and discrete math for solid grounding outside of a logic class.)
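To spell out the equivalence the parenthetical is pointing at (a standard first-order reconstruction, not something from the reading itself):

\[
\forall x\,(\text{Grass}(x) \rightarrow \text{Green}(x))
\;\equiv\;
\forall x\,\big((\text{Grass}(x) \wedge \text{London}(x)) \rightarrow \text{Green}(x)\big)
\;\wedge\;
\forall x\,\big((\text{Grass}(x) \wedge \neg\text{London}(x)) \rightarrow \text{Green}(x)\big)
\]

The right-hand side mentions a particular place, but since every object is either in London or not, the two sides are true in exactly the same situations. So "contains no spatial or temporal terms" cannot be what makes a statement lawlike.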
One cannot even define lawlikeness in terms of how qualitative a statement is, because qualitativeness is relative (see the Bleen/Grue emeralds example, which is complex and maybe not worth trying to memorize unless it made at least some sense the first time you read it; just focus on its implication). The conclusion here is that there is not yet a good enough definition of lawlike to have a precise theory of confirmation, meaning that we cannot say for sure whether confirming evidence adds to the strength of a theory. Goodman terms this the new riddle of induction.
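For reference, the standard statement of the grue predicate (the usual textbook formulation, not something spelled out in the summary above): for a fixed future time t,

\[
\text{Grue}(x) \;\leftrightarrow\; \big(\text{ExaminedBefore}(x,t) \wedge \text{Green}(x)\big) \;\vee\; \big(\neg\text{ExaminedBefore}(x,t) \wedge \text{Blue}(x)\big)
\]

Every emerald observed so far confirms "all emeralds are green" and "all emeralds are grue" equally well, yet the two hypotheses disagree about emeralds examined after t; that disagreement is the riddle.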
Conclusion

Induction is commonly used in scientific practice. But how it is used, and whether or not this is a valid approach, is what has philosophers all worked up. The only thing they can agree on is that induction is not deductively valid, which is implied by their different definitions. Induction is not well defined in logic, so these problems transfer over into philosophy of science. You can define induction as a subclass of arguments which either follow particular forms or normatively confer on their conclusions a relatively high degree of probability. Lipton showed that both strategies confront serious difficulties. Lipton examines five models of inductive inference and finds each one wanting as a description of inductive practice. Popper believes that induction can be neither justified nor vindicated, but that science remains rational because of its inherent falsificationism. Goodman produces the new riddle of induction: why are we justified in inductively inferring the green hypothesis (emerald example) rather than the grue hypothesis? He rejects explanations that appeal to a legitimate epistemological preference for qualitative predicates. Goodman believes that some predicates are projectable depending on their track record.

Questions to Keep in Mind

What is induction? In what ways is it used in scientific practice? What are some reasons that inductive inference is not justified?

Theory and Observation

Main Questions

Observations are often said to confirm or refute scientific theories. But is the distinction between theory and observation so straightforward? Can observation settle disputes between theories, or is the choice always underdetermined?

Introduction

Modern philosophy of science has been heavily influenced by the Duhem-Quine Thesis. This thesis has been used in various ways to reach conclusions about the limitations of empirical evidence and the rules of scientific method as constraints on our acceptance or rejection of scientific theories. Some argue from it that no scientific theory can ever be conclusively refuted. Likewise, others argue that because of it, we cannot accept any theory as objectively true – no matter how well the evidence fits.
Duhem – Physical Theory and Experiment

Pierre Duhem begins by talking about the difference between physics and physiology; physics is his main example throughout the article. The experimental testing of a theory is not the same in physics as it is in physiology. In physiology, it's necessary and possible to obliterate opinions (preconceived notions) and accept the results of an experiment. But in physics it's impossible to obliterate these opinions and notions. The apparatus and equipment used are themselves supported by abstract theory represented to be true, so it is necessary to rely on at least these preconceived notions. Making use of instruments requires reliance on theory. The theories of physics feed into theory in chemistry, physiology, and so on, and this reliance is an act of faith in some group of theories. Physics experiments can also never condemn an isolated hypothesis, but only a whole theoretical group; experiments implicitly recognize the accuracy of a whole group of theories. Experiments are divided into application and testing. In application we are not interested in testing accuracy but intend to draw on current theories; here we use instruments based on those same theories. In testing, we are interested in determining the accuracy of a theory. From the theory, we derive a theoretical prediction of an experimental fact, then set up an experiment to observe the prediction. If the predicted occurrence doesn't occur, the prediction is condemned. Failing to produce the predicted observation shows that there is at least one error, but not exactly what that precise error is. A crucial experiment is therefore impossible in physics. You can have two hypotheses which are arranged to predict very different results. Observation of a prediction should equal acceptance of one hypothesis and rejection of the other. But this doesn't mean the accepted theory is correct: there are infinitely many other theories (even ones we haven't discovered yet) that we should test it against. Since it's impossible to do this, it's impossible to have a crucial experiment in physics. Experimental contradiction does not have the power to change a physical hypothesis into an indisputable truth, the way a reductio ad absurdum does in mathematics.
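A standard way to put the holism point in symbols (my own schematic summary, not notation from Duhem): a physics experiment tests a hypothesis H only together with a bundle of auxiliary assumptions about instruments and background theory,

\[
(H \wedge A_1 \wedge A_2 \wedge \dots \wedge A_n) \rightarrow O
\]

so when the predicted observation O fails, all that logic gives us is

\[
\neg O \;\Rightarrow\; \neg(H \wedge A_1 \wedge \dots \wedge A_n),
\]

i.e., at least one member of the group is false, with no indication of which one. That is why Duhem says an experiment condemns a whole theoretical group rather than an isolated hypothesis.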
Gillies – The Duhem Thesis and the Quine Thesis

The Duhem Thesis states that you can only condemn a whole theoretical group, not an isolated hypothesis. Gillies uses the example of Newton's First Law of Motion: if the predicted observation statement turns out to be false, you can't tell whether the problem lies with the three laws of motion or with the law of gravity. Duhem then goes on to say that a crucial experiment is not possible in physics. A crucial experiment is one where you list all of the possible hypotheses available for a phenomenon and then eliminate all but one of these options. This led Duhem to his theory of good sense. Gillies claims that the theory of good sense is closer to modified falsificationism than to conventionalism. Some of the most well-known theories, like the straight-line principle of light, have been disproven; good sense helps us pick the best hypothesis. Gillies points out the irony that Duhem himself lacked good sense: he was always choosing the wrong physics hypotheses. Good sense cannot be a solution to theory choice, more of a starting point. The Quine Thesis is based on analytic (true based on the words) versus synthetic (needs empirical investigation) statements. Based on the same lines of argument as Duhem, Quine argues against the analytic/synthetic distinction. There are key differences between Quine and Duhem, though. Duhem is limited to a specific scope of science which excludes physiology, whereas Quine includes the whole body of knowledge. Duhem also limits the number of possible hypotheses to be tested together, but Quine believes the group can include all human knowledge. A major difference is that Quine believes no theory of good sense is needed – everything can be based on logic. Gillies combines the Duhem Thesis and the Quine Thesis to get the Duhem-Quine Thesis. This holistic thesis applies only to theoretical hypotheses, regardless of the area of study (this part follows from the Quine Thesis). The group of hypotheses being tested does not include all of human knowledge (this follows from Duhem).
Laudan – Demystifying Underdetermination

Conclusion

Many different readings of the Duhem-Quine Thesis have been circulated since Duhem's original paper. Duhem argued that individual physical theories and postulates cannot be tested in isolation. He was especially concerned with denying the existence of crucial experiments in physics. He also criticized inductivism. But he later goes on to connect falsification with his theory of good sense. Quine wrote his paper ignorant of the Duhem thesis, but he used similar premises to reach controversial conclusions about meaning, analyticity, and a priori knowledge. The combination of these two theses results in the Duhem-Quine Thesis which Gillies discusses. Gillies rejects the global character of Quine and criticizes Duhem for not extending his thesis beyond physics. A discussion on Laudan needs to be added here.

Questions to Keep in Mind

What is underdetermination? How does it work in scientific practice? Can we separate theories from our auxiliary hypotheses? Or from other theories?

Bayesian Approach to Confirmation

Main Questions

Dr. Aberdein did not provide any. But he did say he will probably separate the Bayesian approach from theory and observation on the exam, so I have done this as well.

Introduction

I will not reproduce a discussion of Bayesian statistics (the actual math) just because of how annoying the symbols would be to type out in Word. There's a simple introduction in the Kitcher book.
Bayesianism is a very large umbrella under which things fall. Essentially, it is the idea that the probability of something can be based on our prior probability of it and the probability of the evidence. This is often used in an attempt to explain confirmation in scientific practice.
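Since the math is skipped above, here is the standard form of Bayes' theorem for a hypothesis T and evidence E (textbook notation, not anything specific to Salmon's or Mayo's papers):

\[
P(T \mid E) \;=\; \frac{P(E \mid T)\,P(T)}{P(E)}
\;=\; \frac{P(E \mid T)\,P(T)}{P(E \mid T)\,P(T) + P(E \mid \neg T)\,P(\neg T)}
\]

The term P(E | not-T) is the "Bayesian catchall factor" that comes up below: the probability of the evidence given that some rival theory other than T is true, which is very hard to evaluate. Taking the ratio for two rival theories makes it drop out:

\[
\frac{P(T_1 \mid E)}{P(T_2 \mid E)} \;=\; \frac{P(E \mid T_1)\,P(T_1)}{P(E \mid T_2)\,P(T_2)}
\]

and in the special case where each theory entails the evidence, so that P(E | T1) = P(E | T2) = 1, the comparison reduces to the ratio of the priors.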
Salmon – Rationality and Objectivity in Science

Mayo – A Critique of Salmon's Bayesian Way

Mayo takes issue with Salmon's Bayesian approach to confirmation, his "Bayesian way" or comparative Bayesian approach. Kuhn had challenged the existence of an empirical logic for science. Salmon tried to "build a bridge" between Kuhn and logical empiricism using his comparative Bayesian approach. Salmon claims theory choice can be partly determined by the prior probabilities of the hypotheses. The Bayesian Way still appeals to people because it allows plenty of room for extra-scientific (sociological and idiosyncratic) factors while allowing at least some room for empirical evidence. Salmon believes, as Kuhn does, that theory choice is a choice between two rival theories. This leads him to form the ratio of probabilities P(T1 | E)/P(T2 | E), where T1 and T2 are the rival theories and E is the evidence. This ratio cancels out the Bayesian catchall factor, which is the probability of the evidence given that one of the other innumerably many theories is true (~T). In the special case where the evidence would have to happen if either T1 or T2 is true (T1 → E and T2 → E), we prefer theory one to theory two whenever theory one's prior probability is higher. Mayo's main problem with this is that you cannot apply Salmon's comparative approach until we have accumulated sufficient knowledge from non-Bayesian means to find the priors. Salmon demands the priors be reached objectively, as a frequency: he thinks the prior probability should be the frequency with which similar hypotheses have been successful. But is it even legitimate to base prior probabilities on the past successes of "similar" hypotheses? Mayo doesn't think so. So she replaces Salmon's definition with the frequency with which the observations that a hypothesis has predicted have been successfully observed. Then the prior probabilities are determined by the
reliability of the hypothesis. Mayo's approach has two main advantages. By assigning probabilities in this way, it accords with how scientists actually talk about confirmation. Also, reporting the quality of the tests performed provides a way of communicating the evidence that is intersubjectively testable. Mayo's approach is objective and frequentist. It is also not Bayesian: since she has redefined the frequencies, she has essentially removed all of the Bayesian qualities from Salmon's comparative Bayesian approach. By working from Salmon's own algorithm, Mayo is able to construct a bridge that is not Bayesian but provides a better link. She uses error statistics, which is a lot like what you learn in statistics about hypothesis testing (i.e. assigning confidence intervals and testing to see whether the data would reject a hypothesis by being far enough from the mean).
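As a reminder of the textbook version of that idea (the generic statistics-course test, not Mayo's own severity machinery): with n observations of a quantity assumed to have standard deviation sigma, the hypothesis H0 that the mean equals mu_0 is rejected at the 5% level when the sample mean falls far enough away:

\[
z \;=\; \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}, \qquad \text{reject } H_0 \text{ if } |z| > 1.96,
\]

equivalently, when mu_0 falls outside the 95% confidence interval \( \bar{x} \pm 1.96\,\sigma/\sqrt{n} \). The appeal for Mayo is that the error rates of tests like this are frequencies that can be reported and checked by anyone, which is what makes the evidence intersubjectively testable.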
Conclusion

It is important to remember that the mathematics involved in Bayesian statistics is correct; everyone agrees on that. Philosophers disagree on how to interpret these probabilities and whether or not they can be used to produce an empirical and logical backbone for scientific reasoning. Salmon thinks a bridge can be built to explain how scientists prefer one theory over another. Mayo plays on this idea but suggests an error-statistical approach to understanding these priors. This approach is closer to how scientists actually work, but it eliminates all Bayesianism from the Bayesian Way.

Questions to Keep in Mind

Why is Bayesianism an attractive way to describe confirmation? What is confirmation? Why can't we handle the Bayesian catchall factor?

Explanation

Main Questions

Science aims to be explanatory. But what is an explanation? The more explanatory theories are, the more likely they are to be accepted as true. Why? Can this practice be justified?

Introduction

Laws and theories are often used to explain lower-level theories or laws. We often use explanations; in fact, it's difficult to imagine science as something other than a process of explaining
natural phenomena. There are two important definitions to keep in mind as you read. The thing that is being explained is called the explanandum. The thing doing the explaining is called the explanans.

Hempel – Two Basic Types of Scientific Explanation

Kitcher – Explanatory Unification

Woodward –
The Manipulability Explanation of Causal Explanation Woodward explains that there are two different theories here. The ​
Manipulationist Theory​
X causes Y means manipulating/changing X would change Y while the ​
Regularity Theory​
states that X causes Y means all occurrences of X are followed by occurrences of Y. For example, consider the man that takes birth control (X) and does not become pregnant (Y). This works under the ​
Manipulationist Theory​
but makes no sense under the ​
Regularity Theory​
. Under the manipulationist theory, the position of a light switch is a cause of light being on, because we can change whether the light is on by manipulating the switch. A central point here is that the possibility of this manipulation (or intervention) is essential to causation and explanation. This theory is endorsed by many scientists but few philosophers. Causal vs Descriptive: ­
Causal: We are in a position to explain when we have info relevant to manipulating, controlling, changing nature. ­
Descriptive: Knowledge that does not provide info potentially relevant to manipulation. ­
The information relevant to manipulation needs to be understood modally or counterfactually (what would have happened if X were different). Example: Mass extinction of the dinosaurs. Manipulation is not practically possible in this example. So, we use a what­if­things­had­been­different question. A specific episode implies singular causal explanation. Example: Block sliding down an inclined plane. This is a case where a generic pattern induces a phenomenon (all blocks in this pattern will slide down). It is an explanation because it exhibits a pattern of counterfactual dependence between explanans and explanandum. We can also manipulate this using the what­if­thing­had­been­different question. When we do so, we can see causal relevance if changes in X are associated with changes in Y. Not all counterfactual dependence means causal relevance, though. He uses an example of a barometer in a storm to show this. How do we distinguish the counterfactual dependence then? Two ways: ­
Intervention: Counterfactuals that matter for causation and explanation are counterfactuals that describe how the values of one variable would change under the intervention​
that changes the value of another. We can see how this works by returning to the main examples. If you ​
intervene​
with the barometer, it doesn’t change the storm. But if you ​
intervene​
with the inclined plane, it changes the block’s actions. ­
Invariance: A generalization G is ​
invariant​
if G would continue to hold under some intervention that changes the value of X in such a way that, according to G the value of Y would change. The barometer case is a ​
noninvariant generalization​
since it incorrectly describes change via interaction. The inclined plane is an ​
invariant generalization​
since it correctly describes change via interaction. ­
If something is ​
invariant​
under some ​
interventions​
then it is potentially usable for manipulation. Explanations can differ in degree/depth. They must have some sort of practical point or payoff (this could connect well with some of the more technical stuff in explanatory unification). Woodward stresses that there should be continuity between everyday practices and systematic/sophisticated practices. The manipulationist view is nonreductive. The regularity theory is reductive ­ i.e. it tries to analyze the related concepts in family/circle solely by unrelated concepts that lie outside the family/circle. This is because there’s an assumption that nonreductive approaches will be circular but Woodward disagrees. Causal explanation should also satisfy plausible epistemological constraints. Explanatory info must be epistemically accessible info (recognized, surveyed, and appreciated). Woodward believes causal relationships are natural features of the world. They are “out there” in nature. He doesn’t concern himself very much with “lawfulness” like others do. Invariance​
suits his purposes much better. Weber, Van Bouwel, and De Vreese – ​
Scientific Explanation I’m not sure if he’ll cover this one because he only talked about it for like 20 minutes on Tuesday. He might though. So if someone can provide some notes, that would be awesome! Conclusion Most of the recent philosophical debate focused on the nature of scientific ​
explanation​
has focused on the ​
covering law​
model produced by Hempel. This ​
covering law​
model states that explanations are arguments that have at least one statement of empirical law in the premises. There are various problems with this model. Consider the examples of the man on birth control, the hexed salt, and the syphilitic mayor (found in commentary Curd and Cover). Kitcher claims that the goal of explanation is really to unify theories. The goal is to explain general laws rather than particular facts. Woodward provides a model which most closely accords with scientific practice. This model focuses less on ​
lawlikeness​
and more on ​
invariance​
. Questions to Keep in Mind What is the covering law model? What kind of problems does it have? Realism and Anti­Realism Main Questions Can science give us knowledge of an unobservable reality? If it does, then how does it do so? If it does not, then how does it achieve objectivity? Is science really progressive and cumulative? Is there a scientific method responsible for this? Introduction Empiricists ​
believe that the warrant for all of scientific claims rests on experience – ​
empirical evidence​
; they don’t believe in theoretical assertions that cannot be tested. Arguments for this can be supported by ​
underdetermination​
. ​
Realists​
believe that science is getting closer to the truth overall and that scientific claims, while tested by ​
empirical evidence​
, are able to discover hidden truths about the causes of events. When theories are ​
confirmed​
, the ​
realist​
uses this confirmation as proof that the entities underlying the theory are real. The ​
empiricist​
disagrees on this point and is skeptical of the existence of these unobservable entities. Laudan – ​
A Confutation of Convergent Realism Who did Laudan? Laudan is super important because he’s the one who introduces the ​
pessimistic induction​
that is legitimately one of the biggest arguments against realism. The ​
pessimistic induction​
is an attack on the inferences used in realism. He claims that the most “successful” past theories ended up being ​
non­referring​
, ie they didn’t ​
refer​
to actual entities or existing processes (according to current science). Saatsi – ​
On the Pessimistic Induction and Two Fallacies

Saatsi supports Laudan's pessimistic induction and defends it against attack in this paper. He draws a line between the successfulness of a theory and its approximate truth, and attempts to re-establish the dignity of the pessimistic induction against the no miracles argument, a realist argument which says the best explanation of the success of science is the approximate truth of its theories (rather than saying that science is magic). First, we need to separate two ideas: the successfulness of a theory and the truth of a theory. A theory is successful if it is accepted by the scientific community. Laudan's pessimistic induction works like this: (1) success is a reliable indicator of truth; (2) most current successful theories are true; (3) most past theories that were successful are false, since they differ significantly from current successful theories; (4) but many of these past theories were also successful; (5) therefore, the successfulness of a theory is not a reliable test for its truth. It's important to note that Laudan is not claiming that most current theories will be rejected, just that the history of science shows that success is not a good indicator of truth. Laudan's argument has been attacked for committing the turnover fallacy. But it would only be fallacious if we were inferring the chances of replacing currently successful theories from previously rejected theories. To further protect the pessimistic induction from attacks, Saatsi updates it to have this form: (1) success is not a reliable indicator of truth; (2) there are no other reliable indicators of truth; (3) most successful theories are false; (4) current successful scientific theories are a representative sample of successful theories from the entire history of science; (5) therefore, any current successful theory is probably false. This protects the pessimistic induction from being attacked for committing the false positives fallacy. In the end, Saatsi has shown that the two big fallacies realists are claiming against the pessimistic induction are incorrect.
Hacking – Experimentation and Scientific Realism

Oddie – Truthlikeness

Truthlikeness, or verisimilitude, is a philosophical concept that distinguishes between the relative and apparent truth and falsity of assertions and hypotheses. The problem of truthlikeness is the problem of articulating what it takes for one false theory to be closer to the truth than another false theory. Here, we will focus on the logical problem of giving a consistent and adequate account of truthlikeness using the content approach. Oddie gives an example of the problem using statements about the solar system. Saying there are 12 planets in our solar system is certainly closer to the truth than saying there are 80, but both statements are still false. Saying that the number of planets is less than or equal to nine is true, but not as close to the whole truth as saying there are 8 planets (sorry, Pluto). Popper introduced the content approach but realized that content alone is insufficient to characterize truthlikeness. Showing this involves an example. First, let a matter for investigation be completely captured by a language L. Then the set of all true sentences, called T, is a complete account of the truth; T is the target of investigation in L. F is the set of false sentences (those not in T), and, as a matter of logic, F is a perfectly good set of sentences. The consequences of any theory A will be divided between T and F. The intersection of A's consequences with T, called AT, is the truth content of A. Likewise, AF is the falsity content of A. Since every theory entails all the logical truths, these constitute a special set, at the center of T, which will be included in every theory, whether true or false. There are excellent diagrams in the actual article.
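In symbols (my own compact restatement of the set-up just described, writing Cn(A) for the set of logical consequences of A):

\[
A_T \;=\; Cn(A) \cap T, \qquad A_F \;=\; Cn(A) \cap F,
\]

so the truth content and falsity content of a theory are just its true and its false consequences. Popper's original proposal was, roughly, that a theory B is at least as close to the truth as A when \( A_T \subseteq B_T \) and \( B_F \subseteq A_F \); the problems described next are what happens when that definition is applied to false theories.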
This set-up has important consequences for truthlikeness. Firstly, since a falsehood has some false consequences, and no truth has any, it follows that no falsehood can be as close to the truth as a logical truth – the weakest of all truths. Second, it is impossible to add a true consequence to a false theory without thereby adding additional false consequences. So the account entails that no false theory is closer to the truth than any other. We could call this result the relative worthlessness of all falsehoods. It is tempting at this point to retreat to something like the comparison of truth contents alone. Call this the Simple Truth Content account. It deems a false proposition the closer to the truth the stronger it is. After the failure of Popper's attempts to capture the notion of truthlikeness, a number of variations on the content approach have been explored. Some stay within Popper's essentially syntactic paradigm, comparing classes of true and false sentences. Others make the switch to a more semantic paradigm, searching for a plausible theory of distance between the semantic contents of sentences, construing these semantic contents as classes of possibilities.

Conclusion

Realism
is a very attractive position. It would explain why science seems so right, because it would say that scientific theories are pushing at and discovering the existence of unobservable entities to which they ​
refer​
. But it is difficult to justify the ​
realist​
position. One of the biggest attacks against it is the ​
pessimistic induction​
which Laudan outlines and Saatsi defends against realist claims of fallacies. ​
Truthlikeness​
is also important to consider in the pursuit of true theories. We need to be able to compare the truth of theories in order to claim they’re building on less true theories to become closer to truth. Oddie details the ​
content approach​
and some problems with it. Questions to Keep in Mind What are the challenges to scientific realism? What would we have to give up in accepting anti­realism? How might strength of a claim determine truthlikeness amongst false theories? What importance does truthlikeness have in realism? Laws of Nature Main Questions Fundamental scientific principles are often called ‘laws of nature’. What does this mean? Are such ‘laws’ more than just accidental regularities? Introduction Laws of nature​
are universally common in science. Philosophers of science also see their importance and many include them as an essential part of what it means to be scientific. Remember the ​
covering law model​
? At the time, we largely ignored the definition of ​
laws​
that were required for scientific explanation. But this section focuses on that definition, which is not as easily accessible as their use. We looked at two important ways of understanding laws: the regularity approach​
and the ​
necessitarian approach​
. ​
Regularity approach​
says that laws describe the way things behave. The ​
necessitarian approach​
agrees with that plus laws tell us how things must behave. Both of these approaches are ​
realist​
though in their assumptions. Ayer – ​
What is a Law of Nature? Ayer takes the ​
regularity approach​
which traces back to Hume. Hume’s constant­conjunction​
theory of causation states that one event (type A events) caused another event (type B events), which means A events are always followed by B events. Hume denies any objective requirement between events A and B in virtue of which A produces or makes B occur. Hume denies that a necessity connection between causes and effects is logical. If it were logical, effects could be deduced from causes prior to experiences but our knowledge of causal relations is based on experience. Hume therefore believes that the source of nonlogical necessity is an imaginative fiction that originated from inferences and expectations which we then project onto nature. All of this results in his ​
regularity theory of laws
which claims that laws of nature are true universal generalizations whose object content is exhausted by what actually happens in the world. Ayer then addresses a number of problems related to this approach. The first is the problem of vacuous laws​
which means that no vacuously true generalizations should qualify as laws. Because of some logical manipulations about things that don’t exist, we arrive at the problem where statements about things that don’t exist are vacuously true. For example, all winged horses are spirited is true and all winged horses are tame is true. We don’t want that to be a law of nature though. The next problem is the ​
problem of noninstantial laws​
. The natural inclination is to handle noninstantial laws in terms of how objects of a certain kind would behave if there were such objects. Regularity theorists take statements of law to describe how actual objects behave, not how possible objects would behave if they existed. To reconcile this, the regularity theory suggests we distinguish
ultimate laws of nature​
from ​
derivative laws​
. An ultimate law​
implies an ​
instantial law​
where a ​
derivative​
one may not. Next is the ​
missing values problem for functional laws​
. ​
Functional laws ​
assert a functional relation between two or more variables in the form of a math equation. There is a problem making sense of these counterfactual conditionals while still regarding laws of descriptions of what actually happens in the real world. There is also the ​
problem of accidental generalizations​
. It’s difficult to distinguish between genuine laws and generalizations of facts and Hume’s simple version of the ​
regularity approach cannot make this distinction. This is where Ayer provides his own account: the epistemic regularity theory
. This states that laws are true, universal generalizations with some additional features. The difference for these laws lies in the attitude of the people who put them forward and the laws will depend on what kind of science you’re in. Dretske – ​
Laws of Nature Cartwright – ​
Do the Laws of Physics State the Facts? Conclusion As vital as laws are in science, and as important as they are to theories of scientific explanation, you’d think there would be a clear understanding of what they are. Most philosophers fall into two approaches: the ​
regularity
and the ​
necessitarian​
. Both approaches have different problems. Ayer creates the ​
epistemic regularity approach
and Dretske adapts the ​
necessitarian approach​
. Both approaches think that the generalization must be true to be a law. But Cartwright shows that most laws are false because they don’t describe what’s actually happening in the world. Questions to Keep in Mind What kinds of difficulties do we face when trying to define what a law of nature is? Are laws actually true? What kind of connection do they enforce between the cause/effect? Science and Values Main Questions What are the ethics of science? Should society restrict scientific research that questions its values? Do women do science differently than men? Introduction With all the talk of deduction and laws, it’s easy to forget that science is done by scientists who are people. And this science is done in context. For these reasons, it’s important to understand what kind of influence the context and values of the researcher, and the society, have on the science produced. Goodstein – ​
On Fact and Fraud Goodstein’s main point is that fraud in science is a sort of necessary evil born from the nature of modern science. He says that the problem with fraud is that it runs contrary to the concept in science that scientists can disagree about the interpretation of the same data, but the data should be able to be trusted as true. He goes on to state that the scientific knowledge base is self­correcting through repeated trials and falsification, but that injecting falsehood into the knowledge base is not the goal of fraud. The goal of fraud according to Goodstein is to bypass the scientific method to advance one’s scientific career for more exposure and influence within the reward system (detailed further below). He outlines three factors that are present in almost all cases of fraud (career pressure, thinking that they knew the answers already, and irreproducibility of experiments), but he says that these factors are too common to be predictive of fraud in any way. Goodstein then outlines 15 guidelines of good science, that scientists learn as a part of their “scientific apprenticeship,” that are supposed to prevent fraud. I will not outline all 15 here, they are in a convenient list form in the chapter available online. He attacks each of these guidelines separately, which I will also not outline in specifics. Overall, however, Goodstein argues that it is unreasonable to expect an actual practicing scientist to adhere to these guidelines, mainly due to the impossibility of being purely objective, and the nature of the scientific authority structure and reward system. Goodstein believes that the root of fraud lies in the motivations present in the reward system of modern science. He states that the intrinsic motivations of doing science tend to lose their luster if you have nothing to show for it, so ultimately scientists are motivated by the fame and immortality that can be achieved through advancement through the reward system, which involves attaining degrees and placements, publishing papers in prestigious journals, etc. The authority structure oversees the reward systems, and consists of gatekeepers that determine who advances in the reward system, which Goodstein labels as scientists who have dropped out of the race of the rewards system for a seat in the authority structure. He argues that while science is largely a meritocracy, but says that being in the right place at the right time, and already having influence in the scientific community can largely skew the results of who advances. Despite the fact that this system encourages fraud, Goodstein argues that it is an integral part of the way that modern science works. Longino – ​
Values and Objectivity Objectivity​
is the willingness to let our beliefs be determined by “the facts” or by some impartial and non­arbitrary criterion rather than by our wishes as to how things ought to be. Longino argues that the objectivity of science is secured by the social character of inquiry. Science is objective in two different senses. The first is that the view provided by science is an accurate description of the facts of the natural world. The second is that the view provided by science is achieved by the scientific method. To the ​
logical positivists,​
science appears to be free of subjective preference. But the critics say that subjectivity plays a minor role in theory development and theory choice. There is a distinction to be drawn here between the ​
context of discovery​
, which can have an origin in guesses or dreams, and the ​
context of justification
, which requires the hypothesis considered only in relation to observable consequences. We’ve seen this distinction before. Three aspects of the social character of science are the existence of scientific disciplines as “social enterprises”, that the initiation into scientific inquiry requires education, and that the sciences are also among a society’s activities and depend for their survival on that society’s valuing what they do. Two shifts of perspective are needed to see how ​
objectivity​
works in this social context: the idea of science as a practice and science being practiced by social groups rather than individuals. Peer review is one of the aspects of the social group practicing science. It is meant to ensure that, among other things, the authors have interpreted the data in a way that is free of their subjective preferences. Scientific knowledge is produced by a community and transcends the contributions of any individual or subcommunity. We can criticize scientific theories in a way that less objective ones, like mystical claims, could not be. We can do this with ​
evidential ​
or ​
conceptual criticism. ​
Evidential criticisms​
question the evidence, accuracy, and extent of the research and conceptual criticisms​
question the meta­physical parts like the conceptual soundness of the hypothesis, its consistency, and the relevance of the evidence. Objectivity​
requires a way to block the influence of subjective preference at the level of background beliefs. We then get objectivity by degrees where scientific communities will be objective to the degree that they satisfy four criteria: avenues for criticism, shared standards, community response, and equality of intellectual authority. Okruhlik – ​
Gender and the Biological Sciences Sokal – ​
What the Social Text affair does and does not prove Conclusion Because science is done in a social context by people who have values, the science is affected by these societies and values. Okruhlik points to androcentric bias in life sciences to show that scientists and their theories are implicitly affected by beliefs about gender. But science is still objective in many ways, as Longino points out. And this social context, even if it creates an environment that promotes fraud, is a necessary part of how science progresses according to Goodstein. Fraud and hoaxes, like Sokal’s, can affect how science is seen by the general public but how much does it affect actual science? Questions to Keep in Mind How can scientists stay objective? Can they ever truly be objective? Do they need to be? What is scientific fraud and how/why is it committed? Does it pose a very serious threat to science as a whole? What about to public opinion? Catastrophic Risk and the Simulation Argument Main Questions What are catastrophic risks? Might we be living in a computer simulation? How are these questions related? Introduction This section is all about Bostrom’s work in ​
existential risk​
and the ​
simulation argument​
. While not the same as the philosophy we’ve seen until now, it connects scientific advances with human existence to reach some seemingly startling conclusions. Close examination shows that his arguments are fairly reasonable, even if they sound weird at first. Bostrom – ​
Existential Risk Prevention as Global Priority

An existential risk is a risk that threatens the premature extinction of humans or the drastic destruction of our potential for desirable human development. Bostrom puts the total existential risk for this century at around 10-20%. The probability of a particular risk, P(X), may be very low based on assessments from some method A. But the probability that A itself is wrong can add a lot more to our overall probability that X may occur.
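One way to see that point (a standard total-probability decomposition, not a formula taken from Bostrom's paper): letting A stand for "the assessment method is sound",

\[
P(X) \;=\; P(X \mid A)\,P(A) \;+\; P(X \mid \neg A)\,P(\neg A),
\]

so even if P(X | A) is tiny, the second term puts a floor under P(X) that is roughly the chance the method is wrong times the chance of disaster given that it is wrong. For exotic risks assessed with untested methods, that second term can dominate.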
In order to assess the seriousness of a risk, we have to assign an evaluation system to the negative value of a particular possible loss scenario. He chooses a normative evaluation, which is "an ethically warranted assignment of value", because it's best for considering society rather than a single agent. He assumes that a life has some positive value, that the value does not depend on when it occurs or whether it already exists, and that the value of two similar lives is twice that of one life. Using that framework, he says a risk can be characterized using three variables: scope, severity, and probability. Using these variables, you can classify risks. An existential risk has at least crushing severity and at least pan-generational scope. Humanity is Earth-originating intelligent life, and technological maturity is the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved. Bostrom identifies four classes of existential risk. Human Extinction
: Humanity goes extinct before reaching technological maturity. ​
Permanent Stagnation​
: Humanity survives but never reaches technological maturity. Mature technology would have innumerable and, possibly unimaginable at this point, benefits for future humans. Flawed Realisation​
: Humanity reaches technological maturity in a dismally and irredeemably flawed way. So they reach maturity but only realize some of the values of maturity. ​
Subsequent Ruination​
: Humanity reaches technological maturity in a ‘good’ way but subsequent developments lead to ruin anyway. Bostrom says we don’t need to practically worry about this type. Just focus on reaching technological maturity. Then that will probably bring down the chances of subsequent ruination. Bostrom’s goal is to focus on long term gains. And even ethical and religious approaches would put existential risk as a major priority. In the end, Bostrom concludes that we should be looking for a trajectory that avoids existential catastrophes, a sustainable trajectory. But minimizing existential risk is not an easy problem. Research on existential risk is very minimal. It requires multidisciplinary approaches that break from usual scientific methodology. Theoretical foundations are shaky. Even though it’s a public good, people don’t really care very much. It would require a lot of cooperation. Existential risk requires a proactive approach. But there is hope. Existential risk study has only just been born, after the atomic bomb. Many existential risks seem to lie very far into the future. And people have started to become more aware of global impacts of human action. Opportunities for mitigation should be on the rise too. Bostrom – ​
Simulation Argument FAQ ​
&​
Are You Living in a Computer Simulation? Conclusion Between the cyborgs and the simulated humans, Bostrom’s work seems a lot like a sci­fi movie. But the claims he makes aren’t really all that crazy. As humans, we’d like humanity to continue for as long as possible and reach a technological maturity that benefits humanity. So it makes sense that we should come together to mitigate risks of that. Logically, at least one of the three simulation argument disjuncts must be true: either the human species is likely to go extinct before reaching posthuman stage, posthuman civilizations are very unlikely to run human simulations, or there’s a good chance that we are simulated humans. But this isn’t a very strong claim anyway. Even if you accept (3), you’re just accepting that the likelihood of us being simulated is high, not that it’s guaranteed. I still think that for Bostrom to promote existential risk mitigation, then he must think we’re not simulated because by similar calculations as his, we would find any reduction in existential risk would greatly increase the likelihood of the simulations breaking the computer. There’s a more complicated version of my argument, but I’m not going to type it out. Questions to Keep in Mind What exactly is an existential risk? How would mitigating existential risk influence our ability to reach “posthuman” stage? How does it affect the likelihood that we are a computer simulation? What kind of ethical implications does our being simulated have?