The Role of Case Study Research in
Political Science: Evidence for Causal
Claims
Sharon Crasnow*†
Political science research, particularly in international relations and comparative politics,
has increasingly become dominated by statistical and formal approaches. The promise of
these approaches shifted the methodological emphasis away from case study research. In
response, supporters of case study research argue that case studies provide evidence for
causal claims that is not available through statistical and formal research methods, and
many have advocated multimethod research. I propose a way of understanding the integration of multiple methodologies in which the causes sought in case studies are treated
as singular causation and contingent on a theoretical framework.
1. Introduction: Methodological Debates in Political Science. The stimulus for political scientists Gary King, Robert O. Keohane, and Sidney Verba’s (1994) methodological primer for qualitative research was a shift that
had taken place in the discipline in the prior decades. Political science research, particularly in the areas of international relations and comparative
politics, had increasingly become dominated by statistical and formal approaches, sometimes loosely—and not completely accurately—grouped together as “quantitative methods.” The promise of these approaches—that is,
formal methods, such as rational choice and game theory, and powerful statistical methods, such as multiple regression analysis fueled by the increasingly sophisticated use of statistical software packages—had shifted the methodological emphasis away from the traditional case study method of political science, a method that was predominantly qualitative.
*To contact the author, please write to: Norco College, 2001 Third Street, Norco, CA 92860;
e-mail: [email protected].
†I am grateful to the other participants in the symposium (Mary Morgan, Rachel Ankeny, and
Carlo Gabbani) for comments. Comments from participants at the “Reasoning with Cases in
the Social Sciences” workshop at the Center for Philosophy of Science at the University of
Pittsburgh in November 2011 also helped shape the article.
Philosophy of Science, 79 (December 2012) pp. 655–666. 0031-8248/2012/7905-0010$10.00
Copyright 2012 by the Philosophy of Science Association. All rights reserved.
One of the stated goals of King et al. (1994) was to revitalize and legitimate such qualitative research in political science through making it “more
rigorous.” The authors called for political science researchers to be more
self-conscious in their concept formation, research design, and data collection. They also offered specific recommendations for doing so, using quantitative methods as a template. The desire to make qualitative methods “more
rigorous” was praised and appreciated by many, but the specifics of King
et al.’s recommendations were challenged immediately and often, resulting
in a growing literature on qualitative and, later, multimethod research methodology in political science.1
1. A review article by Andrew Bennett and Colin Elman summarizes the literature and describes other innovations as a "renaissance in qualitative methods" (2006, 456). Among the activities they mention are the formation of a Qualitative Research Methods section of the American Political Science Association and an annual summer qualitative methods research workshop.
While many issues in this ongoing discussion are potentially philosophical, one of particular interest is the epistemological role of case study research—that is, how do individual case studies provide evidence for the
causal claims that political scientists hope to establish, and what sort of evidence do they provide? Both case studies and the leading statistical methodologies (e.g., multiple regression analysis) offer observational evidence
rather than experimental evidence, and so neither has the status of the “gold
standard” of randomized experimental studies. Although increasingly popular in political science—a trend transported from economics—experiments
are not always possible, for a variety of reasons. Therefore, observational evidence continues to play an important role in the field, and questions about
what that role is are at the center of methodological debates.
A consensus has formed that multimethod research is better than any
methodology used on its own; some combination of formal, quantitative,
and qualitative methodologies will give better results than any used alone.
However, this consensus does not indicate agreement about what methods
are to be multiplied, in what way, or for what purposes. In this article, I explore whether multiple sources of evidence provide more robust support for
causal conclusions drawn from observations because more evidence is better
or whether it is the variety of evidence that provides better support for the
causal conclusions that political scientists seek.
The defense of "multimethod" or "mixed method" approaches is thus ambiguous, and the ambiguity has several sources.2 For one thing, there are
many ways in which methods that have been traditionally identified as formal, quantitative, or qualitative overlap or intertwine. For example, although
nominal scales might traditionally be thought of as a feature of a qualitative
approach, nominal categories are often now incorporated in quantitative work
through the use of logit analysis, probit analysis, and dummy variables in regression studies (Collier and Elman 2008, 781). This blurring of categories
can go in the other direction as well, with statistical and mathematical tools
being used in the context of qualitative research.3
2. The terms "multimethod" and "mixed method" are used interchangeably in the literature. I will use "multimethod."
3. Collier and Elman mention qualitative comparative analysis as an example (2008, 781).
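To make the blurring concrete, the following sketch shows a nominal (qualitative) category entering a logit regression as a dummy variable, in the spirit of the techniques Collier and Elman mention. It is a minimal illustration in Python using the statsmodels library; the data are synthetic, and the variable names (regime_type, conflict, gdp_growth) are my own illustrative assumptions, not drawn from their discussion.

```python
# A minimal sketch (not from Collier and Elman): a nominal category
# entering a logit regression via dummy variables. Data and variable
# names are synthetic, illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    # Nominal (qualitative) category: regime type.
    "regime_type": rng.choice(["democracy", "autocracy"], size=n),
    "gdp_growth": rng.normal(2.0, 1.5, size=n),
})
# Synthetic binary outcome loosely dependent on both predictors.
logit_index = (-0.5 + 0.8 * (df["regime_type"] == "autocracy")
               - 0.3 * df["gdp_growth"])
probs = 1 / (1 + np.exp(-logit_index))
df["conflict"] = rng.binomial(1, probs.to_numpy())

# C(...) expands the nominal category into 0/1 dummy columns, so
# qualitative information enters the quantitative model directly.
model = smf.logit("conflict ~ C(regime_type) + gdp_growth", data=df).fit()
print(model.params)
```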
Given such ambiguity, Collier and Elman propose a broad characterization of qualitative research that I adopt: “Qualitative researchers routinely
rely on rich, dense information concerning specific cases” (2008, 781). The
“dense information” explored in case studies may be an entirely qualitative
“thick description,” or it may involve quantitative work. For example, a
case study of social welfare systems in Hungary might look for correlations
between income and some measure of benefit from the welfare system—
quantitative data, although used within a case.
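A minimal sketch of what such within-case quantitative work might look like; the income and benefit figures below are fabricated placeholders, not data about Hungary.

```python
# Quantitative work inside a single case: a correlation between
# income and a welfare-benefit measure. All numbers are fabricated
# placeholders, not Hungarian data.
import numpy as np

income = np.array([1200, 1500, 1800, 2100, 2500, 3200])  # hypothetical incomes
benefit = np.array([420, 390, 350, 300, 260, 180])       # hypothetical benefit levels

r = np.corrcoef(income, benefit)[0, 1]
print(f"within-case correlation: {r:.2f}")  # strongly negative here
```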
In what follows, I limit the discussion to one side of the debate—questions
of evidence for causality. One argument made for the indispensability of
case study research centers around the idea that it is only through the detailed “thick” descriptions it provides that we can uncover appropriate evidence for causal claims or at least the sorts of causal claims that are sought in
political science.
Mahoney and Goertz (2006) identify case studies as most closely associated with what is sometimes called a “causes-of-effects” approach in which
the particular cause of a particular effect is sought. This approach is contrasted to that which informs statistical arguments—an “effects-of-causes”
or average effects approach. “Methodologists working in the statistical tradition have seen clearly the difference between the causes-of-effects approach, in which the research goal is to explain particular outcomes, and the
effects-of-causes approach, in which the research goal is to estimate average
effects” (230–31). Bennett and Elman concur: “Mainstream qualitative
methodologists . . . fit more comfortably into a causes-of-effects template, in
particular, explaining the outcome of a particular case or a few cases. They do
not look for the net effect of a cause over a large number of cases, but rather
how causes interact in the context of a particular case or a few cases to produce an outcome” (2006, 458).
2. Causal Inference: Some Ideas from Political Methodology. Although
George and Bennett (2005) have probably made the most sustained argument that case studies are indeed crucial for establishing causes in some way
other than average effects, it is a view that is fairly widespread among political methodologists (e.g., Mahoney and Goertz 2006; Gerring 2007; Brady and Collier 2010). The argument is that since statistical analysis provides evidence for average effects rather than evidence that some particular
cause results in a particular effect, it lacks explanatory power for particular
cases. For example, there is a robust average effect that democracies do not
go to war with each other (the democratic peace hypothesis), but knowing
this does not really help us understand why it is that Britain and France did
not go to war in the autumn of 1898 at the time of the Fashoda Incident.4
Advocates of multimethod research have argued that in this particular case,
the thick description that case study research produces provides the means
to discover the causal mechanism through which war was averted and so provides an understanding that the statistical result cannot provide.
4. The example is discussed in George and Bennett (2005) and is based on Peterson (1995). The Fashoda Incident was a "near miss"—a case in which two democracies (Britain and France) came close to war over an outpost in the West Nile Valley in the autumn of 1898.
How does a case study provide such evidence? One answer is that the
evidence is found through process tracing. “Process tracing is perhaps the
tool of causal inference that first comes to mind when one thinks of qualitative methodology in political science” (Mahoney 2010, 123). George and
McKeown first used the term to mean “tracing the decision process by
which various initial conditions are translated into outcomes” (1985, 35).
Their focus was on giving an account of an agent’s decision-making processes. The term has come to be much more broadly used, however, describing not just a decision-making process but any causal process. Stephen
Van Evera offers the following description of process tracing in his methodology textbook: “In process tracing the investigator explores the chain of
events or the decision-making process by which initial case conditions are
translated into case outcomes. The cause-effect link that connects independent variable and outcome is unwrapped and divided into small steps; then
the investigator looks for observable evidence of each step” (1997, 64).
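Van Evera's description suggests a simple schematic: a hypothesized causal chain divided into small steps, each checked for observable evidence. The sketch below is only my rendering of that schema; the step descriptions and evidence flags are invented for illustration.

```python
# A schematic rendering (mine, not Van Evera's) of process tracing:
# the cause-effect link divided into small steps, with observable
# evidence sought for each. Step contents are invented placeholders.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    evidence_found: bool  # set after interviews, documents, archives

hypothesized_chain = [
    Step("initial condition: leaders perceive a crisis", True),
    Step("intermediate step: opposition publicly backs the government", True),
    Step("final step: the adversary judges the threat credible", False),
]

# The hypothesis survives only if each step leaves observable traces;
# any step lacking evidence marks where the account breaks down.
for step in hypothesized_chain:
    status = "evidence found" if step.evidence_found else "NO evidence"
    print(f"{status}: {step.description}")
```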
Brady and Collier have described the observable evidence elicited through
process tracing as different from the observational evidence that serves as
data in statistical arguments. We can see this in the following definition:
“process tracing. Examination of diagnostic pieces of evidence, commonly
evaluated in a temporal and/or explanatory sequence, with the goal of supporting or overturning alternative causal hypotheses. The diagnostic pieces
of evidence are called causal process observations (CPOs), and process
tracing provides criteria for evaluating their contributions to causal inference” (2010, 343). The “causal process observation” (CPO) terminology has
been taken up by a number of participants in the discussion. It is a focus of
several of the articles in Brady and Collier (2010) and figured prominently
in sessions revolving around these topics at the 2010 American Political
Science Association meeting in Washington, DC (e.g., Collier et al. 2010).
What are CPOs?
Causal-process observations (CPOs). Pieces of data that provide information about context, process, or mechanism and contribute distinctive leverage
in causal inference. They are contrasted with data-set observations (DSOs),
which correspond to the familiar rectangular data set of quantitative researchers. In quantitative research, the idea of an “observation” (as in a DSO) has
special status as a foundation for causal inference, and we deliberately incorporate this term in naming CPOs so as to place their contribution to causal
inference on an equivalent footing. Obviously, we do not thereby mean that
one directly observes causation. Rather, this involves inference, not direct
observation. . . . Process tracing is the overall research procedure, which
identifies specific CPOs that yield valuable leverage in causal assessment.
(Brady and Collier 2010, 318)
The caveat that one does not directly observe causation—meaning that one
does not directly observe some causal connection—is a clarification intended to respond to objections raised after CPOs were introduced in the first
edition of Brady and Collier’s book. The definition also makes it clear that
these observations are identified, in part, through the role they play in causal
inference. Although they do not necessarily establish causal hypotheses,
they do “contribute distinctive leverage.” This role is contrasted with that
played by data set observations, which are used as quantitative evidence.
The role that particular observations play is what makes them CPOs. They
are observations made while engaged in process tracing in order to evaluate a
causal hypothesis, and so they are observations that are salient to that hypothesis. “Process tracing consists of procedures for singling out specific
CPOs and evaluating their contribution to causal inference in a given analysis setting” (Collier, Brady, and Seawright 2010, 202). These distinctive
observations are generated through qualitative research, for example, interviews, document research, historical research. The process that is being
traced is, in the paradigm case, a hypothesized causal process, and the effect
is being traced back to its cause (the search for causes of effects). Case studies are typically the means through which CPOs are revealed, and researchers trace the causal process within the case.
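The contrast between DSOs and CPOs can be pictured as a contrast between two data shapes. The sketch below is an interpretive illustration rather than Brady and Collier's own formalism: DSOs as rows of the familiar rectangular data set, CPOs as heterogeneous pieces of diagnostic evidence, each tied to the hypothesis on which it gives leverage.

```python
# An interpretive illustration (not Brady and Collier's formalism):
# DSOs are interchangeable rows of a rectangular data set, while CPOs
# are heterogeneous diagnostic observations tied to a hypothesis.
from dataclasses import dataclass

# Data-set observations: every row has the same variables.
dsos = [
    {"dyad": "Britain-France", "year": 1898, "joint_democracy": 1, "war": 0},
    {"dyad": "Britain-Russia", "year": 1898, "joint_democracy": 0, "war": 0},
]

@dataclass
class CPO:
    source: str    # interview, document, archive, and so on
    content: str   # what was observed about context, process, or mechanism
    bears_on: str  # the causal hypothesis on which it gives leverage

cpos = [
    CPO(source="parliamentary record",
        content="opposition leaders affirmed the government's commitment",
        bears_on="transparency makes democratic threats credible"),
]

print(f"{len(dsos)} rectangular rows vs. {len(cpos)} diagnostic observation(s)")
```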
Brady offers the following example of the use of CPOs to overthrow a
causal hypothesis seemingly supported through evidence for an average effect. I summarize his example. On November 14, 2000, the Philadelphia Inquirer published an opinion piece by the economist John Lott in which he
argued that the networks’ early announcement of Gore as the winner in Florida cost Bush 10,000 votes.5 Lott’s conclusion was based primarily on a form
of regression analysis, using turnout data from all 67 Florida counties for
four presidential elections (1988, 1992, 1996, 2000). He estimated a time-series cross-sectional regression with fixed county and time effects and included a dummy variable for the 10 panhandle counties. This research design is intended to compare the "treatment" case (the panhandle, where the
polls were still open when the election was called) with those counties that
did not receive the treatment (the other counties of Florida that were in the
eastern time zone), controlling for differences that were reflected in the data
from the previous elections. In this way, Lott arrived at an average effect,
which he used to draw his conclusion that Bush lost 10,000 votes.
5. Lott published the full support for his idea in Public Choice (2005).
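A minimal sketch of the kind of design just described: turnout regressed on county and year fixed effects plus a dummy marking the treated (panhandle, 2000) observations. The data frame is synthetic and carries no built-in treatment effect, so this illustrates the specification rather than reproducing Lott's data or estimates.

```python
# A sketch of the specification described in the text: turnout on
# county and year fixed effects plus a treatment dummy for the
# panhandle in 2000. Synthetic data with no built-in effect, so the
# estimate should hover near zero; this is not Lott's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
counties = [f"county_{i:02d}" for i in range(67)]
panhandle = set(counties[:10])  # stand-in for the 10 panhandle counties

rows = []
for county in counties:
    county_effect = rng.normal(0, 2)  # fixed county-level difference
    for year in (1988, 1992, 1996, 2000):
        rows.append({
            "county": county,
            "year": year,
            "treated": int(county in panhandle and year == 2000),
            "turnout": 55 + county_effect + 0.4 * ((year - 1988) // 4)
                       + rng.normal(0, 1),
        })
df = pd.DataFrame(rows)

# The fixed effects enter as categorical dummies; the coefficient on
# `treated` is the estimated average effect of the early call.
fit = smf.ols("turnout ~ C(county) + C(year) + treated", data=df).fit()
print(f"estimated effect of the early call: {fit.params['treated']:.2f}")
```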
Brady challenges Lott’s conclusion, noting that “a researcher accustomed
to the exclusive use of data-set observations might stop at this point, convinced that an adequate inference had been made. However, researchers oriented toward the use of causal-process observations would ask whether the
result makes any sense” (2010, 239). Brady proposed answering the question of whether Lott’s story “makes any sense” through examining the case
and looking to see whether the causal processes that should be present if the
causal hypothesis is correct are indeed present. Brady presents the following
CPOs to make that determination:
• Information from the networks about precisely when the networks
called the election for Gore (10 minutes before the closing of the polls
in the panhandle)
• Census data from 1996 on time of voting showing that only about a
twelfth of Florida voters voted in the last hour before the polls closed
in that election
• Information on the voting patterns of absentee voters
• Interviews with Florida election officials indicating that voters do not
come to the polls uniformly throughout the day
• Viewer and listener information from media sources that allowed
Brady to estimate how many potential voters would have been aware
of the networks’ announcement.
These observations all challenge the idea that the early call caused Bush to
lose votes by providing evidence that causal processes connecting the early
call with diminished voting were not present. In this example, Brady argues
that Lott’s hypothesis is tested and disconfirmed through CPOs that are discovered in the thick descriptive work that occurs in a case study. The average
effects (the turnout data from previous elections that Lott used) are projected
and so thought to hold in this particular circumstance, but the process tracing
from effect to cause does not bear out the hypothesis. According to Brady, to
understand what was really going on we need to seek the causes of effects
and not the effects of causes.
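Brady's reasoning can be followed with rough arithmetic built from the pieces of evidence listed above: the call came roughly 10 minutes before the polls closed, and census data indicate that about a twelfth of voters vote in the last hour. The turnout figure below is a hypothetical placeholder, so the output illustrates the logic rather than Brady's published numbers.

```python
# Rough arithmetic following Brady's logic, using only facts cited in
# the text. The panhandle turnout figure is a hypothetical placeholder,
# so the result illustrates the reasoning, not Brady's published numbers.
panhandle_turnout = 360_000   # hypothetical total voters (assumption)
last_hour_share = 1 / 12      # census data: share who vote in the last hour
call_window = 10 / 60         # the call came about 10 minutes before closing

# A generous upper bound: every voter still to come in those ten
# minutes hears the call and stays home.
at_risk = panhandle_turnout * last_hour_share * call_window
print(f"upper bound on votes plausibly affected: {at_risk:,.0f}")
# With these assumptions, about 5,000 voters were even at risk, before
# asking who actually heard the call or how they split between parties,
# which falls far short of sustaining a 10,000-vote loss for Bush.
```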
3. Philosophical Accounts of Causality: Effects of Causes and Causes of
Effects. The distinction between causes of effects and effects of causes has
been framed above in terms of a distinction between causation of particular
events and a more general account of causation (average effects). Philosophers who have worked on the problem of causality might recognize a similarity to the distinction made between singular causation (“actual causation”
or “token causation”) and general causation (“type causation”). Is the distinction between causes of effects and effects of causes a version of the distinction between token causation and type causation?
The answer is not immediately obvious, in part, because the philosophical
literature on token and type causation does not often converge. As Woodward points out, “the philosophy of science and structural equation literature
has focused almost exclusively on type-level causal claims. . . . By contrast,
philosophers working in the tradition inaugurated by David Lewis have focused exclusively on token causation” (2003, 74). It is also not immediately
obvious what the connection between type claims and token claims is, given
that there are cases that seem as though they should be covered under a general causal account but are not. For instance, Woodward uses the example of
someone who is a smoker but whose lung cancer is due to exposure to asbestos. Although smoking causes lung cancer and this cancer victim is a smoker,
his cancer is not caused by smoking.
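Woodward's smoker/asbestos case can be rendered in the interventionist idiom that he and Pearl share. The toy structural model below is my illustration of the point, not a model from either book: the type-level claim (smoking causes cancer) can be true of the mechanism while the token-level counterfactuals show that this patient's cancer does not depend on his smoking.

```python
# A toy structural model (my rendering, not Woodward's or Pearl's) of
# the smoker whose cancer is due to asbestos. The boolean argument
# smoking_pathway records whether the smoking route to cancer actually
# operated in this patient.

def cancer(smoking: bool, asbestos: bool, smoking_pathway: bool) -> bool:
    # Cancer arises from asbestos exposure, or from smoking when the
    # smoking pathway is biologically operative.
    return asbestos or (smoking and smoking_pathway)

# The actual token case: a smoker exposed to asbestos, whose smoking
# pathway did not operate.
actual = cancer(smoking=True, asbestos=True, smoking_pathway=False)
# Interventions in the Woodward/Pearl sense: do(smoking=False) and
# do(asbestos=False), holding the rest of the scenario fixed.
without_smoking = cancer(smoking=False, asbestos=True, smoking_pathway=False)
without_asbestos = cancer(smoking=True, asbestos=False, smoking_pathway=False)

print(actual, without_smoking, without_asbestos)  # True True False
# Cancer persists without smoking but vanishes without asbestos, so
# asbestos, not smoking, is the token cause here, even though the
# type-level claim that smoking causes cancer remains true.
```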
Both James Woodward (2003) and Judea Pearl (2009) have claimed that a
virtue of each of their accounts is that they can accommodate singular causation. Pearl argues that his structural account “treats type and token claims
as instances of the same species, differing only in the details of the scenario-specific information that is brought to bear on the question" (2009, 310).
Woodward offers an account of token causation that he argues captures at
least “an interesting range of cases,” even if it is not comprehensive (2003,
75). I will not focus on the details of these different accounts. The connection
that I want to make to the use of cases in political science does not require any
commitment to the specifics of either Pearl’s or Woodward’s type-level account of explanation as the correct account, although it does presuppose that
these accounts are correct for at least some class of causes.6 What I take away
from how Pearl and Woodward have embedded singular causation in their
accounts of general causation are some ideas about how the details of an actual case matter for singular causal explanation. Specifically, I examine how
reasoning from cases works with general causal hypotheses.
6. Specifically, in this discussion I am neutral about the question of causal pluralism of the sort that Nancy Cartwright advocates (2007).
As Woodward puts it, “questions about type causation abstract away from
information about the actual values of variables in any particular actual situation, and about what would happen under interventions in such situations,
given those actual values. By contrast, token causation has to do precisely
with such matters" (2003, 76). Determining what would happen under interventions involves understanding the patterns of counterfactual dependence,
which in turn requires type causal understanding or, minimally, type causal
hypotheses. While the details of the accounts differ, Pearl makes a similar
point about the relationship between type and token causation. “We have argued that (a) it is the structural rather than circumstantial contingencies that
convey the true meaning of causal claims and (b) these structural contingencies should therefore serve as the basis for causal explanation” (2009, 328).
Joining these approaches to singular causation with the idea that reasoning from cases in political science involves seeking evidence relevant to general causal claims already on the table offers a way of understanding the potential value of multimethod research in political science. To trace the causal
process, the hypothesized cause and the effect must first be identified. A
causal structure, mechanism, or pattern of counterfactual dependency will do
this. CPOs are thus observations of actual values that are relevant given the
mechanism under consideration. The uncovering of these pieces of “diagnostic information” provides evidence for or against the operation of the hypothesized mechanism. But if something like the account that Pearl and
Woodward give of the type and token causality relationship is right, finding
CPOs depends (in some not yet specified sense) on background causal
claims (hypotheses), against which the thick description that cases can provide gives evidence for causality. How might this work in multimethod political science research?
4. Putting Causes of Effects and Effects of Causes Together in Political
Science. A review of some examples that political methodologists have offered as illustrations of reasoning from cases supports the idea of the analysis
of causes of effects (singular causes) in the previous section. Andrew Bennett (2010) analyzes Schultz’s use of process tracing to empirically test three
alternative hypotheses for the peaceful resolution of the Fashoda Incident.
The Fashoda Incident is an example of a particular case that appears to be
relevant to the democratic peace hypothesis—the hypothesis that democracies do not go to war against each other. The hypothesis is considered one of
the best supported quantitative results in political science research in the past
25 years (Russett 1993; Gleditsch 1995).7 While the empirical support for the
hypothesis is good, there are competing theoretical (causal) accounts. The
Fashoda Incident is a case in which two democracies were close to going to
war but did not, and it suggests an opportunity for testing out elements of
competing theoretical explanations of the democratic peace. According to
Bennett, Schultz uses the detailed description of this case to provide empirical tests of competing explanatory hypotheses.
7. For example, Russett (1993) begins with, "Scholars and leaders now commonly say 'Democracies almost never fight each other' " (3). The first two chapters of his book provide a good summary of the democratic peace literature.
Briefly, the methods that Bennett attributes to Schultz are those outlined
in Van Evera’s methodology textbook (1997). Hoop tests ( jumping through
the hoop to be considered) indicate whether a hypothesis is still worthy of
consideration given the evidence and so can eliminate a hypothesis but not
provide direct evidence for it. Hoop tests provide a necessary but not sufficient criterion for accepting the hypothesis. Smoking gun tests (the murderer
is holding the smoking gun, which strongly suggests he is the murderer, but
even if he was not found to be holding the gun, he might still be the murderer)
provide strong support for a hypothesis, but if the hypothesis does not pass
the test, that would not be grounds to eliminate it. Straw-in-the-wind tests provide useful information but are not decisive (which way is the wind blowing?) and offer neither necessary nor sufficient criteria. Doubly decisive tests
provide both necessary and sufficient criteria for the hypothesis—confirming one hypothesis while eliminating others. The description of the tests in
terms of necessary and sufficient conditions strongly suggests that the causal
theory that undergirds this analysis is something like Mackie’s INUS account. I will return to this suggestion below.
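Van Evera's four tests differ only in whether passing is necessary and whether it is sufficient for sustaining a hypothesis, which invites a two-by-two encoding. The sketch below is my rendering; the classification itself comes straight from the descriptions above.

```python
# Van Evera's four tests as a two-by-two on whether passing is
# necessary and whether it is sufficient for the hypothesis
# (classification from the text; the encoding is mine).
TESTS = {
    # name: (passing is necessary, passing is sufficient)
    "hoop": (True, False),
    "smoking gun": (False, True),
    "straw in the wind": (False, False),
    "doubly decisive": (True, True),
}

def verdict(test: str, passed: bool) -> str:
    necessary, sufficient = TESTS[test]
    if not passed and necessary:
        return "hypothesis eliminated"           # e.g., failing a hoop test
    if passed and sufficient:
        return "hypothesis strongly supported"   # e.g., passing a smoking gun
    return "suggestive only; hypothesis survives"

print(verdict("hoop", passed=False))            # hypothesis eliminated
print(verdict("smoking gun", passed=False))     # suggestive only; hypothesis survives
print(verdict("doubly decisive", passed=True))  # hypothesis strongly supported
```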
Bennett argues that Schultz (2001) uses pieces of diagnostic evidence
(primarily qualitative) to test three hypotheses.8 For example, Bennett analyzes Schultz’s argument against a neorealist hypothesis (Layne 1994) that
France backed down in the face of Britain’s stronger military force. According to Bennett, Schultz shows that Layne’s hypothesis does not survive a
hoop test since the hypothesis cannot explain a number of features of the
case: that the crisis happened in the first place, that it lasted 2 months, and
that it came so close to war since Britain’s superior military strength was obvious from the outset.9 Schultz argues for his own hypothesis—that the
transparency of democratic institutions makes it difficult for leaders to bluff
in circumstances where domestic opposition challenges the government’s
threat to use force, and this same transparency makes the threats of democracies more credible if the public and the opposition are supporting the government. Schultz points to the British opposition leaders' resounding affirmation of the British government's commitment to take control of the region,
whereas similar domestic support for the French position was lacking. The
French ultimately withdrew and received no concessions from the British.
The specifics of this case thus provide a smoking gun test of Schultz’s hypothesis (Bennett 2010, 211–12).
8. Although Bennett prefers not to use the CPO terminology, those who do use that
language would surely describe this evidence as CPOs.
9. Schultz does not himself refer to what he is doing as testing, nor does he use Van
Evera’s terminology.
As I noted above, on the face of it these empirical tests would seem to be
more closely tied to something like an INUS account of singular causation—
an insufficient but nonredundant part of an unnecessary but sufficient condition (Mackie 1974). Such philosophical accounts are known to suffer from a
variety of problems, but primarily the difficulty seems to be that the logical
apparatus of necessary and sufficient conditions is inadequate for capturing
our intuitions about causal relations. Pearl (2009, 314) cites Kim (1971) on
this. Mackie himself recognized this and added further conditions (1974).10
For the current discussion, this is not an issue that we need to pursue. Tests of
necessary and sufficient conditions of the sort that Van Evera describes and
Bennett makes use of in the Fashoda example are not meant to establish causality but are used to test causal hypotheses that are already on the table. Process tracing aims at giving a singular causal account of the event but can only
do so against the backdrop of a general causal account.
10. This is Nancy Cartwright's (2007, 206) take on Mackie.
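Mackie's INUS idea can be made concrete with his well-known short-circuit example: a short circuit is an insufficient but non-redundant part of a condition set (short circuit plus flammable material) that is itself unnecessary but sufficient for the fire. The example is Mackie's; the executable check below is my sketch.

```python
# Mackie's short-circuit example rendered as a check of the four INUS
# properties. The example is Mackie's; this encoding is only a sketch.

def fire(short_circuit: bool, flammable_material: bool, arson: bool) -> bool:
    # Two sufficient condition sets for the fire:
    #   {short circuit AND flammable material}   or   {arson}
    return (short_circuit and flammable_material) or arson

# Insufficient: the short circuit alone does not produce the fire.
assert not fire(True, False, False)
# Non-redundant: drop it from its condition set and sufficiency is lost.
assert not fire(False, True, False)
# Unnecessary: the condition set it belongs to is not required,
# because arson alone also suffices.
assert fire(False, False, True)
# Sufficient: the complete condition set does produce the fire.
assert fire(True, True, False)

print("the short circuit is an INUS condition for the fire")
```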
5. Conclusions about Multimethod Research in Political Science. The
claim that multimethod research is better seems ambiguous unless we can
give an account of the role that evidence produced through these different
methodologies plays. At first blush, it might seem that it is simply a matter
of some sort of triangulation. Different methods produce different evidence,
and the evidence can be put together in such a way as to converge on a
causal explanation. Gerring (2004), for example, urges a plurality of methodologies in order to achieve a kind of triangulation, but what he seems to
mean is that pursuing multiple methods increases the opportunity for evidence to be generated to both test and support alternative hypotheses. In other
words, he thinks the issue is a matter of producing more evidence. He argues
that case studies can generate information that might be missing where statistical analysis or natural experiments have been the primary mode of investigation, and vice versa, in areas where there have been multiple case studies,
another case study may be less likely to yield novel evidence. Multiple methods create multiple opportunities.
While this may indeed be one way in which multimethod research is valuable, it does not seem to be what is going on within the cases that I have discussed here. Evidence of average causes and evidence for singular causation
are simply not directed at the same end.11
11. See Morgan (2012) for different ways in which evidence might be integrated to support different kinds of hypotheses.
I began by taking political methodologists who argue for the usefulness of
a case study at face value when they say they are using a different approach—
one that seeks causes of effects rather than effects of causes. If we think of
this distinction as a version of the distinction between singular causation and
general causation, recent philosophical accounts of singular causation give
us some insight. Political methodologists make the claim that cases can tell
us something about causality that quantitative methods cannot. I have argued
that to do so the information that the case gives us has to be evaluated against
a causal framework that we bring to the case for such a purpose. Specifically,
both Pearl and Woodward have accounts that integrate an understanding of
singular causation into their general accounts of causation, highlighting the
way in which an understanding of token causes depends on background
causal models or, more generally, on theory. In the examples that I have discussed, I have emphasized how the specific pieces of evidence produced
through process tracing are useful as evidence for singular causation (causes
of effects) within the context of testing a theory, but there does not seem to be
any reason to think that they support average effects (effects of causes).
REFERENCES
Bennett, Andrew. 2010. “Process Tracing and Causal Inference.” In Brady and Collier 2010, 207–20.
Bennett, Andrew, and Colin Elman. 2006. "Qualitative Research: Recent Developments in Case
Study Methods.” Annual Review of Political Science 9:455–76.
Brady, Henry. 2010. “Data-Set Observations versus Causal-Process Observations: The 2000 U.S.
Presidential Election.” In Brady and Collier 2010, 237–42.
Brady, Henry E., and David Collier, eds. 2010. Rethinking Social Inquiry: Diverse Tools, Shared
Standards. 2nd ed. Lanham, MD: Rowman & Littlefield.
Cartwright, Nancy. 2007. Hunting Causes and Using Them. New York: Cambridge University Press.
Collier, David, Christopher H. Achen, Henry E. Brady, Thad Dunning, Colin Elman, Diana Kapiszewski, and Philip A. Schrodt. 2010. “A Sea Change in Political Science Methodology?” Program presented at the American Political Science Association annual meeting, Washington,
DC, September 4. https://www.apsanet.org/content_72134.cfm?navID=193.
Collier, David, Henry E. Brady, and Jason Seawright. 2010. “Introduction to Part II.” In Brady and
Collier 2010, 201–4.
Collier, David, and Colin Elman. 2008. “Qualitative and Multimethod Research: Organizations,
Publications, and Reflections on Integration." In The Oxford Handbook of Political Methodology, ed. Janet M. Box-Steffensmeier, Henry Brady, and David Collier, 779–95. Oxford: Oxford University Press.
George, Alexander L., and Andrew Bennett. 2005. Case Studies and Theory Development in the Social Sciences. Cambridge, MA: MIT Press.
George, Alexander L., and Timothy J. McKeown. 1985. “Case Studies and Theories of Organizational Decision Making.” In Advances in Information Processing in Organizations, vol. 2, ed.
Robert F. Coulam and Richard A. Smith, 21–58. Greenwich, CT: JAI.
Gerring, John. 2004. “What Is a Case Study and What Is It Good For?” American Political Science
Review 98 (2): 341–54.
———. 2007. Case Study Research: Principles and Practice. Cambridge: Cambridge University
Press.
Gleditsch, Nils Petter. 1995. "Geography, Democracy, and Peace." International Interactions
20:297–323.
Kim, Jaegwon. 1971. “Causes and Events: Mackie on Causation.” Journal of Philosophy 68 (14):
426–41.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
Layne, Christopher. 1994. "Kant or Cant: The Myth of the Democratic Peace." International Security 19 (2): 5–49.
Lott, John R., Jr. 2005. “The Impact of Early Media Election Calls on Republican Voting Rates in
Florida’s Western Panhandle Counties in 2000.” Public Choice 123:349–61.
Mackie, John L. 1974. The Cement of the Universe: A Study of Causation. Oxford: Clarendon.
Mahoney, James. 2010. “After KKV: The New Methodology of Qualitative Research.” World Politics 62 (1): 120–47.
Mahoney, James, and Gary Goertz. 2006. “A Tale of Two Cultures: Contrasting Quantitative and
Qualitative Research.” Political Analysis 14:227–49.
Morgan, Mary. 2012. “Case Studies: One Observation or Many? Justification or Discovery?” Philosophy of Science, in this issue.
Pearl, Judea. 2009. Causality: Models, Reasoning, and Inference. 2nd ed. New York: Cambridge
University Press.
Peterson, Susan. 1995. “How Democracies Differ: Public Opinion, State Structure, and the Lessons
of the Fashoda Crisis.” Security Studies 5 (1): 3–37.
Russett, Bruce. 1993. Grasping the Democratic Peace: Principles for a Post–Cold War World. Princeton, NJ: Princeton University Press.
Schultz, Kenneth. 2001. Democracy and Coercive Diplomacy. New York: Cambridge University
Press.
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca, NY: Cornell
University Press.
Woodward, James. 2003. Making Things Happen: A Theory of Causal Explanation. New York: Oxford University Press.