Paper for the ECPR General Conference 2015
Teaching Experimental Political Science – Reloaded
Ulrich Hamenstädt, University of Muenster, [email protected]
Abstract
For many decades, experimental methods in political science played the role of a “Sleeping Beauty”. Browsing through the top journals of our discipline shows that this has changed dramatically. In recent years there has thus been a growing demand for teaching experimental methods within the scope of university courses.
In 2012, I wrote an article titled “Teaching Experimental Political Science” for the journal European Political Science. In contrast to that article, I would now argue that methods in the social sciences should be taught on the basis of students’ projects. This paper therefore updates the earlier one and presents alternative teaching concepts. It argues that a course on experiments in political science should give a general introduction to the topic, present design examples, discuss specific problems, and let the students work on small group projects.
Introduction
For decades, experiments were not part of the methodological toolbox of political scientists. One of the most famous quotes in this context comes from an early issue of the American Political Science Review, defining the discipline as a non-experimental science (Lowell, 1910; quoted in Druckman et al, 2011). However, this has changed dramatically since the mid-1990s (Druckman et al, 2006; Morton and Williams, 2010). There are different reasons for this change (Hamenstädt, 2012b): technical developments, an improved cost-benefit ratio, new research questions raised within the discipline, etc. These changes have also led to a growing demand for teaching experimental
methods within the scope of university courses. Experiments can also be seen as a gold standard for deductive, hypothesis-testing research designs – teaching experimental political science is therefore a salient way to introduce research designs to students. Getting research started with a precise question, a testable hypothesis, and an overview of what will roughly happen during the research process – in short, writing a research proposal for one’s own project – is one of the hardest things to do for undergraduate students (and often for researchers, too). Yet students have to get used to thinking systematically about their research, about how they convert an idea into a research proposal. This is what we, as lecturers, have to teach them, and the best way to do it is to let them try it under guidance, giving constant (personal) feedback, assigning project work in small groups, and encouraging (peer) feedback on their projects (Blair et al, 2013a; Blair et al, 2013b).
The importance of teaching experimental designs to undergraduate students was already discussed in an article from 2011/12 in European Political Science (Hamenstädt, 2012a). However, that article presented a course that was (implicitly) rooted in the idea of explaining a method to students and letting them repeat or reproduce the content of selected articles and book chapters about experiments. The course design described in the old article was thus marked by a “disassociation between research in theory […] and research in practice” (Ryan et al, 2013: 85). In contrast to the old article, this one argues (1.) that, notably for methods courses, it is important for students “to get their hands dirty” – meaning that they should work on a small project throughout, or at least by the end of, the course. (2.) It is also of vital importance for an enriching course to work with adequate examples. On the one hand, statistics textbooks often use (extremely) simplified data sets, designed for the use of unambiguous statistical models. On the other hand, methods textbooks – like the classic textbook on experiments by Shadish, Cook, and Campbell (2002) – work with examples that are not from the field of political science. It is surely easier for most people to understand an experimental design through examples from medicine, and it might be good to discuss difficult terms like ‘validity’ before talking about experimental designs at all. In contrast, this paper argues that it is best to provide students with examples from their own discipline and to discuss those – experimental political science is very much an interdisciplinary approach and therefore confusing enough for many students, so we should be careful about challenging them with “transfer tasks”; by transfer tasks I mean, for example, discussing the methodology of medical studies and transferring it to political science. In addition, it is my impression that it is much easier to discuss challenging methodological points with the students when you can draw on specific examples dealt with in the course of the seminar. The paper therefore suggests starting with examples (from political science) before presenting a methodological problem to the whole class. The paper will thus work out the foundation for these two key arguments and, in doing so, call into question the setup of the course I presented three years ago.
The first section of the paper looks at the course structure, which is a “reloaded” version of the course from the 2012 paper (Hamenstädt, 2012a). The second section reflects on the students’ feedback and the lecturer’s experiences from the course. Lastly, the concluding section discusses the arguments “against” the old version of the paper raised in the introduction.
Course structure
The course was divided into four blocks, each consisting of three to four teaching units (90 minutes each). The first block was dominated by input from the lecturer and by classroom experiments, in order to give the students a feeling for the teaching and learning content. The teaching goal of this first block was to show the students what kinds of questions in political science could be answered with the “ideal type” of laboratory experiments. The readings for the first block focused on articles that gave an overview of political science methods and demonstrated where experiments can be located within the “methodological toolbox”. Work in small groups based on these readings was accompanied by input from the lecturer as well as by classroom experiments. The aim of using classroom experiments as a teaching tool was to establish a routine for critiquing experiments: What was the experiment about? What did it have to do with political science? What went well and what went wrong during the experiment? What could we learn from the classroom experiment about conducting experiments in general?
It was easy for the students to criticise those little classroom experiments, because something often goes wrong in a suboptimal environment like a classroom. Figure 1 shows one game that was played in class; the hand-out was given to the students together with a general introduction on how to play “the game”.
Figure 1: “Tragedy of the Commons” game (hand-out given to the students)

Let’s start with a little game…
You have four cards. Two of them are red and two of them are black. Each round you have to give two cards to your neighbour. Black cards count no points. A red card gives you 10 points if you keep it, and 5 points to your group if you give your card to the group.
[Score sheet: a test round and rounds 1–10, each with a field for points, plus a final count.]
Add each round the points you got from the red card(s) you keep (10 points) to the points your group collects for the round.

In this example, the students were easily able to determine, after a couple of rounds, that the game was about how a common good is produced. This led us back to issues like the question of why we need a state to organise the production of common goods, why and when we need taxation, the free-rider problem, etc. One might also give each group the possibility to let someone give a short speech – which can serve to create a within-subjects design for the experiment. Also, there will be some discussion between the students during the experiment – to what extent does this
influence the results and how can we prevent such disturbances in the experimental set-up? All this can be discussed with the students after the “Tragedy of the Commons” game, and my experience is that the students had a lot to share about the experiment after they had participated in it. The experiment can thus also serve as a good example to discuss, e.g., the distinction between “between-subjects” and “within-subjects” research designs. I suggest using veconlab¹ and perhaps other web portals for teaching. If your classroom allows the students to have access to the web, those platforms can be of additional value – not least because students tend to use the web access for checking their emails, just like many experimental subjects do in subject pools. That tells the class a lot about what might happen in a web experiment.
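To make the incentive structure of the game explicit, a short simulation can also be walked through with the students. The following is a minimal sketch (in Python; my own illustration, not part of the course material). The group size of four, the ten rounds, and the reading that a contributed red card pays 5 points to every group member are assumptions, since the hand-out leaves these details open:

    # Minimal simulation sketch of the "Tragedy of the Commons" classroom game.
    # Assumptions not fixed by the hand-out: four players per group, ten rounds,
    # and the reading that a contributed red card pays 5 points to EVERY group
    # member (otherwise contributing would never be collectively better).

    KEEP_VALUE = 10    # points a kept red card pays to its owner
    GROUP_VALUE = 5    # points a contributed red card pays to each group member
    RED_CARDS = 2      # red cards a player holds each round
    ROUNDS = 10

    def play(contributions):
        """contributions[i] = red cards player i gives to the group each round (0-2)."""
        scores = []
        for i in range(len(contributions)):
            kept = (RED_CARDS - contributions[i]) * KEEP_VALUE
            pool = sum(contributions) * GROUP_VALUE  # every member earns this
            scores.append((kept + pool) * ROUNDS)
        return scores

    print(play([2, 2, 2, 2]))  # all cooperate: 400 points each
    print(play([0, 0, 0, 0]))  # all free-ride: 200 points each
    print(play([0, 2, 2, 2]))  # one free-rider: 500 vs. 300 for each cooperator

Letting one player deviate makes the free-rider problem visible immediately, which is exactly the discussion the classroom game provokes.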
The second block of the course then deals with different forms of experiments. Starting from laboratory experiments, one can move on to other common forms, such as field experiments, survey experiments and, last but not least, natural experiments. Each form stretches the “ideal type” of experimental design in its own way. My suggestion is to choose one journal article reporting an experiment for a student presentation and to assign an overview chapter from the Handbook of Experimental Political Science (Druckman et al, 2011) to the other students in the course.
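For survey experiments in particular, the core of the design can be demonstrated in a few lines: respondents are randomly assigned to one of several question framings, and the framing is the treatment. The sketch below is my own illustration (Python; the question wordings are invented and not taken from the studies listed in the appendix):

    import random

    # Skeleton of a survey experiment: the framing a respondent sees is the
    # randomly assigned treatment (a between-subjects design). The wordings
    # are invented for illustration only.

    FRAMES = {
        "control":   "Do you support admitting more immigrants?",
        "treatment": "Do you support admitting more highly skilled immigrants?",
    }

    def assign_frame():
        """Randomly assign one framing to a respondent."""
        return random.choice(list(FRAMES))

    for respondent_id in range(5):
        frame = assign_frame()
        print(f"Respondent {respondent_id}: [{frame}] {FRAMES[frame]}")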
In the third block, specific problems of experimental designs are discussed. These are questions of causality – and how causality can be operationalised within a research design – and questions of validity. Ethics may also be important to discuss, since it is a classic cross-cutting topic that will come up in the discussion of many experiments. That sounds like theoretical and perhaps humdrum work: and indeed it is. Yet it is also important for the students to learn to reflect on their knowledge about experiments while discussing these questions. Moreover, you will already have a wide range of examples from the research discussed in class, which will make those topics more interesting for the students.
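To give the discussion of causality a concrete anchor, I find it helpful to show why random assignment licenses the simple difference-in-means estimate of a treatment effect. The following minimal sketch (Python; the sample size, baseline values and the “true” effect of 5 are invented purely for illustration) simulates this logic:

    import random

    # Why random assignment justifies the difference-in-means estimator.
    # All numbers below are invented for demonstration purposes.

    random.seed(42)
    N = 1000
    TRUE_EFFECT = 5.0

    # Each subject has an unobserved baseline (potential outcome under control);
    # treatment shifts it by TRUE_EFFECT.
    baselines = [random.gauss(50, 10) for _ in range(N)]

    # Random assignment: treatment status is independent of the baselines.
    treated = set(random.sample(range(N), N // 2))
    is_treated = [i in treated for i in range(N)]

    outcomes = [b + TRUE_EFFECT if t else b for b, t in zip(baselines, is_treated)]

    treat_mean = sum(y for y, t in zip(outcomes, is_treated) if t) / (N // 2)
    control_mean = sum(y for y, t in zip(outcomes, is_treated) if not t) / (N - N // 2)

    # With randomisation the difference in means is an unbiased effect estimate.
    print(f"Estimated effect: {treat_mean - control_mean:.2f} (true: {TRUE_EFFECT})")

Students can re-run the sketch with and without randomisation (for instance, assigning treatment only to subjects with high baselines) to see the selection bias appear.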
For the last block, I suggest including some time in the course syllabus for project work. I usually let the students in my courses work in groups of four – possibly plus or minus one person – which in my experience is the ideal group size. In the course to which I refer in this article, I paired the students, and their assignment was to conduct a classroom experiment and to write it up in full after it had been tested and discussed in class. At first this may sound like time off for the lecturer, but the opposite is true: students need a lot of guidance and feedback. What I found notable was that students often had to return to content that had been discussed earlier in the course; when that content became relevant for their projects, they frequently had to work through it again. Not least, this feedback
¹ URL: http://veconlab.econ.virginia.edu/admin.htm, last accessed 02.03.2015.
concerned topics such as what makes a good research question and how such a question can be answered within an experimental design.
Feedback and experiences
The article draws on the students’ feedback given via the standardised course evaluation at the end of the course. At this point it is important to mention that the students could select the course as a supplementary course – it was not obligatory. However, courses on (quantitative) methods are not the most popular courses in the social science curriculum – at least based on my experience at the institute where I work – so it is not surprising that few students voluntarily choose to participate in courses like this one.
Hence, it is important to keep in mind that the evaluation was filled in by six out of eight students. All in all, we can presume that the students participating in the course were intrinsically motivated. The next two figures give an account of the overall statistics from the evaluation – seven is the highest rating students could give and one is the lowest – and of the students’ comments.
Figure 2: Results of the standardised evaluation of the seminar (n=6)²

Course evaluation (overall values)      Mean
Lecturer’s commitment                    6.5
Discussions in the course                6.3
Core readings for the course             5.8
Course materials                         6.0

² All results of the evaluation can be found at the following URL: http://www.uni-muenster.de/imperia/md/content/fuchs/lehre/ss_2014experimente_in_der_politikwissenschaft.pdf, accessed 24.02.2015.
Figure 3: Comments from the students (translated)

1. “A good seminar; notably the small number of students made it very pleasant. The form of examination made sense to me, and the seminar had a good structure.”
2. “Very interesting seminar with applied learning through examples!”
3. “Fantastic course with a comprehensible structure. Because of the examples of political science experiments we discussed during the course, the topics were easy to understand and their relevance was easy to grasp.”
The evaluation and the comments from the students are overall very positive, and from the perspective of the lecturer I share this impression. At this point I would like to outline some general thoughts about the course; these (three) thoughts circle around the teaching goals of the seminar. On the one hand, someone might argue that a course on experimental methods in political science should focus on experiments. Contrary to this, I would argue (1.) that experiments are one method among others, and in larger research projects they often serve as a complementary approach. Therefore, when planning a seminar on (quantitative) methods, we should consider teaching experiments as one example of organising a research plan and testing a hypothesis. As I argued in the introduction, experiments can be understood as a gold standard of a deductive, hypothesis-testing research design. So it could make sense to define the teaching goal of such a course as a more general understanding of the research process, and to organise the seminar – as described in the last section – around this goal: to locate experiments within the methodological landscape, to discuss examples and theory about the method, and to let the students work on projects. This is important, notably for undergraduate students, as is the next point lecturers might consider before teaching a seminar on a specific method.
(2.) Students should also learn to see themselves as critical recipients of existing studies. Published work, even in top-ranked peer-reviewed journals, can contain highly contested assumptions and confusing interpretations of results. Courses for undergraduate students are appropriate places to discuss such findings, including very critical discussion. I would never suggest selecting bad-practice articles as course readings; however, students often voice much more criticism of an article than one might expect in advance – even if award-winning articles are chosen as core readings for the course. Beyond criticising how scientific articles are (sometimes) written, it is important for students to develop skills for systematically analysing research designs and for reading the statistical analysis of an article; this goes beyond summarising an article and its findings, and calls for the development of skills in critically reading and analysing the existing literature.
(3.) Sometimes students understand the story you are telling them best after they have heard it before. I remember the confused faces among the students in the first session of the course when I told them that the first thing we were going to do in this kick-off session was to reveal the whole plot of what we would learn about experiments. But like a well-written thesis – with an introduction, a main part, and a conclusion – a seminar can be constructed in the same way: tell the students what you/they are going to do, then do it, then come back to what you have done. It is therefore not strange to “spoil the plot” in the first session, then to discover together with the students how much more complicated the whole story is, and finally to come back to it in a way that requires students to apply what they have learned to their own projects. During the last course sessions, when the students worked in pairs on their small projects, they gave me the feedback that they had to read through (most of) the literature again in order to understand the different research designs they had learned about, and had to think about their application to slightly different questions. So there is a clear distinction between hearing about something (a method, for example) and using that information as knowledge for one’s own work.
Last but not least, it should be mentioned that teaching goals should be clearly defined by the lecturer and should not be limited to the content of the course; each course offers the students different possibilities to practice their skills in multiple ways. Towards the end of the course it is worth reflecting upon those competences – for this purpose we have to decide in advance which competences we want the students to improve and then structure the seminar and the teaching methods accordingly.
Discussion
The goal of this article was to argue – contesting the prior article in EPS (Hamenstädt, 2012a) – that teaching experimental political science is about allowing the students to gain experience with the method. This means a course should focus on teaching students how to come up with their own (small-scale) experiments by the end of the course. The second line of argumentation in this article is that political scientists have developed and conducted plenty of interesting experiments in recent years. A lot of the literature on experimental methods still draws on experiments from neighbouring disciplines in the social sciences – or even from medical science. Be that as it may, in teaching experimental methods to political science students we can now draw on a multitude of examples from our own discipline. Our goal should be to make the method understandable – to show how we can answer scientific questions from our own discipline with the help of the experimental method. That is what a course on experimental methods should offer, and that is also what should be taught to students. In bringing forward these two arguments, I have shown in the course of this article how to structure such a course and what the students’ and my own experiences were.
References
Blair, A., Curtis, S., and McGinty, S. (2013a) ‘Is Peer Feedback an Effective Approach for Creating
Dialogue in Politics?’, European Political Science 12(1): 86–101.
Blair, A., Curtis, S., Goodwin, M. and Shields, S. (2013b) ‘What Feedback do Students Want?’, Politics 33(1): 66–79.
Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (2011) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press.
Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (2006) ‘The growth and development of
experimental research in political science’, American Political Science Review 100(4): 627–635.
Hamenstädt, U. (2012a) ‘Teaching Experimental Political Science: Experiences from a Seminar on
Methods’, European Political Science 11(1): 114–127.
Hamenstädt, U. (2012b) ‘Die Logik des politikwissenschaftlichen Experiments’, Wiesbaden: VS Verlag.
Morton, R. and Williams, K. (2010) ‘Experimental Political Science and the Study of Causality. From
Nature to the Lab’, Cambridge: Cambridge University Press.
Ryan, M., Saunders, C., Rainsford, E. and Thompson, E. (2013) ‘Improving Research Methods Teaching and Learning in Politics and International Relations: A ‘Reality Show’ Approach’, Politics 34(1): 85–97.
Shadish, W., Cook, T. and Campbell, D. (2002) ‘Experimental and Quasi-experimental Designs for
Generalized Causal Inference’, Boston: Houghton Mifflin.
Appendix
Figure 4: Course content and list of the core readings and additional literature
Block 1

Session: Introduction
Content: Overview and presentation of the lecture; first two classroom experiments and first discussion of experimental designs.
Core readings: No required readings.

Session: Milgram experiment
Content: Movie about the Milgram experiment in Munich in the 70s. Discussion of the experimental “design”, or the design Milgram should have used.
Core readings:
Imai, K., King, G. and Stuart, E. (2008) ‘Misunderstandings between experimentalists and observationalists about causal inference’, Journal of the Royal Statistical Society Series A, 171: 481–502.
Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (2011) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapters 1 & 2.

Session: Group work
Content: Readings assigned randomly to students during the last session. Clarifying what we have learned so far.
Core readings:
Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (2006) ‘The growth and development of experimental research in political science’, American Political Science Review 100(4): 627–635.
Morton, R. and Williams, K. (2010) ‘Experimentation in Political Science’, in ‘The Oxford Handbook of Political Methodology’, Oxford: Oxford University Press: 339–356.
Gerrig, R.J. and Zimbardo, P.G. (2004) ‘Psychology and Life’, Allyn & Bacon. Chapter 2.
Green, D. and Gerber, A. (2003) ‘The Underprovision of Experiments in Political Science’, The Annals of the American Academy of Political and Social Science, 589: 94–112.
Block 2

Session: Laboratory experiments
Core readings:
Iyengar, S. (2011) ‘Laboratory Experiments in Political Science’, in Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (eds.) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapter 6, pp. 73–88.
Mutz, D. (2007) ‘Effects of “in-your-face” television discourse on perceptions of a legitimate opposition’, American Political Science Review 101(4): 621–635.
Additional readings:
Battaglini, M., Morton, R. and Palfrey, T.R. (2010) ‘The Swing Voter’s Curse in the Laboratory’, Review of Economic Studies 77: 61–89.
Levine, D.K. and Palfrey, T.R. (2007) ‘The Paradox of Voter Participation? A Laboratory Study’, American Political Science Review 101: 143–158.
Takezawa, M., Gummerum, M. and Keller, M. (2006) ‘A stage for the rational tail of the emotional dog: Roles of moral reasoning in group decision making’, Journal of Economic Psychology 27: 117–139.
Session: Field experiments
Core readings:
Gerber, A. (2011) ‘Field Experiments in Political Science’, in Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (eds.) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapter 9, pp. 115–138.
Olken, B. (2010) ‘Direct democracy and local public goods: Evidence from a field experiment in Indonesia’, American Political Science Review 104(2): 243–267.
Additional readings:
Bahry, D.L. and Wilson, R.K. (2006) ‘Confusion or fairness in the field? Rejections in the ultimatum game under the strategy method’, Journal of Economic Behavior & Organization 60: 37–54.
Dunning, T. and Harrison, L. (2010) ‘Cross-cutting Cleavages and Ethnic Voting: An Experimental Study of Cousinage in Mali’, American Political Science Review 104(1): 21–39.
Gerber, A.S. and Green, D.P. (2000) ‘The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment’, The American Political Science Review 94(3): 653–663.
Gerber, A.S. and Green, D.P. (2001) ‘Do Phone Calls Increase Voter Turnout?: A Field Experiment’, The Public Opinion Quarterly 65(1): 75–85.
Gerber, A.S., Karlan, D. and Bergan, D. (2009) ‘Does the Media Matter? A Field Experiment Measuring the Effect of Newspapers on Voting Behavior and Political Opinions’, American Economic Journal 1(2): 35–52.
Horiuchi, Y., Imai, K. and Taniguchi, N. (2007) ‘Designing and Analyzing Randomized Experiments: Application to a Japanese Election Survey Experiment’, American Journal of Political Science 51(3): 669–687.
Levy Paluck, E. and Green, D.P. (2009) ‘Deference, Dissent, and Dispute Resolution: An Experimental Intervention Using Mass Media to Change Norms and Behavior in Rwanda’, American Political Science Review 103(4): 622–644.
Wantchekon, L. (2003) ‘Clientelism and Voting Behavior. Evidence from a Field Experiment in Benin’, World Politics 55: 399–422.
Session: Survey experiments
Core readings:
Sniderman, P. (2011) ‘The Logic and Design of the Survey Experiment: An Autobiography of Methodological Innovation’, in Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (eds.) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapter 8, pp. 102–114.
Hainmueller, J. and Hiscox, M.J. (2010) ‘Attitudes toward Highly Skilled and Low-skilled Immigration: Evidence from a Survey Experiment’, American Political Science Review 104(2): 61–84.
Additional readings:
Barabas, J. and Jerit, J. (2010) ‘Are Survey Experiments Externally Valid?’, American Political Science Review 104(2): 226–242.
Hall, L., Johansson, P. and Strandberg, T. (2012) ‘Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey’, PLoS ONE 7(9).
Session: Natural experiments
Core readings:
Robinson, G., McNulty, J.E. and Krasno, J.S. (2009) ‘Observing the Counterfactual? The Search for Political Experiments in Nature’, Political Analysis 17: 341–357.
Whitt, S. and Wilson, R.K. (2007) ‘Public Goods in the Field: Katrina Evacuees in Houston’, Southern Economic Journal 74(2): 377–387.
Additional readings:
Elis, R., Malhotra, N. and Meredith, M. (2009) ‘Apportionment Cycles as Natural Experiments’, Political Analysis 17: 358–376.
Green, D.P., Leong, T.Y., Kern, H.L., Gerber, A.S. and Larimer, C.W. (2009) ‘Testing the Accuracy of Regression Discontinuity Analysis Using Experimental Benchmarks’, Political Analysis 17: 400–417.
Kern, H.L. and Hainmueller, J. (2009) ‘Opium for the Masses: How Foreign Media Can Stabilize Authoritarian Regimes’, Political Analysis 17: 377–399.
McNulty, J.E., Dowling, C.M. and Ariotti, M.H. (2009) ‘Driving Saints to Sin: How Increasing the Difficulty of Voting Dissuades Even the Most Motivated Voters’, Political Analysis 17.
Sekhon, J.S. and Titiunik, R. (2012) ‘When Natural Experiments Are Neither Natural Nor Experiments’, American Political Science Review 106(1): 33–57.
Wilson, R.K. and Eckel, C.C. (2010) ‘Trust and Social Exchange’, in ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapter 17: 243–257.
Block 3

Session: Causality
Core readings:
Druckman, J., Green, D., Kuklinski, J. and Lupia, A. (2011) ‘Cambridge Handbook of Experimental Political Science’, Cambridge: Cambridge University Press. Chapter 2.
Morton, R. and Williams, K. (2010) ‘Experimental Political Science and the Study of Causality. From Nature to the Lab’, Cambridge: Cambridge University Press. Chapter 3 on causality.
Additional readings:
Camerer, C.F. and Hogarth, R.M. (1999) ‘The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework’, Journal of Risk and Uncertainty 19(1-3): 7–42.
Session: Validity
Core readings:
McDermott, R. (2002) ‘Experimental methods in political science’, Annual Review of Political Science 5: 31–61.
Morton, R. and Williams, K. (2010) ‘Experimental Political Science and the Study of Causality. From Nature to the Lab’, Cambridge: Cambridge University Press. Chapter 7, pp. 75–100.
Additional readings:
Blanton, H. and Jaccard, J. (2008) ‘Representing Versus Generalizing: Two Approaches to External Validity and Their Implications for the Study of Prejudice’, Psychological Inquiry 19: 99–105.
Levitt, S.D. and List, J.A. (2007) ‘Viewpoint: On the generalizability of lab behaviour to the field’, Canadian Journal of Economics 40(2): 347–370.
Lucas, J.W. (2003) ‘Theory-Testing, Generalization, and the Problem of External Validity’, Sociological Theory 21(3): 236–253.
Session: Ethics
Core readings:
Hertwig, R. and Ortmann, A. (2008) ‘Deception in Experiments: Revisiting the Arguments in Its Defense’, Ethics & Behavior 18(1): 59–92.
Additional readings:
Bonetti, S. (1998) ‘Experimental economics and deception’, Journal of Economic Psychology 19: 377–395.
Bortolotti, L. and Mameli, M. (2006) ‘Deception in Psychology: Moral Costs and Benefits of Unsought Self-Knowledge’, Accountability in Research 13: 259–275.
Richter, E.D., Barach, P., Berman, T., Ben-David, G. and Weinberger, Z. (2001) ‘Extending the boundaries of the Declaration of Helsinki: a case study of an unethical experiment in a non-medical setting’, Journal of Medical Ethics 27: 126–129.
Stodder, J. (1998) ‘Experimental Moralities: Ethics in Classroom Experiments’, The Journal of Economic Education 29(2): 127–138.
Weisburd, D. (2003) ‘Ethical Practice and Evaluation of Interventions in Crime and Justice: The Moral Imperative for Randomized Trials’, Evaluation Review 27(3): 336–354.
Block 4

Session: Software and students’ projects
Content: Introduction to R and z-Tree by the lecturer. Students worked during this time on their projects.
Core readings:
Fischbacher, U. (2007) ‘z-Tree: Zurich toolbox for ready-made economic experiments’, Experimental Economics 10(2): 171–178.
Morton, R. and Williams, K. (2010) ‘Experimental Political Science and the Study of Causality. From Nature to the Lab’, Cambridge: Cambridge University Press. Chapter 15 and the To-Do List.
Boutron, I., John, P. and Torgerson, D.J. (2010) ‘Reporting Methodological Items in Randomized Experiments in Political Science’, The ANNALS of the American Academy of Political and Social Science 628: 112–131 (notably pp. 121–123).