Managing & Communicating Research
Quantitative Research Methods
Kevin Paterson
The scientific method
• The essence of the scientific method is that all
theoretical claims are subjected to an empirical test.
• That is, we don’t rely on conjecture or belief, but
attempt to test the basic building blocks of our theory
using a combination of observation and
experimentation.
Topics
• The scientific method
• Hypothesis testing
• Experiments
• Quasi-experiments
• Questionnaires
• Selecting participants
• Control conditions
• “Blind” and “double-blind” studies
• Placebo effects in medical research
• Biases in experiments
• Replication
• Meta-analysis
Positivism (and empiricism)
• Practice based on experiment and observation.
• The doctrine that all knowledge is derived from
sense-experience.
• That is, scientific knowledge is not derived from
belief, “common-sense”, or any other orthodoxy.
• Formed the basis for the Enlightenment.
Hypothesis-testing
• Experiments test hypotheses.
• Hypotheses are precise and well-defined statements
that describe the relationship between one or more
independent variables and one or more dependent
variables.
• Independent variable: what you manipulate.
• Dependent variable: what you measure.
• Hypotheses are usually predictions made by a theory.
Types of hypothesis
• Directional Hypotheses
– When we have a clear expectation about the nature of the
relationship we expect to find.
• Non-directional Hypotheses
– No expectation about the relationship that would be
expected, in terms of the direction of an effect.
• Null hypothesis
– Prediction of “no difference”.
– Problems with Null Hypothesis testing?
Hypothesis-testing
• Hypotheses are:
– Specific, precise, and testable statements.
– Usually predictions based on theory.
– Defined before conducting the experiment.
– Define the relationship between IVs and DVs.
– Outcome must be measurable.
– A hypothesis may be supported or refuted by data.
Experiments
• Experiments involve manipulation of IVs and DVs under carefully controlled conditions.
• Influence of extraneous variables/factors should be eliminated/minimised.
• Performance under experimental conditions should be compared against an appropriate baseline (i.e., control) condition.

Statistics in hypothesis testing
• Experimenters typically accept a finding if it has a low statistical likelihood of having occurred by chance.
• Statistical techniques are used to establish the likelihood of a chance effect.
• In social science, a cut-off of 95% confidence (p < .05) is often used.
• This means that 5% of the time, an effect that occurred by chance will be reported as a real effect.

Quasi-Experiments
• Sometimes it is not possible, or would be unethical, to manipulate a particular variable.
• Instead, researchers look for naturally occurring examples of this manipulation.
• For instance, Wakefield examined children who had been vaccinated with MMR, had autism, and had bowel complaints. He then tested them for measles virus.
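The 95% confidence cut-off described under "Statistics in hypothesis testing" can be illustrated with a short simulation (a sketch; the function names are my own). When the null hypothesis is true, roughly 5% of experiments will still come out "significant" purely by chance:

```python
import random

def one_null_experiment(n=25, crit=1.96):
    """Simulate one experiment in which the null hypothesis is true:
    draw n observations from a standard normal and z-test the mean."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    z = mean / (1 / n ** 0.5)   # known sigma = 1, so SE = 1/sqrt(n)
    return abs(z) > crit        # True = "significant" = a false positive

def false_positive_rate(trials=20000):
    random.seed(1)              # fixed seed so the run is repeatable
    hits = sum(one_null_experiment() for _ in range(trials))
    return hits / trials

print(false_positive_rate())    # typically close to 0.05
```

The point is that the 5% error rate is built into the criterion itself: even with no real effect anywhere, one experiment in twenty will cross the threshold.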
Evaluating experiments
• Questions to ask:
– Is the hypothesis fair, or actually predicted by theory?
• e.g., does presence of measles virus indicate effects of MMR?
– Is the nature of the measurement good enough?
• Were the methods used to detect virus good enough?
– What were the control conditions – do these provide fair
comparisons?
• Who was the comparison group?
– How were participants selected? Any biases in sampling?
• How were the participants and control groups selected?
– Have the data been interpreted properly?
Survey Methods
• Advantages
– Surveys provide a relatively simple approach to the study of
attitudes, values, beliefs, and motives.
– They are easily adaptable to collect generalisable
information from different populations.
• Disadvantages
– The quality of the data can be affected by the characteristics
of the respondents (e.g., their memory, knowledge,
experience, and personality).
– Respondents will not necessarily report their beliefs,
attitudes, etc., accurately.
Interviews
• Structured interview
– Very similar to a questionnaire, except that it is completed with
the experimenter and respondent present rather than self-reported.
– Reasons for using this method often relate to the nature of the
sample (e.g., to ensure response or avoid misunderstandings).
• Semi-structured interview
– Pre-determined questions, but the order of questions or precise
wording might vary depending on what seems appropriate to the
respondent.
• Unstructured interview
– Have a general area of research interest but few or no
predetermined questions.
– Conversation is guided by issues introduced by interviewees.
– Generally used in qualitative research.
Self-administered questionnaires
• Advantages
– Often the only or easiest way of gaining information from a
large set of participants.
– Allows anonymity, which can encourage frankness.
• Disadvantages
– Typically a low response rate.
– Can be hard to assess representativeness of the sample.
– May not be treated seriously by respondents.

Structured interviews
• Advantages
– The interviewer can clarify questions and correct
misunderstandings.
– The presence of the interviewer encourages participation and
involvement – hence the high response rate.
• Disadvantages
– Data may be affected by characteristics of the interviewer /
interviewer bias / interaction between interviewer and respondent.
– Lack of anonymity.
Selecting participants
• Important that unbiased methods are used to select participants
and assign them to conditions.
– Wakefield clinic dealt specifically with children whose parents were
concerned about possible effects of MMR. Bias in sampling?
– Lots of research into the effects of homeopathy involves people
who have sought homeopathic treatment. Another possible
sampling bias.
• How researcher assigns participants to different experimental
conditions can also bias outcome.
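Random allocation removes this discretion from the experimenter. A minimal sketch (function and parameter names are my own), using a shuffle to split participants into equally sized groups:

```python
import random

def assign_to_conditions(participants, conditions=("treatment", "control"), seed=42):
    """Randomly assign participants to equally sized condition groups,
    so the experimenter has no say in who ends up where."""
    rng = random.Random(seed)        # fixed seed so the allocation is auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    size = len(shuffled) // len(conditions)
    return {cond: shuffled[i * size:(i + 1) * size]
            for i, cond in enumerate(conditions)}

groups = assign_to_conditions(range(1, 21))   # 20 participants, 10 per group
```

Recording the seed (or using a pre-registered allocation schedule) means the assignment can later be checked by a third party.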
Other selection biases
• Self-selection biases.
• Participants with particular motivations or interests may be
particularly willing to participate in a study.
• On-line and phone-in polls or studies are particularly prone to
self-selection biases.
Blind and Double-blind studies
• Information is withheld from participants in blind studies.
– i.e., they are unaware of the condition that they have
been assigned to.
– e.g., in “Pepsi challenge” they don’t know which cola
they have.
– Still risk that researcher or interaction between
researcher and participant will bias outcome (perhaps
inadvertently).
Blind and Double-blind studies
• In double-blind studies, information is withheld from
both participants and researcher.
– Uses truly random assignment to conditions.
– The experimenter is unaware of which condition is which.
• In triple-blind studies, even the person who analyses the
data does not know the conditions.

Example: Electro-sensitivity
• It is claimed that a range of medical symptoms are
caused by acute exposure to electromagnetic signals,
e.g., mobile phones, wifi.
• Symptoms include: tiredness, difficulty concentrating,
nausea, bowel complaints, aches in the limbs,
crawling sensations, pain in the skin.
• Activity: design an experiment that could test for this
“syndrome”.
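One building block for such a design is blinded allocation. A minimal sketch (names are my own): each participant gets an opaque code, and the code-to-condition key is held sealed by a third party, so neither participant nor experimenter knows the assignment until the data are analysed:

```python
import random

def blind_allocation(participant_ids, seed=7):
    """Give each participant an opaque code; the code-to-condition key
    stays sealed until analysis (the double-blind idea)."""
    rng = random.Random(seed)
    # roughly balanced pool of conditions, shuffled before assignment
    conditions = ["active", "placebo"] * (len(participant_ids) // 2 + 1)
    rng.shuffle(conditions)
    visible = {}   # experimenter's view: participant -> opaque code
    sealed = {}    # held by a third party: code -> true condition
    for pid, cond in zip(participant_ids, conditions):
        code = f"K{rng.randrange(10**6):06d}"
        while code in sealed:                 # guarantee codes are unique
            code = f"K{rng.randrange(10**6):06d}"
        visible[pid] = code
        sealed[code] = cond
    return visible, sealed
```

During the study only `visible` circulates; `sealed` is opened once data collection is complete, which is what protects the outcome from experimenter expectation.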
Example: Electro-sensitivity
• Read Ben Goldacre’s article in BMJ
• See BMJ website for follow-up debate:
http://www.bmj.com/cgi/eletters/334/7606/1249#168843
• Note how science rarely provides unambiguous answers.
• Instead, science involves debate (usually about
methodology and the interpretation of findings).
• It also depends (crucially) on replication.
Biases in experiments
• Already considered some of these:
– Sampling biases.
– Biases in allocating participants to different
conditions.
– Interpretative biases.
– These are often referred to collectively as experimenter biases.
Placebo effects
• An inert medicine or intervention works simply because
the recipient believes that it will.
– Exact source of effect is unknown, but likely to be a
psychological effect
• Benefit from positive interaction.
– Experiments have to control for placebo effect by
including “sugar pill” conditions.
– Listen to programme about placebo (Goldacre again).
http://www.bbc.co.uk/radio4/science/placebo.shtml
Biases in experiments
• Also “subject bias”
• Hawthorne Effect
– Refers to experiments conducted in Hawthorne
factory in Chicago in 1920s-30s
– Experimenter made multiple changes to working
conditions: changed lighting & layout conditions.
– All of the changes improved performance.
– Effects were due to participants’ reactions to
experiments, rather than experiment itself.
– Be aware that participants in experiments may exhibit
modified behaviour.
Biases in experiments
• Confirmation bias
– Seeking evidence that is consistent with a hypothesis or
theory rather than attempting falsification.

Popper’s theory of falsification
• Research should be aimed at falsifying theories.
• Requires hypotheses that are falsifiable.
• A good theory is one that survives repeated attempts at falsification.
Replication
• Results of a single experiment are rarely accepted as
providing a definite answer.
• Important that results are replicable.
• Importance of convergent evidence.
• Scientists are sceptical.
– Never trust a solitary finding.
• Research also tests the limits of an effect.
– I.e., what can strengthen an effect or eliminate it.

Meta-analysis
• Statistical procedure for combining results from multiple
experiments.
• Another approach involves Systematic Reviewing.
– Uses explicit methods to perform the literature search
and critical appraisal of individual studies.
– Best-known approach is promoted by the Cochrane
Collaboration.
– http://www.cochrane.org/docs/descrip.htm
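The simplest way of combining results from multiple experiments is fixed-effect (inverse-variance) pooling; a minimal sketch with hypothetical numbers (function name and data are my own):

```python
def fixed_effect_meta(effects, variances):
    """Fixed-effect (inverse-variance) pooling: each study's effect is
    weighted by 1/variance, so more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: effect sizes and their variances.
est, var = fixed_effect_meta([0.30, 0.10, 0.25], [0.04, 0.01, 0.09])
# est is approximately 0.149 -- pulled towards the most precise study.
```

Note how the pooled estimate sits closest to the study with the smallest variance: precision, not sample order or publication prominence, determines influence.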
Publishing research
• Activity: How might the way in which research is
published bias findings?
Publication bias
• Tendency for only positive results to be published.
• File-drawer problem refers to research that is conducted
but not reported because of null results.
– E.g., there may be a strong publication bias in parapsychology,
suggesting the presence of an effect, and a large file-drawer problem.
– Are only studies that produce chance effects being reported?
• Can distort meta-analysis and systematic reviews.
– Statistical tools exist to detect possible publication
bias (e.g., Begg’s funnel plot).
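The intuition behind funnel-plot methods can be sketched crudely (this is an illustration of the idea, not Begg's actual rank-correlation test; names and data are my own): if small, imprecise studies report systematically larger effects than precise ones, publication bias is one plausible explanation.

```python
def crude_asymmetry_check(effects, standard_errors):
    """Crude funnel-asymmetry check: compare the mean effect of the
    most precise half of the studies with the least precise half.
    A large positive gap hints at small-study (publication) bias."""
    paired = sorted(zip(standard_errors, effects))   # most precise first
    half = len(paired) // 2
    precise = [e for _, e in paired[:half]]
    imprecise = [e for _, e in paired[-half:]]
    return sum(imprecise) / half - sum(precise) / half
```

In an unbiased literature the gap should hover near zero; real analyses would use Begg's or Egger's test, which formalise the same comparison.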