
DEPARTMENT OF SOCIOLOGY
RESEARCH DESIGN
Christina Hughes
[email protected]
Research Design: Overview
Research design is the basic plan for a piece of research. This
plan includes four main ideas (Punch, 1998). These are:
· The strategy - how will I proceed from initial identification of
research questions through to collecting data and evidence and
writing up?
· Conceptual framework - what kinds of theories or assumptions
am I bringing to my analysis?
· Who or what will be studied?
· Which tools and procedures will I use to undertake my
research?
Whilst research design represents the first stage of a project, it
involves thinking through the whole process of research from
beginning to end. For this reason, a key aspect of research
design is evaluation, as you will want to know how adequate the
research you intend to undertake is. Most textbooks on issues of
evaluation and design focus primarily on the more positivist
approaches in these areas. This is summarised well by Punch
(1998), which is a key source I shall use here. You should note,
nonetheless, that a critical understanding of evaluation and
design cannot be developed without recognising the
assumptions that are brought to bear about the nature of social
reality and the purposes of research. Greene (1994) indicates
how the key values associated with different research paradigms
impact on the types of evaluative questions asked. For example,
typical evaluative questions within postpositivism are: 'Are
desired outcomes attained and attributable to the programme?'
and 'Is this programme the most efficient alternative?' Within a
critical framework a key evaluative question would be: 'In what
ways are the premises, goals or activities of the programme
serving to maintain power and resource inequities in society?'
For interpretivists, a central question would be: 'How is the
programme experienced by various stakeholders?' As you will
see, these questions do not replace more general questions of
validity and reliability that are posed in relation to the research
findings and the associated processes of knowledge generation.
Both these general questions and the specific paradigm-related
questions are asked.
These materials contribute to the following learning outcomes:
· To have an understanding of the key principles of research
design;
· To be able to critically evaluate your own research.
How do I design my Research Project?
Try to think of your end-points at the beginning. Whilst research
is often viewed as a linear series of separate steps, as Blaxter,
Hughes and Tight (1996: 10) comment, it should be viewed as a
spiral. Viewed from this perspective, research is cyclical; can be
entered at almost any point; is a never-ending process; will
cause you to reconsider your practice; and will return you to a
different starting place.
Punch (1998: 255) suggests that the concerns of research design
are to answer the following questions:
· Who or what will be studied?
· What strategies of inquiry will be used?
· What methods will be used for collecting and analysing
empirical materials?
The design must fit with the research questions posed.
Thus, will the research use quantitative, qualitative or textual
approaches? Will the research combine methods? If so, how? Is
the design absolutely pre-specified, or will it be emergent? Is it a
mixture of both? Whatever the responses, they must fit with the
research questions posed.
Clark and Causer (1991) suggest the following need to be
considered when designing, planning and conducting research:
· What are the objectives of the research? This question needs to
be considered in terms of identifying a topic area, defining the
research questions and rationale, and identifying key concepts.
· Which methods, or mix of methods, are appropriate? For
example, are you going to conduct a one-off survey or a series?
Do you focus on a single case study or a set of cases?
· How are your concepts going to be operationalized? It is
always necessary to specify how the central concepts are being
applied to concrete social phenomena (a brief sketch follows
this list).
· Who or what will be studied? In documentary research, for
example, the researcher needs to choose the most appropriate
documents.
· Can you access the necessary people, sites and materials?
· Do you need to pilot first? In large-scale survey research it is
usually thought to be an absolute necessity to pilot research
instruments such as questionnaires. If the research is based on
informal observation, the possibilities for piloting are less
certain.
· Is the research feasible given time and other resources
available?
· Do initial objectives need to be modified in the light of the
practicalities of the research?
· What are the appropriate methods?
· Do you need to build in cross-checks (asking a range of
questions on the same issue, or using different methods, such as
documentary support for biographical information) to enhance
validity?
· Do you have an adequate timetable?
We should add the following:
· Analysis - what methods will be used?
· Ethics - what are the issues that need to be considered?
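Operationalization can feel abstract, so here is a small
illustrative sketch in Python. Everything in it (the concept, the
indicators and the scoring) is hypothetical, invented purely to
show the move from an abstract concept to concrete, measurable
indicators:

# Hypothetical operationalization of the abstract concept
# 'social capital' as three measurable survey indicators,
# each scored 0-10.
indicators = {
    "memberships": 6,        # number of clubs and associations (capped at 10)
    "contact_frequency": 8,  # how often the respondent sees friends or family
    "perceived_trust": 5,    # self-reported trust in neighbours
}

# A simple unweighted composite: the mean of the indicator scores.
social_capital_score = sum(indicators.values()) / len(indicators)
print(f"social capital score: {social_capital_score:.1f} / 10")

The substantive judgment, of course, lies in choosing and
justifying the indicators, not in the arithmetic.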
How Good is this Research? Criteria for Evaluation
An indication of the complexity of the question of evaluative
criteria in contemporary qualitative research is given in Part V
of the Denzin and Lincoln Handbook (1994: 479-557).
Modifying Hammersley (1992), they describe four basic
positions on the question of evaluative criteria: positivism,
postpositivism, postmodernism and poststructuralism. (They
also list the criteria important to constructivists, critical
theorists and feminists.) In this book, where quantitative and
qualitative research are brought together within the pragmatic
question-method framework suggested, the criteria presented in
this chapter fit best with the positivist and postpositivist
positions they describe. (Punch, 1998: 262, n 7)
Punch (op cit) raises a very important aspect of evaluation
criteria. This is that the criteria considered relevant to deciding
whether or not we can be confident about the findings of
research are linked to the epistemological and political
frameworks of the research design. Punch reproduces the more
dominant criteria, associated with positivism and postpositivism,
that are used when answering the question 'How good is this
research?'
Before providing the criteria, Punch outlines two concepts that
establish the background to evaluative criteria. These are:
disciplined inquiry and the fit between the component parts of
a research project.
Disciplined inquiry
Disciplined inquiry has a quality that distinguishes it from other
sources of opinion and belief. The disciplined inquiry is
conducted and reported in such a way that the argument can be
painstakingly examined. The report does not depend for its
appeal on the eloquence of the writer or on any surface
plausibility. ... Whatever the character of a study, if it is
disciplined the investigator has anticipated the traditional
questions that are pertinent. He [sic] institutes control at each
step of information collection and reasoning to avoid the
sources of error to which these questions refer. If the errors
cannot be eliminated he takes them into account by discussing
the margin for error in his [sic] conclusions. Thus, the report of
a disciplined inquiry has a texture that displays the raw
materials entering the argument and the logical processes by
which they were compressed and rearranged to make the
conclusion credible. ... Disciplined inquiry does not necessarily
follow well established, formal procedures. Some of the most
excellent inquiry is free-ranging and speculative in the initial
stages, trying what might seem to be bizarre combinations of
ideas and procedures, or restlessly casting about for ideas.
(Cronbach and Suppes, 1969 in Punch, 1998: 251-2).
What is important about disciplined inquiry is that its data,
arguments, and reasoning be capable of withstanding careful
scrutiny by another member of the scientific community.
(Shulman, 1988 in Punch, 1998: 252)
The fit between the component parts of a research project
... in most cases the best way to do [this] is to get the research
questions clear, and then align the design and methods with
those. We have noted the reciprocal influence between questions
and methods, and limitations as to what can be done in terms of
design and method impose limitations on questions that can be
asked, but the argument is that this direction of influence should
be minimized. The main influence should be from questions to
methods. (Punch, 1998: 252).
Punch argues that the issue of 'fit' is a major concern in
assessing the validity of research: in projects where the
component parts (ie design, methods and questions) do not fit
together, it is difficult to have confidence in the findings.
Specific Criteria for Evaluation
Punch (p 253) offers the following specific criteria:
1. The set-up of the research
2. The empirical procedures used in the research (design, data
collection, sample and data analysis)
3. The quality of the data
4. The findings and conclusions reached in the research
5. The presentation of the research
1. The set-up of the research
· Is it clear where the research is coming from in terms of
approaches, methods and paradigms?
· Is the topic area clearly identified, ie what is this research
about?
· Are the research questions appropriate (ie are they clear, easily
understood and unambiguous; are the concepts specific enough
to connect the data to indicators; will the data answer the
questions; are the questions related to each other; are the
questions worthwhile and interesting)?
· Is the research set in context? Context has two senses here.
The first relates to a theoretical, practical or professional
concern. The second means that the research connects to a
relevant literature.
2. Empirical Procedures
· Are the design, data collection and data analysis procedures
reported in sufficient detail to enable the research to be
scrutinized and reconstructed?
· Is the design of the study appropriate to the research
questions?
· Are the data collection instruments and procedures adequate
and appropriate for the research questions?
· Is the sample appropriate?
· Are the data analysis procedures adequate and appropriate for
the research questions?
3. Quality of the Data
· Is there evidence of care, control and thoroughness in the
collection of data?
· Are the data reliable? Is there, for example, internal
consistency and stability over time? (A sketch of one such check
follows this list.)
· Are the data valid? Thus, how do we know this instrument
measures what we think it measures? How well do the data
represent the phenomena for which they stand?
· Has the possibility of reactivity in the data been considered,
reported and taken into account, and what steps have been taken
to minimize its effects? For example, in quantitative research
reactivity means the extent to which measuring something
changes it (attitude scales may provoke or influence the attitude
in question). In qualitative research, a participant observer's
presence may change the behaviour of those observed.
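Internal consistency, mentioned above, is commonly summarised
with a statistic such as Cronbach's alpha. The following Python
sketch is purely illustrative (the function and the response data
are hypothetical, not drawn from Punch):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five respondents answering four Likert-scale items.
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values near 1 suggest consistency

By convention, alpha values above roughly 0.7 are often read as
acceptable internal consistency, though such thresholds are
themselves matters of judgment.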
4. Findings and Conclusions reached in the research
· Have the research questions been answered?
· How much confidence can we have in the answers put
forward?
· What can be concluded from the research on the basis of what
was found?
The latter two questions raise issues of internal and external
validity. Internal validity refers to the internal logic and
consistency of the research. For example, in quantitative
research this relates to whether the relationships between the
variables have been correctly interpreted. In qualitative research,
internal validity refers to the extent to which the findings
faithfully represent and reflect the reality that has been studied.
This means asking whether all the parts fit together, whether
rival hypotheses have been eliminated, whether negative
evidence has been considered and whether findings have been
cross-validated with other parts of the data.
External validity refers to questions of generalizability. In
quantitative research this generally refers to issues arising from
sample generalizability. In qualitative research it broadly refers
to the transferability of the findings to other settings and
contexts.
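For quantitative work, one familiar expression of sample
generalizability is the margin of error attached to a survey
estimate. The short Python sketch below uses hypothetical
figures (a simple random sample is assumed; the numbers are
invented for illustration):

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 62% of 400 respondents agree with a statement.
moe = margin_of_error(0.62, 400)
print(f"62% +/- {moe * 100:.1f} percentage points")  # roughly +/- 4.8 points

The narrower the margin, the more safely a finding generalizes
from the sample to the population from which it was drawn.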
5. Presentation of the Research
· Is there a written report?
· What other ways might the research be presented?
References
Blaxter, L, Hughes, C and Tight, M (1996) How to Research,
Buckingham, Open University Press
Clark, J and Causer, G (1991) Introduction: Research Strategies
and Decisions, in G Allan and C Skinner (Eds) Handbook for
Research Students in the Social Sciences, London, Falmer, pp
163-176
Greene, J (1994) Qualitative Program Evaluation: Practice and
Promise, in N Denzin and Y Lincoln (Eds) Handbook of
Qualitative Research, Thousand Oaks (Calif), Sage, pp 530-544
Punch, K (1998) Introduction to Social Research:
Quantitative and Qualitative Approaches, London, Sage