
Instructor’s Manual with Test Bank
to accompany
Research Methods for Social Workers
A Practice-Based Approach
2nd Edition
by
Samuel S. Faulkner
Cynthia A. Faulkner
Chicago, Illinois
______________________________________________________________________________
Copyright © 2014 by Lyceum Books, Inc., 5758 S. Blackstone Ave., Chicago, IL 60637.
All rights reserved under International and Pan-American Copyright Convention. No part of the
publication may be reproduced, stored in a retrieval system, copied, or transmitted in any form or
by any means without written permission from the publisher.
Instructors of classes using Faulkner & Faulkner, Research Methods for Social Workers: A
Practice-Based Approach may reproduce material from the instructor’s manual with test bank
for classroom use.
Faculty Exam Copies can be ordered through Lyceum Books, Inc:
http://www.lyceumbooks.com/
ISBN: 978-1-935871-32-3
TABLE OF CONTENTS
To the Instructor

Chapter 1: What is Research?
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 2: Ethical Considerations
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 3: Literature Review
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 4: Variables and Measures
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 5: Sampling
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 6: Qualitative Research Designs
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 7: Quantitative Research Designs
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 8: Survey Research
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 9: Evaluative Research Designs
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 10: Single-Subject Designs
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 11: Introduction to Descriptive Statistics
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 12: Introduction to Inferential Statistics
   Chapter Outline
   Test Questions
   Discussion Questions
   Chapter Glossary Terms
   Applied Learning Activities
   Key Points
   Additional Teaching Resources

Chapter 13: Practicing Your Research Skills
   Example Research Rubric
TO THE INSTRUCTOR
What pleases us most about this new edition is that it contains an additional 30% new or updated material. Developed in part from suggestions made by students and faculty who have used the previous edition, these improvements include content and instructional features that will remind you why this is one of the best research methods books on the market!
Important Features of the Instructor's Manual
• Chapter Outline. Each chapter provides a teaching outline that gives an overview of the material covered in that chapter.
• Test Bank. Each chapter includes a Test Bank that contains several testing options, such as True/False, Multiple Choice, Short Answer, and Essay questions.
• Discussion Questions. Each chapter (with the exception of Chapter 12) includes discussion questions for instructors to use as a means of generating critical thinking about the material presented.
• Chapter Glossary Terms. Terms presented in each chapter appear in bold and are defined in this section as well as in the Glossary at the end of the text. A .zip file of the entire glossary is available upon request for import into a Blackboard course shell.
• Applied Learning Activities. These in-class or out-of-class activities are also located within the chapters of the text. The activities are designed to help students apply the material presented in each chapter.
• Key Points. Summarizing the key points of a chapter provides a study guide for students.
• Additional Teaching Resources. The rationale for this section is to assist instructors by providing supplemental resources that help illustrate the concepts and processes of each chapter. These resources may include Films/Media, Web Links, Print Material, Tables/Graphs/Diagrams, or Sample Assignments. Sample Assignments include identification of CSWE objectives.
CHAPTER ONE: WHAT IS RESEARCH?
CHAPTER OUTLINE
I. What is Research?
   A. Definition of research
   B. Definition of evidence-based practice
II. Importance of Social Work Research
   A. The increasing importance of research
   B. Rationale for the research process
III. Defining Research
IV. Ways of Knowing
   A. Personal experience
   B. Knowledge of others
   C. Tradition
   D. Scientific methods
V. Qualitative Research – driven by a meaningful phenomenon
   A. Used when little or nothing is known about a phenomenon
   B. The process helps researchers gain an in-depth understanding from observations and/or interviews
VI. Quantitative Research – the following general steps are used:
   A. Relevant variables are used to explain relationships
   B. Uses numerical representations
VII. Mixed-Method Research – a combination of qualitative and quantitative designs
VIII. Developing Your Research Questions
   A. Theory defined
IX. What is a Hypothesis?
   A. Hypothesis defined
X. Research Designs
   A. Exploratory (defined)
   B. Explanatory (defined)
   C. Descriptive (defined)
   D. Evaluative (defined)
   E. Single-Subject (defined)
XI. Strengths and Limitations of Research
   A. Overview of the strengths of research
   B. Inherent limitations of research
TEST QUESTIONS
True/False
1. Most people are researchers (though they probably wouldn’t label themselves as
such). (True)
2. Qualitative research is considered less rigorous (scientific) than quantitative research.
(False)
3. There are five types of research. These are: qualitative, inferential, descriptive, and
informative.
(False)
4. Knowledge is transferred in multiple ways. One of these ways is by relying on our
personal experience. (True)
5. Many times the process of conducting research leads to even more questions and
establishes a cycle of questioning, investigating and questioning. (True)
6. One misconception about research is that experimental studies solve huge scientific
questions. (True)
7. One of the LEAST reliable ways of gaining knowledge is relying on tradition.
(True)
8. Quantitative research is usually characterized by the fact results are reported in
numerical terms (in numbers and figures). (True)
9. Funding sources are becoming LESS interested in programs evaluating their
effectiveness. (False)
10. The Social Work Code of Ethics promotes Social Workers conducting research.
(True)
Multiple Choice
1. Research is defined as:
a. The process of systematically answering questions
b. The process of systematically gaining information
c. The process of scientific inquiry
d. A way of looking at the world
e. None of the above
2. Knowledge is transferred in four ways. These four ways are:
a. tradition, others' experience, our experience, our best guess
b. others' experience, our experience, scientific inquiry, expert opinion
c. our experience, others' experience or knowledge, tradition, and the
scientific method
d. others' experience, our knowledge, tradition, and the internet
e. none of the above
3. Research methods are generally divided into five categories. These five are:
a. qualitative, descriptive, quantitative, evaluative, and single-subject
design
b. qualitative, descriptive, exploratory, and explanatory
c. descriptive, declarative, demonstrative, and investigative
d. explanatory, evaluative, directive, and inferential
e. none of the above
4. Qualitative research is MOST often associated with what:
a. explanatory research
b. research that determines why a phenomenon exists
c. research that is generalizable to a large population
d. exploratory research
e. none of the above
5. Quantitative research is MOST often associated with what:
a. explanatory research
b. research that determines why a phenomenon exists
c. research that is generalizable to a large population
d. exploratory research
e. none of the above
Short Answer
1. Qualitative research (and Social Work in general) has been influenced by several
disciplines. These include: ________________________, and
____________________________.
(Sociology and Anthropology)
2. The NASW _________________ of ___________________ recommends Social Workers conduct research.
(Code; Ethics)
3. _________________________ research is used to determine a client's progress or the overall effectiveness of a program.
(Evaluative)
4. ________________________ research seeks to explain the relationship between two entities.
(Quantitative)
Essay Questions
Is one type of research method inherently “better” than another? Why or why not?
Support your opinion with examples.
How would Social Work be different if there were no emphasis on research? Would it
help or hurt the profession?
DISCUSSION QUESTIONS
1. Why is research important?
2. Why does the Council on Social Work Education (CSWE) require that research be
taught in all accredited programs?
3. Develop an argument for/against the teaching of research methods in Social Work
education.
4. Discuss the merits and application of the four types of research.
CHAPTER GLOSSARY OF TERMS
Bias - The unknown or unacknowledged error created during the design of the study, in
the choice of a problem to be studied, over the course of the study itself, or during the
interpretation of the findings
Descriptive Research - Descriptive research is an attempt to delineate (describe) some
definitive characteristics about a single issue or population. It seeks to provide
information about a phenomenon using descriptive language (how many, how much,
what is the statistical average, etc.).
Evaluative Research - Evaluative research employs systematic methodology to measure
a client’s progress or to evaluate the effectiveness of a program or agency.
Evidence-based practices – Practices whose efficacy is supported by evidence
Explanatory research design - A type of research design that looks at the correlation
between two or more variables and attempts to determine if they are related, and if so, in
what way and how strongly they are related
Exploratory research design - A type of research design that allows us to use our
powers of observation, inquiry, and assessment to form tentative theories about what we
are seeing and experiencing
Hypothesis – A research statement about relationships between variables that is testable
and that can be accepted or rejected based on the evidence
Mixed-method research – A research design that uses both qualitative and quantitative
methods
Open-ended question - An inquiry that is worded in a way that allows the respondent to
answer in his or her own words as opposed to merely soliciting a yes-or-no response
Qualitative Research - Qualitative research is used when little or nothing is known
about a subject. We employ qualitative methods to allow us to develop our knowledge
base. In essence, we need to “find out” about this phenomenon.
Quantitative Research - Quantitative research distinguishes itself from qualitative
research in that these studies display findings mostly in numerical terms. This type of
research seeks to be explanatory in that it attempts to explain the relationship between
two entities (often referred to as variables).
Research - Research is defined as the process of systematically gaining information.
Single-subject design - A method for evaluating an individual’s progress over time that
measures whether a relationship exists between an intervention and a specific outcome
Theory – A statement or set of statements designed to explain a phenomenon based upon
observations and experiments and often agreed upon by most experts in a particular field
Variable – A variable is any attribute or characteristic that changes or assumes different
values.
APPLIED LEARNING ACTIVITIES
Activity #1:
Use the four ways of knowing—your own experience, your knowledge of others,
tradition, and scientific methods—to explain how you know the following statement is
factual: “Americans celebrate the Fourth of July.”
Activity #2:
You are working on a presidential campaign and want to know how your classmates
would vote in the upcoming election. How would you state your research question? What
research design would you choose to carry out your study, and why?
KEY POINTS
• Research is the process of systematically gaining information.
• Research is becoming increasingly important as governing agencies demand evidence that programs and practices are effective.
• Knowledge is gained through our own experiences, through others, through tradition, and through the use of scientific methods.
• There are two types of research methods: qualitative research methods and quantitative research methods. When both research methods are used, this is called a mixed-method design.
• Research questions may arise from personal experience, out of research articles or theories under study, or out of practice experience, and are born out of the researcher's personal interest in a subject.
• Hypotheses are research statements about relationships between variables that are testable and that can be accepted or rejected based on the findings from a study.
• Exploratory research designs allow the researcher to use his or her powers of observation, inquiry, and assessment to form tentative theories about what is being seen and experienced.
• Descriptive research designs use descriptive language to provide information about a phenomenon.
• Explanatory research designs attempt to explain the relationship between two or more factors.
• Evaluative research designs attempt to examine the effectiveness of programs and services.
• Single-subject designs are used to measure a person's progress over time.
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Diagram 1.1 – Methods Within the Two Fields of Research
[Diagram not reproduced here: it places descriptive, exploratory, explanatory, and evaluative methods within the fields of research. KEY: A – Qualitative Field of Research; B – Quantitative Field of Research; C – Mixed Method of Research.]
CHAPTER TWO: ETHICAL CONSIDERATIONS
CHAPTER OUTLINE
I. Introduction
   A. Overview of what will be covered in the chapter
   B. Ethical considerations
II. Historical Overview
   A. Nuremberg Code – Ten Principles
   B. National Research Act (1974)
   C. Belmont Report (1979)
III. Respect for Individuals
   A. Anonymity
   B. Confidentiality
   C. Informed Consent
IV. Beneficence
   A. Benefits vs. Risks of Research
   B. Milgram Experiment
   C. Debriefing
V. Justice
   A. Four Principles
   B. Tuskegee Syphilis Experiment (example)
VI. Other Considerations
   A. Faking Data
   B. Plagiarism
   C. Institutional Review Boards
TEST QUESTIONS
True/False
1. The finding that simply observing participants in a study can affect their behavior comes
from one famous study, conducted in the 1920s, that examined worker productivity. It has
now become known as the "Hawthorne Effect." (True)
2. Institutional Review Boards (known as IRBs) grew out of the Nazis' unethical
experimentation on prisoners. (True)
3. Stanley Milgram was interested in the amount of electrical shock a person could
withstand in his now famous study. (False)
4. The NASW Code of Ethics does not specifically mention ethics. (False)
5. Plagiarism is considered unethical behavior.
(True)
6. Failing to acknowledge another person's work is a form of plagiarism. (True)
7. Debriefing is the process of discussing with a subject what they thought about the
experiment after it is over. (False)
8. It is not necessary to obtain informed consent from a minor (child under the age of
18).
(False)
9. Confidentiality and anonymity mean the same thing. (False)
10. In research, special populations refer to those with diminished rights. (True)
11. Research can be manipulated to prove a point. (True)
12. If you were offering an incentive for participation in a study (say, a gift certificate to a
local fast-food restaurant), and a subject answered only one question and then
withdrew from the study, they would still be entitled to the gift certificate. (True)
Multiple Choice
1. Anonymity, in research, means:
a. that the researcher will not be able to be identified
b. that the researcher will not collect or report any information that
could identify an individual subject in the study
c. that the researcher will not divulge any specific information about subjects
such as age, ethnicity, etc.
d. all of the above
e. none of the above
2. Confidentiality, in research, means:
a. that the researcher will not be able to be identified
b. that the researcher will not collect or report any information that could
identify an individual subject in the study
c. that the researcher will not divulge any specific information about
subjects such as age, ethnicity, etc.
d. all of the above
e. none of the above
3. IRB stands for:
a. Institutional Review Board
b. Invitational Review Board
c. Informational Review Board
d. Inquisitional Review Board
e. None of the above
4. When conducting research with children, it is important to obtain:
a. the child’s written permission to participate
b. the parent’s written permission to participate
c. both the child's and the parent's permission
d. neither one
5. Which of the following are important parts of an informed consent:
a. the researcher’s name and contact information,
b. a statement about the purpose of the study
c. a statement of the risks involved
d. a statement of the right to refuse to participate
e. all of the above
f. none of the above
6. When conducting research, it is important to remember that a subject has the right
to withdraw from the study:
a. any time they wish
b. any time up until the experiment has begun
c. any time up until the end of the study
d. before they sign the informed consent
e. subjects are not allowed to withdraw
f. none of the above
7. Some people believe that one of the ethical issues with conducting research on
children and prisoners is:
a. they may not be reliable test subjects
b. they may not be in a position to fully consent to being a subject
c. they may skew the results of the experiment
d. all of the above
e. none of the above, there are no ethical dilemmas with conducting research
with children and prisoners
8. The Nuremberg Trials were:
a. an investigation into Germany’s treatment of prisoners during WWII
b. a study conducted on athletes during the 1936 Olympics in Nuremberg,
Germany
c. a jury trial that considered the legality of the Geneva Convention
d. none of the above
9. It is important to provide debriefing, when:
a. some form of intervention has been employed on a subject
b. a researcher asked questions that may be considered personal or sensitive
in nature
c. some form of deception has been employed
d. none of the above
10. Which of the following might be considered plagiarism:
a. copying a sentence verbatim and failing to cite the author
b. paraphrasing a sentence and failing to cite the author
c. using another person's ideas
d. all of the above
e. none of the above
Short Answer
1. With the invention of the internet, ________________________ has become
more of a problem among college students.
(Plagiarism)
2. The ______________________ ______________________ ______________________ had to be abandoned after only a few days because it was getting out of hand.
(Stanford Prison Experiment)
3. The federal government mandates that human subjects are given an _________________ _______________.
(Informed Consent)
4. The process of protecting a person's identity when conducting research is known as _________________________.
(Anonymity)
5. The National Association of Social Workers (NASW) has constructed a _________________ of __________________________ that guides Social Work practice, research, and the Social Worker's professional conduct.
(Code; Ethics)
Essay Questions
1. Select one side of the argument (for or against) regarding whether prisoners are fully able to provide informed consent in research and whether it is ethical to conduct research with prisoners, and defend your position.
2. Discuss the Milgram experiment in terms of its ethicality. Was the experiment ethical? Why or why not? Should Dr. Milgram have been allowed to conduct his experiment? Why or why not?
3. Review the NASW Code of Ethics stance on ethics in research. Briefly outline what the Code of Ethics says about ethical research.
DISCUSSION QUESTIONS
1. When is it acceptable to use deception in research? Devise a research design where it would be acceptable to use deception. How would you debrief your subjects afterwards?
2. How widespread do you believe plagiarism to be among your fellow students? Has the widespread use of the internet increased the incidence of plagiarism in your school? What should be the consequences for a student who is caught plagiarizing work?
3. Discuss the procedures of the Institutional Review Board in your school. In your opinion, does this board facilitate (help) researchers or hinder them from conducting research? Support your opinion with examples.
CHAPTER GLOSSARY TERMS
Anonymity - Anonymity is often confused with confidentiality. Anonymity, in research,
means that the researcher will not collect or report any information that will identify the
subject.
Beneficence - The obligation in research to do no harm and maximize benefits while
minimizing possible harm
Benefits - Positive values related to health or well-being that are expected, in research, to
outweigh the risks
Confidentiality – Confidentiality is the assurance that a researcher provides to subjects
that all information about them, and all answers they provide (whether experimental or
survey research) will remain only in the hands of the investigator. No other person
outside of the research process will have access to this information.
Debriefing - Debriefing is the process of fully informing subjects of the nature of the
research when some form of deception has been employed.
Faking data - Making up desired data or eliminating undesired data in research findings
Informed Consent – Informed consent is the process of letting potential subjects know
what the basic purpose of the study will be, that their participation is voluntary, and
obtaining their written permission to participate in the study.
Institutional Review Board – The Institutional Review Board (IRB) is a committee
mandated by the federal government. Any institution of higher learning that receives
federal money (including financial aid for students) will have a committee that oversees
research with human subjects and animals and ensures that all research is conducted in a
safe, ethical, and humane manner.
Justice- An ethical research principle regarding the fairness of distribution of benefits
and risks among all individuals, which can be formulated in four ways: to each person an
equal share, to each person according to individual need, to each person according to
individual effort, and to each person according to merit
Laundering data - A way of statistically manipulating the data collected to reduce
errors and make the findings more accurate
Plagiarism - Plagiarism means taking credit for something that the person has not done,
in whole or in part.
Respect for individuals - An ethical research principle according to which the autonomy
of an individual is acknowledged and those with diminished autonomy are protected
Risk - The possibility that psychological, physical, legal, social, or economic harm may
occur; sometimes expressed in levels, such as “no risk,” “little risk,” “moderate risk,” and
“high risk”
APPLIED LEARNING ACTIVITIES
Activity #1:
You are a case manager at a homeless shelter located in a large metropolitan city. The
shelter has one hundred beds and provides emergency shelter for men, women, and
families with children. Residents of the shelter are allowed to stay for up to ninety days.
The goals of the shelter are to help consumers obtain employment, secure permanent
housing, and access health care and to identify other services needed. Your task as a case
manager is to link them to the necessary services.
As an employee of the shelter for the past four years, you have developed a
hypothesis that certain case management techniques seem to be more effective than
others with the residents of the shelter. These techniques are intensive case management
(meeting with residents at least twice per week), developing goal attainment scales with
each resident, and requesting that each of your residents keep a detailed log of daily
activities.
1. What does the Code of Ethics say about evaluating your practice?
2. What should the overarching ethical principles in designing a study be?
3. Does the potential for harm exist? If so, what will you do to protect human
subjects?
4. How will you address the issues of confidentiality and anonymity?
Activity #2:
You are a social worker who works at a domestic violence shelter. Your job is to provide
services to children of the women who enter the shelter. The shelter has capacity for
fifteen children, and your caseload averages about ten to twelve children each month.
Your job is to link the children with counseling services, medical care, and other social
services as needed. In the past three months, you have noticed that there has been an
increase in aggressive behavior among the children in the shelter. You want to develop an
intervention utilizing a play therapy technique that you learned in your social work
classes and present your findings at a state conference for social workers. You design a
study in which the children in the shelter are randomly placed in one of two groups. One
group will receive play therapy, and the other group will receive no intervention. You
will then compare the behaviors of the two groups of children. Before you implement this
study, answer the following questions:
1. What does the Code of Ethics say about obtaining informed consent when
children are used as research participants?
2. How will you address the issue of confidentiality for those mothers and their
children who want to participate in the study?
3. What do you say to potential participants about their right to withdraw from
the study?
4. Does the potential for harm exist? If so, what will you do to protect the
subjects?
5. Are there potential benefits to the subject? What are they?
Activity #3:
You are a social worker working for an agency that runs group homes for adults who are
developmentally delayed. The four group homes have the capacity to house eight adults
each. The ages of the adults in these homes range from thirty-one to forty, and they
generally interact positively with each other. However, many of the adults engage in
inappropriate touching and other sexual behavior. Your agency has decided to begin
teaching sex education classes that will include a good touch/bad touch component in an
effort to reduce the inappropriate behavior. The agency will phase in the curriculum,
teaching it at one group home at a time. You decide to track the results for possible future
publication. To do this, you will record the number of incidents of inappropriate behavior
prior to the classes and then continue to record the number of incidents during and after
the completion of the classes. You will compare these to the number of incidents at the
other group homes that are waiting for the classes to be conducted. First, however,
consider the following ethical issues:
1. How will you obtain the informed consent of your research participants?
2. In addition to the participants’ informed consent, is it necessary to obtain any
other signatures?
3. Is it possible to adequately inform your research participants of what you are
asking of them?
4. If the answer to the previous question is no, is it ethical to proceed with the
classes even if they agree to it?
5. Are there potential risks or harmful effects to the subjects? Are there potential
benefits?
KEY POINTS
• The three guiding principles for protecting human rights in research are respect for individuals, beneficence, and justice.
• Three methods for protecting human rights in research are confidentiality, anonymity, and informed consent.
• Confidentiality is the assurance that a researcher provides to subjects that all information about them, and all answers they provide, will remain in the hands of the investigator and that no other person outside the research process will have access to this information.
• Anonymity is the practice of not collecting any information that will identify the subject.
• Informed consent is letting potential subjects know what the basic purpose of the study will be and that their participation is voluntary, and obtaining their written permission to participate in the study.
• Debriefing is the process of fully informing subjects of the nature of the research when some form of deception has been employed.
• Plagiarism is the unauthorized use of another person's work without properly giving him or her credit.
• Institutional review boards oversee the rights of human subjects involved in research.
ADDITIONAL TEACHING RESOURCES
Films/Media
Obedience [videorecording] : research carried out at Yale University / chief investigator,
Stanley Milgram ; narration and production, S. Milgram.
University Park, PA : PennState, [date unknown, c1969]
1 videodisc (45 min.) : sd., b&w ; 4 3/4 in.
Container title: Stanley Milgram’s Obedience.
Presents an experiment conducted by Dr. Stanley Milgram in May 1962 at Yale
University on obedience to authority. Describes both obedient and defiant reactions of
subjects who are instructed to administer electric shocks of increasing severity to another
person.
Web Links
National Institute of Health: Regulations and Ethical Guidelines - Belmont Report,
Nuremberg Code, Guidelines for Research Involving Human Subject (booklet).
http://ohsr.od.nih.gov/guidelines/index.html
Print Material
Ethical Standards of the National Association of Social Workers
5.02 Evaluation and Research
(a) Social workers should monitor and evaluate policies, the implementation of programs,
and practice interventions.
(b) Social workers should promote and facilitate evaluation and research to contribute to
the development of knowledge.
(c) Social workers should critically examine and keep current with emerging knowledge
relevant to social work and fully use evaluation and research evidence in their
professional practice.
(d) Social workers engaged in evaluation or research should carefully consider possible
consequences and should follow guidelines developed for the protection of evaluation
and research participants. Appropriate institutional review boards should be consulted.
(e) Social workers engaged in evaluation or research should obtain voluntary and written
informed consent from participants, when appropriate, without any implied or actual
deprivation or penalty for refusal to participate; without undue inducement to participate;
and with due regard for participants' well-being, privacy, and dignity. Informed consent
should include information about the nature, extent, and duration of the participation
requested and disclosure of the risks and benefits of participation in the research.
(f) When evaluation or research participants are incapable of giving informed consent,
social workers should provide an appropriate explanation to the participants, obtain the
participants' assent to the extent they are able, and obtain written consent from an
appropriate proxy.
(g) Social workers should never design or conduct evaluation or research that does not
use consent procedures, such as certain forms of naturalistic observation and archival
research, unless rigorous and responsible review of the research has found it to be
justified because of its prospective scientific, educational, or applied value and unless
equally effective alternative procedures that do not involve waiver of consent are not
feasible.
(h) Social workers should inform participants of their right to withdraw from evaluation
and research at any time without penalty.
(i) Social workers should take appropriate steps to ensure that participants in evaluation
and research have access to appropriate supportive services.
(j) Social workers engaged in evaluation or research should protect participants from
unwarranted physical or mental distress, harm, danger, or deprivation.
(k) Social workers engaged in the evaluation of services should discuss collected
information only for professional purposes and only with people professionally
concerned with this information.
(l) Social workers engaged in evaluation or research should ensure the anonymity or
confidentiality of participants and of the data obtained from them. Social workers should
inform participants of any limits of confidentiality, the measures that will be taken to
ensure confidentiality, and when any records containing research data will be destroyed.
(m) Social workers who report evaluation and research results should protect participants'
confidentiality by omitting identifying information unless proper consent has been
obtained authorizing disclosure.
(n) Social workers should report evaluation and research findings accurately. They should
not fabricate or falsify results and should take steps to correct any errors later found in
published data using standard publication methods.
(o) Social workers engaged in evaluation or research should be alert to and avoid
conflicts of interest and dual relationships with participants, should inform participants
when a real or potential conflict of interest arises, and should take steps to resolve the
issue in a manner that makes participants' interests primary.
(p) Social workers should educate themselves, their students, and their colleagues about
responsible research practices.
CHAPTER THREE: LITERATURE REVIEW
CHAPTER OUTLINE
I. What is a literature review?
A. Definition of Literature Review
B. Theoretical Perspective
II. Step 1: Conducting Your Search for Research Articles
A. Key Words
B. Searching for Sources
III. Step 2: Choosing Your Articles
A. Methodology Defined
B. Peer Reviewed Journals
IV. Step 3: Reviewing Your Articles
A. Main Parts of an Article
1. Abstract
2. Introduction
a. Problem Statement Defined
b. Independent Variable Defined
c. Dependent Variable Defined
3. Review of Literature
4. Methods
5. Results or Findings
6. Discussion
B. Your Critique of the Article
V. Step 4: Organizing Your Search Results
A. Index Cards
B. Computer Software
C. Groupings
VI. Step 5: Developing a Problem Statement or Hypothesis
A. Developing a Problem Statement
B. Developing a Hypothesis
VII. Step 6: Compiling Your Reference Page
TEST QUESTIONS
True/False
1. A literature review is a search of the existing published research to review what is
known about the topic you are studying, what interventions have been found to be
helpful, and suggestions for future research that can provide direction for your own
research. (True)
2. While a literature search can seem to be a waste of time in the beginning, the truth is,
it can save time in the long-run. (True)
3. An independent variable is the variable that predicts a change in the dependent variable
(sometimes referred to as predictor variable). (True)
4. The dependent variable is sometimes referred to as a descriptive variable. (False)
5. Keywords are words selected as search terms in any data base search. (True)
6. Internet articles are more reliable than peer-reviewed journals. (False)
7. Blind review means the reviewer is visually impaired. (False)
8. A citation is giving credit to another person for their work. (True)
Multiple Choice
1. A literature review is
a. a search of fiction literature
b. a book report about a particular author
c. a critique of a famous author’s work
d. a review of published research to determine what is known about a subject
e. none of the above
2. An independent variable is said to be
a. the variable that is predicted by another variable
b. the variable that is manipulated by the researcher to determine if it can
predict change in another variable
c. a variable that stands alone (independent) of the other variables
d. all of the above
e. none of the above
3. A dependent variable is said to be:
a. the variable that is predicted by another variable
b. the variable that predicts a change in another variable
c. a variable that stands alone (independent) of the other variables
d. all of the above
e. none of the above
4. Keywords are words that are
a. key components in a study
b. words used to define search terms in any data search
c. words that define the independent variable
d. words that define the dependent variable
e. none of the above
5. Literature reviews are considered to be a continuous process. This means
a. the researcher continues to return to the literature to check out hypotheses
against what is known
b. the researcher returns to their hypothesis to determine if it is supported or rejected
c. the researcher continues to refine and shape the research question based upon the
literature
d. none of the above
6. In qualitative research, the literature review is used
a. To determine what has been published on a phenomenon
b. To make sense of what has been found in the study
c. Both a and b
d. none of the above
7. Theoretical perspectives include which of the following
a. a model that makes assumptions
b. explains how a dependent variable is defined
c. explains how an independent variable is defined
d. both b and c
e. none of the above
8. Most Social Workers use which style guide
a. the American Psychiatric Association
b. the American Medical Association
c. the Chicago Style Manual
d. none of the above
9. A literature review in quantitative research
a. is conducted after developing a hypothesis and after research is completed
b. is conducted before developing a hypothesis and before any research is completed
c. Both a and b
d. Has nothing to do with quantitative research
Short Answer
1. A ___________________________ _______________________ helps to situate
a study within a theoretical perspective.
(Literature Review)
2. ______________________________ are usually found immediately following
the abstract of an article.
(Keywords)
3. _____________________ _____________________ journals have been
reviewed by experts for accuracy, methodological concerns, and other issues.
(Peer Reviewed)
4. A ______________________ __________________________ means that the
reader has no information about the author.
(Blind Review)
5. To ______________________ means that credit has been given to another person.
(Cite)
Essay Questions
1. Provide a discussion that compares and contrasts the use of the literature review
in qualitative and quantitative articles.
2. You are conducting a quantitative study. What steps would you take to conduct
a literature review? Provide a detailed discussion of the process.
3. Discuss the role of the literature review in hypothesis development.
DISCUSSION QUESTIONS
1. This chapter warns against over-reliance on the internet. Why is that?
2. Are peer-reviewed journals inherently better than other journals? Why or why not?
3. Is there a danger of researcher bias in literature reviews? If so, how can it be counteracted?
CHAPTER GLOSSARY TERMS
Abstract- A brief summary of the research and its findings, usually no more than 250
words
Citation- Means of giving credit to the authors for what is being reported; is organized
by last name then date
Independent variable- A variable that is controlled or manipulated by the researcher
Key word- Words that are found in the abstract of an article, and again as identifiers for
the article, and that can be used as search terms in a database search
Literature review- A search of the published research that allows you to synthesize what
is known about the topic you are studying
Methodology- The research methods, procedures, and techniques used to collect and
analyze information in research
Peer review- Review of an article’s content, accuracy, and methodological concerns by
experts in the field that occurs before an article is accepted for publication
Problem Statement- An open-ended statement that tells you what a study is intended to
do but does not predict what the results might be
Reference page- The alphabetical list of studies cited in a summary of a literature review
Theory- A statement or set of statements designed to explain a phenomenon based upon
observations and experiments and often agreed upon by most experts in a particular field
Theoretical perspective- A model that makes assumptions about something, attempts to
integrate various kinds of information, gives meaning to what we see and experience,
focuses on relationships and connections between variables, and has inherent benefits and
consequences
APPLIED LEARNING ACTIVITIES
Activity #1:
Use step 1 to conduct a literature search. Select an article and use the outline in figure 3.1
to review it.
Activity #2:
Conduct a literature search and write a brief literature review based on one or more of the
following research questions. Select five references and organize the descriptions of the
studies in a table, summarize the findings in a report, and create a reference page.
1. What is the relationship between socioeconomic status and child maltreatment?
2. What is the relationship between intensive case management services and
inpatient hospitalization admissions?
3. What is the relationship between domestic violence and substance abuse?
KEY POINTS
• A literature review is a search of published research that allows you to review what is known about the topic you are studying.
• A literature review helps shape a research design by giving the researcher an overview of previous studies on a topic.
• To conduct a search of the literature, you must identify key words to enter into a search database, such as Social Work Abstracts or PsycInfo.
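For instructors who want to demonstrate the "computer software" option mentioned in Step 4 of the chapter (organizing your search results), the short Python sketch below shows one possible way to record articles found in a database search and save them as a table that students can sort and group into themes. This is only an illustrative sketch: the two article entries and the file name are hypothetical placeholders, and a spreadsheet or reference-management program would work just as well.

    # Illustrative sketch only: one way to organize literature-search results.
    # The article entries below are hypothetical placeholders, not real citations.
    import csv

    articles = [
        {"author": "Author A", "year": 2010,
         "keywords": "case management; hospitalization",
         "method": "quantitative survey",
         "key_finding": "summarize the main finding here"},
        {"author": "Author B", "year": 2012,
         "keywords": "domestic violence; substance abuse",
         "method": "qualitative interviews",
         "key_finding": "summarize the main finding here"},
    ]

    # Save the entries as a CSV table that can be opened in a spreadsheet,
    # sorted by year or keyword, and grouped into themes for the review.
    with open("literature_review_table.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(articles[0].keys()))
        writer.writeheader()
        writer.writerows(articles)

The same table layout (author, year, keywords, method, key finding) can also be used for the five-reference table called for in Applied Learning Activity #2 above.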
ADDITIONAL TEACHING RESOURCES
Sample Assignment
Social Work Research Methods: Article Analysis
Option #1: Find, read and analyze an empirical social work-related article on the topic
you have chosen and report the following information.
Option #2: Read the assigned empirical social work-related article and report the
following information.
Basic Elements | Criteria for Evaluating the Basic Elements | CSWE Objective Addressed

Empirical Research Article
   Criteria: An article from an academic journal that presents a quantitative or qualitative study related to the topic.

Cite the article read
   Criteria: Use APA style: Author. (Date). Title. Source.

Introduction (section header used)
   What was the purpose of this study?
   Criteria: Relevant to social work (Objective C); develops or tests a theory (Objective D); more than one possible answer to the question addressed (Objective B).

Literature Review (section header used)
   How many articles were reviewed?
   Criteria: At least four to five major sources, and reference to classic works on this topic.
   What were the articles' publication sources?
   Criteria: Preferably academic journals and books written by academics, or historical documents.
   Time frame for articles cited
   Criteria: Most articles published within five years of the literature review article's publication date.
   How were the articles reviewed related to this study?
   Criteria: Had a same or similar research hypothesis or focus for the study, participants and setting, data collection and analysis methods.

Problem Statement (section header used)
   What was the research hypothesis or focus of the study?
   Criteria: Clearly stated, consistent with the purpose of the study, and had a plausible relationship between variables.
   What were the variables that the study measured or observed that lead to changes in other variables (independent or "cause" variables)?
   Criteria: Observable (by researcher or others), measurable (by indicators and dimensions), specific (categorized, limited, named).
   What were the variables that the study measured or observed that were changed or influenced (dependent or "effect" variables)?
   Criteria: Observable (by researcher or others), measurable (by indicators and dimensions), specific (categorized, limited, named).

Research Design (section header used)
   Was the study exploratory, descriptive, explanatory, or evaluative?
   Criteria: Type specified, appropriate to the topic, appropriately related to previous studies.
   How did the researchers select participants?
   Criteria: Population and sampling frame specified, participant selection method appropriate, number of participants (and groups) explained.
   What was the diversity of the participants? (Objective E)
   Criteria: Identify the diversity of the sample (age, gender, race, location, etc.).
   What methods did the researchers use to measure or observe the variables?
   Criteria: Tools were appropriate to the purpose of the study, valid, and reliable.
   What ethical considerations were addressed in this design? (Objective A)
   Criteria: Anonymity, confidentiality, informed consent; do the benefits outweigh the risk?

Data Collection (section header used)
   What was the social setting or data source used in the study?
   Criteria: Specified, appropriate to the topic, and appropriately related to previous studies.
   What was the data collection procedure used in the study?
   Criteria: Specified, appropriate, effective.

Data Analysis (section header used)
   What data analysis method was used?
   Criteria: Clearly specified, appropriate, effective.
   How generalizable were the findings?
   Criteria: Specified, limitations stated.

Findings (section header used)
   What were the significant findings?
   Criteria: How does this research inform social work practice?
   What were the strengths and limitations of the study?
   Criteria: Clearly identified to guide future research (Objective C).
CSWE Objectives:
A. The student will apply social work ethical principles to guide professional practice.
B. The student will apply critical thinking to inform and communicate professional judgments.
C. The student will engage in research-informed practice and practice-informed research.
D. The student will apply knowledge of human behavior and the social environment.
E. The student will engage diversity and difference in practice.
CHAPTER FOUR: VARIABLES AND MEASURES
CHAPTER OUTLINE
I. Variables in Research Design
   A. Conceptualizing a Variable
   B. Operationalizing a Variable
II. Viewing and Using Variables
III. Types of Variables
   A. Predictive Variables and Control Variables
   B. Demographic Variables
   C. Confounding Variables
IV. What is a Measure?
V. Defining and Operationalizing Measures
VI. Levels of Measurement
   A. Discrete Levels of Measures
      1. Nominal-level Variables
      2. Ordinal-level Variables
   B. Continuous Levels of Measures
      1. Interval-level Variables
      2. Ratio-level Variables
VII. Reliability and Validity in Measurement
   A. Types of Reliability
      1. Test-retest Reliability
      2. Equivalent Form Reliability
      3. Internal Consistency Reliability
      4. Inter-observer Reliability
      5. Intra-observer Reliability
   B. Types of Validity
      1. Face Validity
      2. Content Validity
      3. Criterion-Related Validity
      4. Concurrent Validity
      5. Construct Validity
         a. Construct
         b. Convergent
         c. Discriminate
TEST QUESTIONS
True/False
1. A measure is any tool or instrument used to gather data. (True)
2. When we operationalize something, we are measuring a person's ability to
understand unfamiliar words. (False)
3. Measures have two parts, the stimulus and the response. (True)
4. Responses are measured in one of two ways. (False)
5. I.Q. tests, depression inventories, and opinion surveys are all forms of different
measures. (True)
6. Measures often try to capture difficult concepts such as depression, happiness, or
alcoholism. (True)
7. One way to determine if a measure is nominal is that you can add or subtract
from the measure. (False)
8. Whenever possible, it is a good idea to avoid measures that have been
standardized. (False)
9. If a measure is said to be reliable we know that it is measuring the same thing
over and over. (True)
10. A measure can have reliability but not validity. (True)
Multiple Choice
1. There are four levels of measurement. These are:
a. exploratory, explanatory, descriptive and evaluative
b. nominal, ordinal, interval, and ratio
c. descriptive, ordinal, inferential, and explanatory
d. nominal, ordinal, interval, and heuristic
e. none of the above
2. If you are asking for a response of “yes” or “no” you would be measuring at what
level
a. interval
b. ratio
c. ordinal
d. nominal
e. none of the above
3. When a measure has been standardized, it means
a. norms have been established for that measure
b. it has been worded so that it applies to the greatest number of people
c. it is 8 ½” X 11”
d. it is accepted by a large number of practitioners
e. none of the above
4. If we are measuring at the interval level, responses must
a. be mutually exclusive
b. be rank ordered
c. have equal gradations between steps
d. all of the above
e. a and b only
f. b and c only
g. none of the above
5. Reliability
a. is the stability of a measurement instrument from one use to the next
b. is the ability of an item to ask a question consistently
c. is the ability of a researcher to consistently produce good work
d. is the least important aspect of an instrument
e. none of the above
6. There are five types of validity. They are:
a. external, internal, backward, forward and recumbent
b. internal, external, construct, content and criterion
c. face, concurrent, criterion-related, content and construct
d. none of the above
7. Professor Jones asks her students to rate their overall level of happiness. She
asks them to rate their feelings on a scale of 1 – 10 with one being the lowest
(very unhappy) and 10 being the highest (very happy). Professor Jones is
a. using a nominal-level of measurement
b. using an ordinal-level of measurement
c. using an interval-level of measurement
d. not enough information to tell
8. Professor Jones asks her students to rate their overall level of happiness. She
asks them to rate their feelings on a scale of 1 – 10 with one being the lowest
(very unhappy) and 10 being the highest (very happy). Two weeks later, Professor
Jones asks her students to again complete the same scale. Professor Jones is
a. conducting a reliability test on her scale
b. attempting to operationalize her scale
c. attempting to see how happy her students are
d. attempting to establish validity for her scale
e. not enough information to tell
9. ___________________ is a term that is applied to a measurement tool (such as a
scale, survey, poll, or test) that describes how much the tool measures what it is
meant to measure.
a. reliability
b. validity
c. variability
d. standardization
e. none of the above
10. Face validity is best described as
a. a measure that is said to have value
b. a measure that is standardized
c. a measure that appears to be valid
d. all of the above
e. none of the above
Short Answer
1. If a researcher measures something with the only requirement that the answers be
mutually exclusive, they are measuring at the ___________________ level.
(Nominal)
2. If a researcher measures something with the requirement that the answers be
mutually exclusive, rank ordered, contain equal gradations and have an absolute
zero they are measuring at the ____________________ level.
(Ratio)
3. ___________________________ a concept is how we are choosing to measure it
for our study.
(Operationalizing)
4. Whenever possible, a researcher should choose an instrument that has been
__________________________.
(Normed)
Essay Questions
1. Choose at least three issues of face validity. Discuss these issues and how they can be minimized in a research study.
2. You want to conduct a study where you measure the self-esteem of teenagers. Discuss how you would measure this concept. What would you look for in a standardized measure? How would you measure self-esteem if you were developing your own scale?
3. Compare and contrast the issues surrounding reliability and validity.
DISCUSSION QUESTIONS
1. Many times, instruments have been standardized on only one population or ethnicity (for example, white males). Is it ethical to use such an instrument with people other than those the scale was normed on? Why or why not?
2. "Operationalizing a scale can be manipulated to fit a researcher's own bias." Do you agree or disagree with this statement? Why or why not?
CHAPTER GLOSSARY TERMS
Conceptualizing a variable - How we translate an idea or abstract theory into a variable
that can be used to test a hypothesis or make sense of observations
Concurrent validity - How well a measure correlates with some other measure of the
same variable that is believed to be valid
Confounding variable - A variable that obscures the effect of another variable
Construct - The concept or the characteristic that an instrument is designed to measure
Control for - A means of subtracting the effects of certain independent variables on the
dependent variable by holding those variables constant
Control variable - Any variable that researchers control for (i.e., hold constant)
Construct validity - A form of validity related to the extent to which the items of an
instrument accurately sample a construct
Content validity - A form of validity related to how well the items in a measurement
represent the concept that is being measured
Criterion-related validity - A form of validity related to a measure’s ability to make
accurate predictions (also called predictive validity)
Demographics - The physical characteristics of a population, such as age, sex, marital
status, family size, education, geographic location, and occupation
Dichotomous variable - A variable with only two responses to choose from, such as yes
or no or treatment group or nontreatment group
Equivalent form reliability - A measure of consistency between two versions of a
measure
External validity - The extent to which a study’s findings are applicable or relevant to a group outside the study (also called
generalizability)
Face validity - A form of validity related to whether a measure seems to make sense (be
valid) at a glance
Internal consistency - The consistency among the responses to the items in a measure;
the extent to which responses to items measuring the same concept are associated with
each other
Internal validity - A measure of how confident the researcher can be about the
independent variable truly causing a change in the dependent variable (as opposed to
outside influences)
Interobserver reliability - A measure of reliability that is used when two or more
observers rate the same person, place, or event
Intraobserver reliability - A measure of reliability that is used when one observer rates
a person, place, or event two or more times
Interval-level variables - Variables that are measured in such a way that there are equal gradations between each item, items are rank
ordered, and each item is mutually exclusive and exhaustive
Measure - A tool or instrument that is used to gather data and has two parts: the item
(stimulus) and the response
Nominal-level variables - Variables that are measured in such a way that items are
mutually exclusive and exhaustive
Operationalizing - How we define a concept so that it can be measured
Ordinal-level variables - Variables that are measured in such a way that items must be
mutually exclusive, exhaustive, and rank ordered
Ratio-level variables - Variables that are measured in such a way that items are mutually
exclusive and exhaustive, they are rank ordered, there are equal gradations between
items, and there is an absolute zero
Reliability - The stability and consistency of a measurement
Standardized measure - A measurement or instrument that has been given to enough
people that we can compare one person’s scores to those of other test takers
33
Test-retest reliability - A method of examining the consistency of your measure from
one time to the next to establish reliability
Validity - How much a measurement tool measures what it is meant to measure
APPLIED LEARNING ACTIVITIES
Activity #1:
Identify the following variables as independent or dependent.
1. Amount of time studying and scores on a final exam
2. The number of divorces a mother has had and her children’s fear of intimacy in
adult relationships
3. Sex and fear of intimacy in adult relationships
4. Number of hours a child spends playing violent video games and the child’s
aggression scores on a child behavior scale
5. Scores on SATs and grade point average in the freshman year of college
6. Use of Ritalin or Cylert and amphetamine usage during adolescence
Activity #2:
Sandy is a social worker at a nursing home. Sandy notices that residents of the nursing
home who are more physically active seem to be less depressed, have more energy, and
generally seem to be healthier than the more sedentary residents. Sandy reviews the
literature and finds a relationship between physical activity and depression in the elderly.
Based on her literature review, Sandy develops the following research hypothesis:
Residents of the nursing home who exercise a minimum of twenty minutes a day, three
times a week, will have lower levels of depression than those who don’t. Sandy develops
the following research design to test her hypothesis:
• She recruits volunteers from the residents to participate in an exercise class. The class meets three times a week for thirty minutes. A total of twenty residents volunteer to participate in the exercise class.
• She asks residents who don't wish to participate in the exercise class to take a pre- and posttest as a comparison group. She is able to obtain a comparison group of nineteen residents who will take the pre- and posttests (but are not willing to participate in the exercise classes).
• She gives members of both groups a standardized depression inventory at the beginning of the study (pretest).
• After four weeks, a total of fifteen people complete the exercise classes and take the posttest. Sixteen of the original nineteen members of the comparison group complete the posttest. Sandy compares the scores of both groups.
1. What are the strengths of Sandy’s design?
2. What are the weaknesses of this study?
3. How can you determine measurement validity?
4. How can we determine if the measure is reliable?
KEY POINTS
• Variables can be categorized into three groups: independent variables, dependent variables, and control variables.
• The dependent variable is predicted by another variable, or is said to depend on the independent variable.
• The independent variable is often thought of as a variable that is manipulated by the researcher.
• Conceptualizing a variable refers to how we translate an idea or abstract theory into variables that can be used to test hypotheses or make sense of observations.
• A control variable is any variable that the researcher wants to hold constant (control for).
• A confounding variable obscures the effect of another variable.
• A measure is a tool or instrument that is used to gather data.
• There are four levels of variables: nominal-level variables, ordinal-level variables, interval-level variables, and ratio-level variables.
• The term operationalize refers to how we define a concept so that it can be measured.
• Reliability refers to the ability of a measure to remain stable and consistent over time.
• Validity is a term that describes how much the instrument measures what it is meant to measure.
• There are several types of validity: face validity, content validity, criterion-related validity, concurrent validity, construct validity, convergent validity, and discriminant validity.
35
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Example for Each Level of Measurement
1. Item (stimulus): Have you ever been treated for depression?
   Response options: Yes / No
   Level of measurement: Nominal
2. Item (stimulus): In the past month, I have thought about ending my life.
   Response options: Not at all / Sometimes / Frequently
   Level of measurement: Ordinal
3. Item (stimulus): How many days in the last week have you experienced episodes of crying?
   Response options: None / 1–2 days / 3–4 days / 5–6 days / Daily
   Level of measurement: Interval
4. Item (stimulus): How many times have you attempted suicide?
   Response options: (respondents enter a number)
   Level of measurement: Ratio
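As a supplementary illustration (not from the text), instructors could use the short Python sketch below to show how the level of measurement constrains the summary statistics that make sense. The example responses and the choice of statistics are assumptions made for teaching purposes only.

# Illustrative sketch only: example responses at each level of measurement and a
# summary statistic that is typically appropriate for that level. Values are invented.
from statistics import mode, median, mean

# Nominal: mutually exclusive, exhaustive categories -> report the mode
treated_for_depression = ["Yes", "No", "No", "Yes", "No"]
print("Nominal (mode):", mode(treated_for_depression))

# Ordinal: rank-ordered categories -> report the median rank
suicidal_thoughts = ["Not at all", "Sometimes", "Frequently", "Not at all", "Sometimes"]
rank = {"Not at all": 1, "Sometimes": 2, "Frequently": 3}
print("Ordinal (median rank):", median(rank[r] for r in suicidal_thoughts))

# Ratio: equal gradations and a true zero -> a mean is meaningful
suicide_attempts = [0, 1, 0, 2, 0]  # respondents enter a number
print("Ratio (mean):", mean(suicide_attempts))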
Web Links
Finding Psychological Measures (multiple links):
http://www.muhlenberg.edu/depts/psychology/Measures.html
This site provides links to multiple different measures and scales that are used in research
and practice.
36
CHAPTER FIVE: SAMPLING
CHAPTER OUTLINE
I. What is sampling?
A. Definitions – Population, Elements, Sample, Sampling Frame, Enumeration
Units, Sampling Unit
II. Random Selection and Random Assignment
A. Random Selection
B. Random Assignment
III. Sample Size
A. Representativeness of a sample size
B. Brief discussion of sample size
IV. Internal Validity and External Validity
A. External Validity or Generalizability
B. Internal Validity – Seven Threats
V. Probability Sampling
A. Definition of Probability Sampling
B. Probability Sampling Theory
VI. Probability Sampling Techniques
1. Simple random sampling
2. Systematic random sampling
3. Stratified random sampling
4. Cluster sampling
VII. Sampling Error (defined)
VIII. Non-Probability Sampling
A. Definition of non-probability sampling
B. Convenience sampling
C. Purposive sampling
D. Quota sampling
E. Snowball sampling
IX. Limitations of Non-Probability Sampling
37
TEST QUESTIONS
True/False
1. One of the goals of exploratory (qualitative) research is to draw conclusions about a
population based on a small sample size. (False)
2. There are two types of samples – probable and improbable. (False)
3. A sample should reflect the characteristics of the population from which it was drawn.
(True)
4. Not all sampling techniques yield an accurate portrayal of the population from which
the sample was derived. (True)
5. Sampling designs are divided into four main categories. (False)
Multiple Choice
1. A sample is
a. A group of people who have been chosen to give their opinions
b. A group of subjects selected from a larger population in the hope that
studying this smaller group will reveal important things about the larger
population.
c. A group of individuals who have been selected because of some characteristic they
have: such as age, eye color, ethnicity, etc.
d. Something a researcher uses to make predictions about a population
e. All of the above
f. None of the above
2. There are
a. Two types of sampling (probable and improbable)
b. Two types of sampling (simple and random)
c. Four types of sampling (probable, improbable, simple probable, and random
probable)
d. Four types of sampling (monochromatic, exhaustive, simple, and complex)
e. None of the above
3. If a researcher wants to ensure that each and every member of a group has an
equal chance of being included in their study, they will need to use:
a. Probability sampling
b. Non-probability sampling
c. Simple probable sampling
d. Monochromatic sampling
e. A very large sample size
f. None of the above
4. Random selection implies:
a. Every member of the population is similar
b. Every member of the population has an equal chance of being selected
c. The researcher has no idea who is selected or not
d. A and B
e. B and C
f. All of the above
g. None of the above
5. One of the benefits of random selection is:
a. The researcher has no idea who has been selected
b. The researcher can be certain that all group members are similar
c. The researcher can feel more confident about generalizing results
d. All of the above
e. None of the above
6. Placing all group members' names in a hat and then drawing them out would be an
example of what?
a. Shoddy research techniques
b. Convenience sampling
c. Researcher bias
d. Random sampling
e. None of the above
7. Sampling error can be reduced by
a. making sure you have a sample that is very similar
b. making sure the researcher has accounted for researcher bias
c. having someone other than the researcher choose the sample
d. increasing the sample size
e. None of the above
8. Jane is a Case Manager who recruits clients for a study. Jane is using which kind of
sampling?
a. Simple random
b. Probability
c. Problematic
d. Non-problematic
e. Convenience
f. None of the above
39
9. If every member of a sample has an equal chance of being in either the treatment
group or the non-treatment group, the researcher could be said to be using:
a. Random selection
b. Probability selection
c. Good research techniques
d. Random assignment
e. None of the above
10. Experimental mortality is
a. one threat to internal validity
b. the conclusion of an experimental research study
c. the time-line of the life of the study
d. all of the above
e. None of the above
Short Answer
1. ________________________________ sampling is a method of drawing a sample
from a population in two or more stages. (Cluster)
2. A ___________________ is a group of subjects selected from a larger population in
the hope that studying this group will generalize to the population. (Sample)
3. ____________________________ _____________________ implies that each and
every member of the population has an equal chance of being selected for the study
(being included in the sample). (Probability Sampling)
4. ______________________ sampling is simply as the name implies – selecting a
sample based on your knowledge of a population or drawing a sample with some predetermined characteristics in mind. (Purposive)
5. One of the goals of sampling in __________________________ research is to
generalize to a larger population. (Explanatory)
Essay Questions
1. List the two main types of sampling theory and provide a brief discussion of each.
Include in your discussion an overview of the strengths of each and any limitations
they may contain.
40
2. Provide an overview of the differences and similarities in sampling techniques
between qualitative and quantitative research. Include such issues as how samples
are drawn and the purpose of each.
3. Discuss the issue of generalizability as it applies to both qualitative and quantitative
studies.
DISCUSSION QUESTIONS
1. Is it possible for a researcher to manipulate a sample to gain the results they desire? If
so, how?
2. What are the ethical issues involved with sampling? Which types of sampling have
greater ethical concerns? Why?
3. This chapter suggests a formula for establishing sample size. Do you believe this
formula is an accurate way to draw a sample from a population? Why or why not?
CHAPTER GLOSSARY TERMS
Cluster sampling - A method for drawing a sample from a population in two or more
stages through a process of listing naturally occurring clusters within the population and
sampling the clusters (sometimes referred to as multi-stage sampling)
Convenience sampling - Reliance on available subjects; one of the most frequently used
sampling techniques in social work research
Elements - Individual members of a population or sample
Enumeration unit - A unit containing one or more units listed in the sampling frame
External validity - The extent to which a study’s findings are applicable or relevant to a
group outside the study (also called generalizability)
Internal validity - A measure of how confident the researcher can be about the
independent variable truly causing a change in the dependent variable (as opposed to
outside influences)
Non-probability sampling - Techniques for selecting a sample in which every individual
does not have a greater-than-zero chance of being selected
Population - A set of entities from which a sample can be drawn to either describe a
subsection of that population or generalize information to the larger population
Probability sampling - A sampling technique in which each and every member of the
population has a non-zero chance of being included in the sample
41
Probability sampling theory - A theory that requires the researcher to select a set of
elements from a population in such a way that those elements accurately portray the
parameters of the total population
Purposive sampling - Selection of a sample based on knowledge of a population or with
some predetermined characteristics in mind
Quota sampling - A means of selecting a stratified non-random sample in which a
researcher divides a population into categories and selects a certain number (a quota) of
subjects from each category
Random Assignment – Random assignment occurs when subjects within an
experimental research design are placed into groups entirely by chance. Random
assignment increases internal validity (how confident the researcher can be about the
intervention truly causing a change in the dependent variable as opposed to other, outside
influences) and helps reduce the likelihood of bias because each subject has equal
probability of being placed in each group.
Random selection - Means of selecting a sample from a larger population in which each
member of the population has an equal chance of being selected for a study
Representativeness - A condition that is met when characteristics of the sample are
similar to those of the population from which the sample was drawn
Sample - A group of subjects (elements) selected from a larger population
Sampling error - An error that occurs because only part of the population is directly
contacted
Sampling frame - A list of all elements or other units containing the elements in a
population
Sampling unit - A population selected for inclusion within a sampling frame
Simple random sampling - A method of sampling in which a sample is generated
randomly from a population in which each person has been assigned a number
Snowball sampling -- A method of sampling in which the researcher starts with one or
more members of the group being studied to gain access to other members of the same
group, through a referral system, for the purpose of building the sample
Stratified random sampling - A method of sampling in which the population is divided
into subgroups (strata) and a sample is drawn from each stratum
Systematic random sampling - A method of sampling in which every nth number is
selected at random (for example, every third person or every tenth person)
42
APPLIED LEARNING ACTIVITIES
Activity #1:
Susan is a hospital social worker. She is interested in investigating how effective support
groups are for patients who have been diagnosed with a terminal illness. Susan recruits
volunteers for the support groups by posting signs around the hospital. Susan asks a
friend of hers at another hospital (which does not have a support group) to recruit
volunteers to serve as a comparison group. Susan and her friend ask each of the groups to
complete a standardized instrument as pre- and posttests (with a period of eight weeks in
between the tests).
1. What type of sampling is Susan using—probability or non-probability? What is
the reason for your answer?
2. Which sampling design are Susan and her friend using?
3. What are the strengths and weaknesses of Susan’s sampling design? What design
would you choose? How would you carry it out?
Activity #2:
Raul is a social worker at a mental health agency. Raul recruits a total of sixty volunteers
for a study that will examine the effects of teaching assertiveness skills to people who are
diagnosed with mood disorders. Raul places the names of all the volunteers for the study
into a hat and draws out each name. As he draws the names, he assigns them either to the
group that will learn assertiveness skills or to a waiting list (for the next group to start).
The wait-list group will serve as the control group for the study.
1. What type of sampling is Raul using—probability or non-probability? What is the
reason for your answer?
2. Which sampling design is Raul using?
3. What are the strengths and weaknesses of Raul’s sampling design? What design
would you choose? How would you carry it out?
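For instructors who want to demonstrate Raul's hat draw (Activity #2) concretely, the brief Python sketch below is one possible illustration; the volunteer labels and the even 30/30 split are assumptions, not material from the text. Shuffling the full list of names and splitting it is the computational equivalent of drawing names from a hat and alternating the assignments.

# Minimal sketch of random assignment: the electronic version of drawing names from a hat.
# The volunteer labels and group sizes are invented for illustration only.
import random

volunteers = [f"Volunteer {i}" for i in range(1, 61)]  # Raul recruited sixty volunteers

random.shuffle(volunteers)                    # mix the "hat"
midpoint = len(volunteers) // 2
assertiveness_group = volunteers[:midpoint]   # first thirty names drawn
wait_list_control = volunteers[midpoint:]     # remaining thirty names (the control group)

print(len(assertiveness_group), "assigned to the assertiveness-skills group")
print(len(wait_list_control), "assigned to the wait-list control group")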
Activity #3:
Maria is a social worker at a shelter for battered women. Maria works with the children
of women who come into the shelter. Maria has observed that male children with younger
brothers or sisters tend to be more aggressive when playing than female children with
younger brothers or sisters. Maria designs a study to measure the number of times a child
hits, slaps, shoves, kicks, or punches other children. She will also record the age and sex
of the child as part of her study. Maria recruits ten boys and ten girls (all with younger
siblings) to be in a play group for one hour a day. She then observes their interactions.
43
1. What type of sampling is Maria using—probability or non-probability? What is
the reason for your answer?
2. Which sampling design is Maria using?
3. What are the strengths and weaknesses of Maria’s sampling design? What design
would you choose? How would you carry it out?
KEY POINTS
• Sampling is the process of selecting a group of subjects from a larger population in the hope that studying this smaller group (the sample) will reveal important things about the larger group (the population) from which it was drawn.
• Probability sampling is a method of sampling in which everyone in the population has an equal chance of being randomly selected for the study and randomly assigned to either the experimental group or the comparison group.
• There are four techniques for conducting probability sampling: simple random sampling, systematic random sampling, stratified random sampling, and cluster sampling.
• Non-probability sampling is a method for selecting a sample in which not every member necessarily has a greater-than-zero chance of being selected.
• There are four techniques for conducting non-probability sampling: convenience, purposive, quota, and snowball sampling.
• Internal validity refers to how confident the researcher can be about the intervention truly causing a change in the dependent variable. There are seven threats to internal validity: extraneous events, passage of time, testing effect, instrumentation problems, selection bias, mortality of sample, and lack of causal direction.
• External validity (also referred to as generalizability) is the extent to which a study's findings are applicable or relevant to a group outside the study. Characteristics that affect external validity include whether other researchers can duplicate the study, how the respondents of the measure were chosen, and the size of the sample.
44
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Sampling Diagram
Probability Sampling: Random, Systematic, Stratified, Cluster
Non-Probability Sampling: Convenience, Purposive, Quota, Snowball
Draw from the population to get the sample. Generalize findings from the sample back to
your population.
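As a supplement to the diagram, the hedged Python sketch below (not part of the text; the population, strata, and sample sizes are invented) illustrates how three of the probability sampling techniques differ mechanically: simple random, systematic random, and stratified random sampling.

# Illustrative sketch of three probability sampling techniques.
# The population, strata, and sample sizes are invented for teaching purposes.
import random

population = [{"id": i, "stratum": "female" if i % 3 else "male"} for i in range(1, 101)]

# Simple random sampling: every element has an equal chance of selection.
simple_sample = random.sample(population, 10)

# Systematic random sampling: a random start, then every nth element.
n = len(population) // 10                      # sampling interval
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified random sampling: divide into strata, then sample within each stratum.
stratified_sample = []
for stratum in ("female", "male"):
    members = [p for p in population if p["stratum"] == stratum]
    stratified_sample.extend(random.sample(members, 5))

print(len(simple_sample), len(systematic_sample), len(stratified_sample))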
45
CHAPTER SIX: QUALITATIVE RESEARCH DESIGNS
CHAPTER OUTLINE
I. How is Qualitative Research Used?
II. Descriptive Inquiry
A. Individual vs. Collective Understanding
B. Focus Groups
C. Assessments
III. Speculative Inquiry
A. Inductive vs. Deductive Research
IV. Qualitative Research Methods
A. Phenomenological Design
B. Grounded Theory Design
C. Ethnographic Design
D. Case Study
1. Illustrative Case Studies
2. Exploratory Case Studies
3. Critical Instance Case Studies
4. Program Effects Case Studies
5. Prospective Case Studies
6. Cumulative Case Studies
7. Narrative Case Studies
V. Data Collection
A. Observations
1. Overt Observational Research
2. Covert Observational Research
3. Four Observation Roles
4. Types of Observations: Descriptive, Focused, Selective
B. Interviews
1. Conversational Interview
2. Interview Guide
3. Open-Ended Interview
4. Fixed-Response Interview
VI. An Example Qualitative Study
A. Research Outline
B. Interview Questions
C. Gaining Access
D. Selection Criteria
E. Ethical Considerations
F. Recording Information
G. Analysis
H. Literature Review
I. Writing the Report
TEST QUESTIONS
True/False
1. It is generally agreed that qualitative research is a research method that is
employed when little or nothing is known about a subject. (True)
2. Some people argue that qualitative methods are better suited for studies on
complicated subjects such as a person's comfort with death, how it feels to
be employed, or how a child views the drinking habits of an alcoholic parent.
(True)
3. Qualitative research is often called explanatory research. (False)
4. Qualitative research deals with inductive inquiry and deductive inquiry. (False)
5. Qualitative researchers believe that the researchers themselves are an integral part
of the research process. (True)
6. Qualitative research can be defined as descriptive methods that are based upon
collected observations and quotes (data) to generate common themes. (True)
7. A case study can only be done on a single person or event. (False)
8. Ethnography is a particular research design in qualitative studies that is centered
on cultural behavior. (True)
9. Phenomenological research designs seek to understand the lived experience of
those who are being studied (the individual’s perceptions, thoughts, ideas and
experiences). (True)
Multiple Choice
1. It is generally agreed that qualitative research is largely:
a. exploratory
b. explanatory
c. inductive
d. deductive
e. both a and c
f. both b and d
g. none of the above
2. It is generally agreed that quantitative research is largely:
a. exploratory
b. explanatory
c. inductive
d. deductive
e. both a and c
f. both b and d
g. none of the above
3. Qualitative researchers are interested in:
a. gaining an in-depth understanding of a phenomenon
b. being able to generalize to other (larger) populations
c. making sure they have enough time to record their findings
d. all of the above
e. none of the above
4. The inductive method goes
a. from the general to the specific
b. from the specific to the general
c. from one topic to another
d. in a different direction with each study
e. none of the above
5. A narrative is a type of
a. Quantitative research
b. Assessment tool
c. Case study
d. None of the above
6. There are five basic qualitative methods. These are:
a. biological, psychological, sociological, anthropological, and humanistic
b. ethnography, grounded theory, phenomenological study, biography,
and case study
c. ethnography, hermeneutics, phenomenological, psychological, and
sociological
d. biological, sociological, anthropological, phenomenological, and grounded
theory
e. none of the above
7. In qualitative studies, a semi-structured interview
a. would never be used
b. is an interview that should only be used as a last resort
c. is an interview that leaves room for additional questions
d. is considered to be the mark of an inexperienced researcher
e. none of the above
8. In a qualitative study, gaining access is
a. an important issue
b. a minor detail
c. not a concern
d. none of the above
9. In qualitative research, a grand tour is a
a. Research question
b. Study design
c. Type of sampling technique
d. none of the above
Short Answer
1. While ______________________________ research relies primarily on
numbers and statistical analysis, _____________________________
research is more concerned with observations and quotes from participants.
(Quantitative
Qualitative)
2. Qualitative research is often called ____________________ research.
(Exploratory)
3. Quantitative research is often called ______________________ research.
(Explanatory)
4. A ___________________ is one or more statements about how something
works.
(Theory)
5. Sometimes qualitative research is referred to as ____________________
research or naturalistic inquiry.
(Field)
49
6. Qualitative researchers believe that ________________________
____________________ are an integral part of the research process.
(Researchers Themselves)
7. ______________________________ methods move from the general to the
specific.
(Deductive)
8. ______________________________, like many other qualitative methods, has
its roots in anthropology.
(Ethnography)
9. _______________________________ seeks to understand the lived
experience of those being studied.
(Phenomenology)
10. A __________________________ interview is limited to the questions a
researcher wants answered.
(Structured)
Essay Questions
1. Compare and contrast the two types of studies (qualitative and quantitative).
Include aspects of how the two studies are alike and how they differ.
2. Provide a discussion of qualitative studies. Include the following:
a. when would it be appropriate to use a qualitative design
b. what would be the approach you would use
c. how does the issue of gaining access enter in (if at all)
d. how would you interpret your findings
3. Discuss the issue of gaining access in a qualitative study. What qualitative design
would you employ? What are the problems/concerns? How would you address
the issue of informed consent?
50
DISCUSSION QUESTIONS
1. Assume you want to conduct a qualitative study of a deviant population (illegal
drug users for example). What questions would you want to ask? What would be
ethical issues you would need to be aware of? How would you gain access?
2. In your opinion, which type of study (qualitative or quantitative) is more difficult?
Why?
3. Discuss the issue of selecting a sample for a qualitative study. How do you know
when you have a sufficient sample size? How are subjects recruited?
CHAPTER GLOSSARY TERMS
Case study - A detailed analysis of a single person or event (or sometimes a limited
number of people or events)
Deductive research - The process of reasoning that moves from a general hypothesis or
theory to specific results through the use of quantitative methods
Descriptive inquiry - A strategy used in qualitative research to develop a greater
understanding of issues by describing individual experiences
Ethnography - A type of qualitative research design that is centered on cultural behavior
and seeks to record the cultural aspects of a group
Focus group - An open discussion in which individuals share their opinions about or
emotional responses to a particular subject
Grand tour questions - Large, overarching questions that identify the broad intent of a
research study and are based on the existing knowledge (i.e., experience, knowledge from
others, tradition, and prior research)
Grounded theory - A type of research design that utilizes a recursive form of question
and analysis
Inductive research - The gathering of information based upon observations and quotes
that is organized into common themes
Phenomenology - A type of research design that seeks to understand the lived experience
of the individuals who are being studied (their perceptions, thoughts, ideas, and
experiences)
Semi-structured interview - Prepared research questions that are used to start the
interview process but allow additional information to be solicited
Speculative inquiry - A strategy used in qualitative research to generate a theory based
on common experiences
51
Structured interview - An interview that is limited to the research questions the
researcher wants answered
APPLIED LEARNING ACTIVITIES
Activity #1:
You are a social worker at a homeless shelter. Your consumer population has changed
recently, and you are seeing more women with children at the shelter than previously.
You are interested in understanding what issues are common among these families so that
you can provide some general services targeting this population. Few studies have been
published about the experiences of homeless mothers. You have asked to do all the case
management for these women.
1. What questions would you ask?
2. What research design would you use?
3. How would you select participants?
4. How would you record their information?
5. How would you protect their confidentiality?
Activity #2:
You are a social worker for a juvenile correction facility. You have noticed that there is
an unspoken rank and file among the residents (for instance, certain kids are allowed to
cut in the lunch line while others are not) as well as a code of behavior that staff has
observed. You are interested in understanding the culture of this facility through the eyes
of the residents. You will observe the residents you have in your caseload (this is your
sample) and interview them during their regular weekly individual meetings with you.
1. What questions would you ask?
2. What research design would you choose?
3. What researcher role would you use?
4. How would you record their information?
5. How would you protect their confidentiality?
Activity #3:
You are a social worker for a women’s domestic violence shelter. You are interested in
studying the lived experiences of the women in the shelter to understand any perceptions,
thoughts, ideas and experiences that they may have in common. You will utilize the
nightly support group setting for the women (this is your sample) to explore and compare
their experiences.
1. What questions would you ask?
2. What researcher role would you assume?
3. What issues would you encounter in attempting to gain access to subjects?
4. How would you record their information?
5. How would you protect their confidentiality?
52
KEY POINTS
• The four most common qualitative research designs are ethnography, grounded theory, phenomenology, and case study.
• Ethnographic research designs are centered on cultural behavior. This research design seeks to record the cultural aspects of a group.
• Grounded theory is a type of research design that utilizes a recursive form of question and analysis. The researcher begins with a set of questions that lead to further questions. From the individual information collected, common themes are identified.
• Phenomenological research designs seek to understand the lived experience of those who are being studied.
• A case study is a detailed analysis of a single or limited number of people or events. A case study can be illustrative, exploratory, a critical instance, program effects, prospective, cumulative, or narrative.
ADDITIONAL TEACHING RESOURCES
Web Links
This link takes you to The Qualitative Report, which maintains a list of qualitative research
web sites:
http://www.nova.edu/ssss/QR/web.html
The Qualitative Methods Workbook by C. George Boeree:
http://webspace.ship.edu/cgboer/qualmeth.html
This is an excellent resource for students and instructors.
53
CHAPTER SEVEN: QUANTITATIVE RESEARCH DESIGNS
CHAPTER OUTLINE
I. Getting Started
A. The "so what" rule
II. Developing a Testable Hypothesis
III. What is Descriptive Research?
A. Definition of descriptive research
B. Non-standardized Measures
IV. Correlation versus Causation
A. Correlational Relationship defined
B. Causal Relationship defined
V. Data Collection
A. Archival or Retrospective Research
VI. Cross-Sectional and Longitudinal Designs (defined)
VII. Group Research Designs (GRD)
A. Pre-experimental designs
1. One-group posttest only design
2. One-group pretest and posttest design
B. Quasi-experimental designs
1. Posttest only with non-equivalent comparison group design
2. Pretest and posttest with non-equivalent comparison group design
3. Time-series design
4. Time-series design with non-equivalent comparison group
C. Experimental designs
1. Posttest only control group design
2. Pretest-posttest control group design
3. Solomon's four-group design
54
TEST QUESTIONS
True/False
1. The “so what” rule means all research should be asked the question, “What is the
value of this study to Social Work?” (True)
2. When conducting a literature review, we are attempting to answer several
questions such as: what is known to date and what level of knowledge exists.
(True)
3. Research is generally conducted in “giant leaps” so that it answers major
questions. (False)
4. Causality, in research, is difficult to prove. (True)
5. Correlation implies that two or more variables are linked together in a
relationship. (True)
6. Causality implies a relationship between two variables in which a change in one
variable will result in a change in the other variable. (True)
7. A compounding variable is a variable that obscures the effect of another variable.
(False)
8. A control variable is any variable that the researcher wants to hold constant
(control for) within the experiment. (True)
9. When developing a research question, the researcher should first ask: Is this
empirical? (True)
10. Cross-sectional designs are designs that track a group of people over a long period
of time. (False)
Multiple Choice
1. In a quantitative study, a literature review can help the researcher
a. determine a research hypothesis
b. determine what level of sophistication currently exists (descriptive,
explanatory, etc.)
c. determine what is known about a phenomenon
d. all of the above
e. none of the above
55
2. Correlation implies that
a. two or more variables are related
b. an independent variable is causing a change in the dependent variable
c. variables have some degree of causation
d. all of the above
e. none of the above
3. Group research designs are divided into the following:
a. pre-experimental, experimental and associational
b. associational, correlational, and causational
c. pre-experimental, quasi-experimental and experimental
d. longitudinal, latitudinal, and empirical
e. none of the above
4. In research notation, the following X O would signify what?
a. a posttest only design
b. a non-equivalent comparison group design
c. an experimental design
d. none of the above
5. In research notation, what would these symbols imply: R and X?
a. the R stands for research and the X implies intervention
b. the R stands for random assignment and the X implies intervention
c. the R stands for research and the X means random assignment
d. none of the above
6. Quasi-experimental designs are designs
a. where group members are randomly assigned to either the treatment or
control group
b. where group members are selected based on their gender
c. where it is either not possible or not feasible to randomly assign
members
d. none of the above
56
7. Research generally proceeds
a. in giant leaps
b. in small incremental steps
c. at a snail’s pace
d. according to whatever trend is currently popular
e. none of the above
8. A longitudinal design is concerned with
a. how a single group changes over time
b. the difference between a randomly assigned group and a control group
c. why group members decide to drop out of a study
d. none of the above
9. When are pre-experimental designs useful?
a. they are never useful
b. when the researcher is in a hurry
c. when subjects can’t be recruited for an experimental design
d. when the research question is fairly simple, unsophisticated and it is
impossible to set up experimental conditions
e. none of the above
10. One of the benefits of random assignment is:
a. it increases how confident the researcher can be about the
intervention truly causing a change in the dependent variable
b. it decreases the chance that subjects will drop out of the experiment
c. it increases the confidence the researcher has about the truth of subjects'
answers
d. there is no benefit
Short Answer
1. In research, _________________________________ is easier to prove than
_________________________.
(Correlation Causation)
2. A ____________________________________ variable is any variable that the
researcher wants to hold constant.
(Control)
3. _______________________________ _________________________ designs are
stronger than pre-experimental designs but weaker than _____________________
designs.
(Quasi-Experimental, Experimental)
57
4. ________________________________ does not imply _______________________.
(Correlation, Causation)
Essay Questions
1. Give a brief overview of the three main types of group research designs. Include a
brief description of the strengths and weaknesses of each type.
2. Provide an overview of some of the factors that must be taken into consideration
before conducting a study. Be sure to carefully list each factor and provide some ideas as
to what can be done to offset any problems it may cause.
3. Provide a discussion of how a literature review can shape or define a study. What
factors in the literature review will lend themselves to the design? What should the
researcher be looking for within a literature review?
DISCUSSION QUESTIONS
1. How does the issue of ethics fit into group research design? Does the researcher have
to be careful to not violate the rights of subjects? Why or why not?
2. Some people argue that assigning clients to a waiting list (and others to an
experimental group) is not ethical. Do you agree or disagree? Why or why not?
3. You want to conduct an experiment with some of your clients by recruiting volunteers
for two groups (one for a comparison group and one for the intervention). However,
over half of your clients are from varied ethnic backgrounds including many recent
immigrants to the United States. What steps could you undertake to gain their
cooperation? How would you deal with the issue of informed consent?
58
CHAPTER GLOSSARY TERMS
Causal relationship - A relationship in which three conditions must be met: (1) the independent
variable must come before the dependent variable (known as temporal ordering), (2) the
independent and dependent variables must be correlated, and (3) the correlation between
the independent and dependent variables cannot be explained by the impact of another
variable
Correlational relationship - A relationship between two or more variables in which a
change in one variable may be associated with some degree of change in the other
variable
Cross-sectional design - A research design that looks at a cross-section or subset of a
population at one point in time
Longitudinal design - A research study that follows one cohort over a period of time
Non-standardized methods - Informal methods of collecting data, such as the use of
broad and open-ended questions (recorded for accuracy) or a journal or field notes
APPLIED LEARNING ACTIVITIES
Activity #1:
Your agency provides emergency food and used clothing to clients. You are tasked with
describing how satisfied the clients at your agency feel about the services that they have
received.
1. List the questions that you would ask in order to collect information on clients'
satisfaction with these services.
2. How would you collect data using a quantitative method?
3. If you were to use a cross-sectional design, which methods would you use to
collect information, and why?
Activity #2:
You are a case manager at an outpatient treatment facility. You teach a class on the
effects of alcohol, methamphetamines, and other drugs on the body. Clients volunteer to
attend your classes, but once they sign up to attend, attendance is expected. Your classes
are held one time a week, for one hour, and last for eight weeks. You give everyone a test
the night before classes begin and again at the end of the eight-week period. You then
compare their scores.
59
1. Would this type of research be considered pre-experimental, quasi-experimental,
or experimental? Why?
2. What type of group research design is this?
3. Are there ways that the research design could be made stronger? If so, what could
be done to change the research design to make it stronger?
Activity #3:
You are a hospital social worker who has been asked to start a support group for people
who are attempting to quit smoking. Because the size of the group is limited, you
randomly assign people to the group or to a waiting list for the next group, which will
start in four weeks. To randomly assign people, you place everyone’s name in a bowl and
draw names until you have filled the group. Everyone agrees, knowing they have an
equal chance of being in the first support group or on a waiting list for the next one.
Before the group begins, you ask both the people in the support group and the people on
the wait list to fill out a questionnaire about the number of cigarettes they smoke per day.
At the end of the four-week support group, you ask both groups to fill out the same
survey.
1. Would this type of research be considered pre-experimental, quasi-experimental,
or experimental? Why?
2. What type of group research design is this?
3. Identify the independent and dependent variables in this study.
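To show how results from a pretest-posttest control group design like this one might be summarized, instructors could adapt the hypothetical Python sketch below. The cigarette counts are invented solely for illustration; the point is the comparison of average change between the two randomly assigned groups.

# Hypothetical illustration of analyzing a pretest-posttest control group design.
# The cigarette counts below are invented; they are not data from the text.
from statistics import mean

support_group = {"pre": [20, 15, 25, 30, 18], "post": [12, 10, 20, 22, 15]}
wait_list     = {"pre": [22, 17, 24, 28, 19], "post": [21, 18, 23, 27, 20]}

def average_change(group):
    """Mean change in cigarettes per day from pretest to posttest (negative = reduction)."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

print("Support group mean change:", average_change(support_group))
print("Wait-list control mean change:", average_change(wait_list))
# The independent variable is group membership (support group vs. wait list);
# the dependent variable is the self-reported number of cigarettes smoked per day.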
KEY POINTS
• Causal relationships exist when one variable is causing a change in the other, and correlational relationships exist when one variable may be associated with some degree of change in the other variable.
• Cross-sectional research looks at a slice of the population at one point in time.
• Longitudinal studies follow the same cohort of individuals over time.
• There are three main types of group research designs: pre-experimental, quasi-experimental, and experimental designs.
• A comparison group is used in quasi-experimental research. Sometimes called the non-treatment group, it is the group that receives no treatment (intervention).
• A control group is used in experimental research. In studies that use a control group, subjects have been randomly assigned to either the experimental group or the control group.
60
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Types of Group Research Designs (GRD)

Pre-Experimental Designs
   One-group posttest only design:
      X O
   One-group pretest and posttest design:
      O X O

Quasi-Experimental Designs
   Posttest only with non-equivalent comparison group design:
      X O
        O
   Pretest and posttest with non-equivalent comparison group design:
      O X O
      O    O
   Time-series design:
      O O O X O O O
   Time-series design with non-equivalent comparison group:
      O O O X O O O
      O O O    O O O

Experimental Designs
   Posttest only control group design:
      R X O
      R    O
   Pretest-posttest control group design:
      R O X O
      R O    O
   Solomon's four-group design:
      R O X O
      R O    O
      R X O
      R    O
CHAPTER EIGHT: SURVEY RESEARCH
CHAPTER OUTLINE
I. Defining Survey Research
II. Appropriate Survey Topics
III. Developing a Survey
A. Survey Questions
1. Ambiguous and Unclear Questions
2. Difficult and Vague Questions
3. Acronyms and Other Jargon
4. Double-Barreled Questions
5. Avoiding Absolutes
6. Biased Language
B. Pilot Testing Your Survey
IV. Administering Surveys and Expected Rates of Return
A. Telephone Surveys
B. Mail-Out Surveys
C. Interviews
D. Internet Surveys
E. Customer Satisfaction Surveys
F. Expected Rates of Return
V. Advantages and Disadvantages of Survey Research
TEST QUESTIONS
1. One advantage of survey research is:
a. It can reach a large number of people with little expense
b. It can be administered through face-to-face interviews, by phone, or with other
research tools
c. It is easy for people to get accustomed to
d. All of the above
2. One disadvantage of surveys is:
a. They are too expensive
b. They can only be administered at one point in time
c. Response rate can be slow
d. None of the above
63
3. One mistake that many researchers make when developing surveys is:
a. They ask questions that are too vague
b. There are no difficult questions
c. There can be a limited or small response rate
d. Both A and C
4. Asking too many demographic questions does what?
a. Makes the respondent feel important
b. Can lead to over-collection of data
c. Can lead to feelings of intrusiveness in the respondent
d. Both B and C
5. There are four ways of administering surveys; these are:
a. Face to face, telephone, internet, and random selection
b. Face to face, telephone, internet, and mail-out
c. Telephone, telegraph, internet, and random selection
d. None of the above
6. _______________ scales are forced-choice items, asking respondents to rate on a
continuum.
a. Lakers
b. Lookout
c. Likert
d. None of the above
7. One of the downsides to using the internet to survey respondents is:
a. There is no downside
b. One should never use the internet; it is too unsafe
c. It is too expensive
d. A researcher can't be sure who is responding
8. Asking two questions within one survey question is:
a. An example of using space wisely
b. An example of a double-barreled question
c. Rarely seen in survey research
d. None of the above
DISCUSSION QUESTION
Describe a time when you participated in a satisfaction survey. What did you think about
this process? Have your thoughts changed (either positively or negatively) about this
experience? Explain.
64
CHAPTER GLOSSARY TERMS
Likert Scales – Forced-choice survey items in which the response choices are ranked on a
continuum (for example, from strongly disagree to strongly agree).
Rule of Parsimony – This simply means eliminating all unnecessary questions/data and
paring your survey down to the absolute minimum.
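To show students how Likert-type responses are commonly coded for analysis, instructors might adapt the short Python sketch below. It is an illustration only; the item wording, the response labels, and the 1–5 coding are assumptions rather than material from the text.

# Illustrative coding of a single Likert-type survey item.
# The item, labels, and 1-5 coding scheme are assumptions for teaching purposes.
from statistics import mean

likert_codes = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

item = "I am satisfied with the services I received at this agency."
responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree", "Disagree"]

scores = [likert_codes[r] for r in responses]
print(item)
print("Coded responses:", scores)
print("Mean rating:", mean(scores))   # many researchers summarize Likert items this way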
APPLIED LEARNING ACTIVITY
You are interested in describing individuals involved in intimate partner violence. You
decide to send a survey to several shelters in several states. You collect data on the
following information:
1. Average length of the relationship
2. Number of times the individual has tried to leave the relationship
3. Type of abuse
4. Race of the victim
5. Age of the victim
6. Sex of the victim
7. Average family income
8. What other information would you collect, and why?
KEY POINTS
• Surveys are a research design in which a sample of subjects is drawn from a population and studied (usually interviewed) to make inferences about the population.
• Surveys are a relatively inexpensive way to reach a large number of people quickly.
• Piloting provides advantages such as making any changes or adjustments before distribution on a large scale.
• Telephone surveys are relatively easy to conduct and are cost effective.
• Research has demonstrated that consumers consistently over-rate their satisfaction with the agency they are rating.
• For most researchers, a minimally acceptable rate of return is about 50 percent.
• Survey research has the ability to reach a large number of people with a relatively small amount of effort, thus making it a very efficient way to collect data.
• One issue with internet-based research is that the researcher can never truly know who is responding to survey items.
ADDITIONAL TEACHING RESOURCES
Type of Question: Vague
Example Question: Do you engage in illegal drugs?
Concern: Definitions may be misconstrued or inaccurate: "engage" (sell, use, or observe?); "illegal drugs" (What about abusing prescribed drugs?)

Type of Question: Acronym
Example Question: Have you ever called APS (or CPS) on someone?
Concern: The acronym may be misinterpreted to mean something other than the intended Adult Protective Services (or Child Protective Services).

Type of Question: Slang or Jargon
Example Question: Do you feel blue ("bad," "under the weather") today?
Concern: Not universally descriptive; could be misconstrued to mean ill or sick. Appropriate alternative: Do you feel sad today?

Type of Question: Double-Barreled
Example Question: Are you in favor of schools providing condoms and sex education to high school students?
Concern: You are asking two separate questions. A better approach is to ask each separately: "Are you in favor of schools providing condoms to high school students?" and "Are you in favor of schools providing sex education to high school students?"

Type of Question: Using Absolutes
Example Question: Do you always feel worthless around your spouse? (Yes or No)
Concern: Limits the accuracy of the response. A better question would be "I feel worthless around my spouse," with the response options Never, Seldom, Occasionally, and Frequently.

Type of Question: Negative Statements
Example Question: It is not good for children to be without their fathers. (Yes or No)
Concern: "No" (I believe that "It is not good . . .") and "Yes" (I agree with the statement that "It is not good . . .") might both mean the same thing.

Type of Question: Bias-Laden Language
Example Question: Do you believe firemen (policemen, mailmen) receive hazardous pay?
Concern: Gender bias toward males. A better alternative is to use "firefighters," "police officers," or "mail carriers."
CHAPTER NINE: EVALUATIVE RESEARCH DESIGNS
CHAPTER OUTLINE
I. Introduction
A. Why conduct a program evaluation?
II. Program Evaluation (defined)
A. Process Evaluation
B. Outcome Evaluation
III. Process Evaluation
A. Definition
B. Program description
C. Program monitoring
D. Quality assurance
IV. Outcome Evaluation
A. Definition
B. Measures program objectives
C. Writing the report
V. Strengths and Weaknesses of Program Evaluation
VI. Practical Considerations and Common Problems
TEST QUESTIONS
True/False
1. The Social Work Code of Ethics mandates that Social Workers evaluate their
practice.
(True)
2. Today, a Social Worker needs a basic working knowledge of how to evaluate a
program's goals and objectives. (True)
3. Most grant and funding sources do not require a program evaluation, but it is a
good idea to conduct one anyway. (False)
4. There are three types of program evaluations: process, outcome, and analytical.
(False)
68
5. A process evaluation is sometimes referred to as a formative evaluation.
(True)
6. A process evaluation is similar to a qualitative study in that both are utilized when
little is known, asking such questions as “How are our services perceived?”
(True)
7. Once a process evaluation has been conducted, it can’t be repeated during the
fiscal year. (False)
8. Today, funding sources are increasingly demanding that programs are accountable
in how they spend their money. (True)
9. Measuring program objectives is part of a process evaluation. (False)
10. Research has demonstrated that customer satisfaction surveys are positively
biased (customers tend to rate their satisfaction as higher than it really is). (True)
Multiple Choice
1. Generally, a process evaluation has three main goals which are:
a. To construct a program description, program monitoring, and to
assess the quality of services being provided.
b. To ensure clients are satisfied with services, to comply with the
requirements of a funding source, and to maintain high quality services
c. To gauge staff morale, to assess quality of services, and to comply with
funding sources requirements
d. All of the above
e. None of the above
2. Tracking the number of “no shows” would be an example of:
a. Program description
b. Program monitoring
c. Process evaluation
d. Measuring client satisfaction
e. None of the above
3. Objectives, ideally, should conform to the acronym MOST. This stands for:
a. Meaningful, observable, stated clearly, and truthful
b. Measurable, operationalized, specific, and truthful
c. Measurable, operationalized, specific and time-lined
d. Measurable, observable, specific and time-lined
e. None of the above
69
4. Some of the positive aspects of a process evaluation are:
a. They are relatively easy to conduct
b. They have no rigid structure or time-line
c. Both A and B
d. None of the above
5. Jane is a Case Manager in a homeless shelter. Jane selects ten clients and asks them
to complete a questionnaire about their experiences with the program. Jane is:
a. Using a customer satisfaction survey to evaluate her program
b. Probably making a mistake in her research design
c. Utilizing a pretest/posttest one group only research design
d. All of the above
e. None of the above
6. One of the negative aspects of the outcome evaluation is:
a. That it provides no indication of whether the program itself is the cause of the
change in the consumer.
b. That they are relatively expensive to conduct
c. That an evaluator needs extensive training in order to conduct an evaluation
d. That program evaluations are extremely time consuming
e. None of the above
7. One question at the heart of all outcome evaluations is:
a. Is this program cost effective?
b. Are consumers satisfied with services?
c. Did this program accomplish what it set out to do?
d. Is this a good program?
e. None of the above
8. Joe is a Case Manager in a large agency. Joe is supervising a new program. He
establishes evaluations to be conducted after 90 days, six months, and at the end of the
first year. Joe is:
a. Using the process evaluation technique to monitor his program
b. Using the pretest/posttest one-group research design
c. Using a customer satisfaction survey to measure effectiveness
d. None of the above
9. Jane has been asked to conduct an evaluation of an existing program. Jane reviews
the objectives for the program and begins to collect data for each objective. Jane is
probably:
a. Conducting a process evaluation
b. Conducting a descriptive study
c. Conducting an outcome evaluation
d. None of the above
10. Two common mistakes in process evaluations are:
a. Inappropriate timing and inapplicable questions.
b. Using customer satisfaction surveys and small sample sizes
c. Inexperienced evaluators and un-measurable objectives
d. None of the above
Short Answer
1. A process evaluation is generally associated with an ______________________ audit,
while an outcome evaluation is generally associated with an
________________________ audit.
(Internal, External)
DISCUSSION QUESTION
You are conducting an evaluation for an agency, and the agency asks you to adjust your
report so that the agency is placed in the most favorable light possible. They are not
asking you to falsify information – just to present it in the way that is most positive. For
example, you are surveying a group of high school students for a school district and you
find that drug usage has increased by 100 students from one year to the next (resulting in
a 100 percent increase in usage). One way to report this information is simply that drug
usage increased by 100 students. Another way is to report that drug usage increased by
100%. What do you say to the agency?
CHAPTER GLOSSARY TERMS
Baseline - A beginning point in research that establishes an initial sense of how a
program, group, or individual is currently functioning and allows researchers to track
progress over time
Outcome evaluation - An external evaluation that measures the overall effectiveness of a
program by looking at the goals and objectives established by the program to answer the
question “Did this program accomplish what it set out to do?”
Process evaluation - An internal evaluation process that is initiated in the early stages of
a program and has three main goals: to construct a program description, to monitor a
program, and to assess the quality of services being provided
Program description - The delineation of the setup, routines, and consumer
characteristics of a program
71
Program evaluation - A type of research design and analysis that evaluates specific
characteristics of a program within an agency
Program monitoring - A program evaluation method that is used to examine what
happens after people receive services from a program
Quality assurance - A means of determining the level of satisfaction both with services
(for consumers) and with programmatic issues (for the staff)
APPLIED LEARNING ACTIVITIES
Activity #1:
Now it is your turn to practice a program evaluation. Pretend that you are a program
evaluator and that you are completing a program evaluation at the six-month interval for
this program. Below is the description of a fictitious program. Read the program
description and then complete the tasks that follow:
Safe Haven is a shelter for adolescent runaways waiting for a court determination
and trial. It was established as an alternative to placing adolescents with the general jail
population. The goal of the shelter is to keep the residents safe while they wait for their
court date. The ages of the residents range from fourteen to seventeen. The Safe Haven
staff monitors three safety risks for this population: harm to others, harm to self, and
alcohol or drug use. Staff provides specific programming to address these three issues.
First, mandatory group counseling is provided daily from 6:00 to 8:00 P.M.,
Monday through Friday, so that residents can learn conflict resolution skills to prevent
them from harming others. During this group, members listen to a thirty-minute
curriculum-based presentation followed by a demonstration. Group members then
practice the skills with each other.
Second, all residents are monitored for depression and suicidal ideations to
prevent self-harm. Residents meet daily for thirty minutes with a social worker for
mandatory individual counseling, Monday through Friday.
Finally, group members participate in a drug and alcohol education class. This is
also a curriculum-based educational presentation on refusal skills followed by group
practice and process discussions. This mandatory group meets daily from 3:00 to 4:30
P.M., Monday through Friday. Urinalyses are conducted on residents every three days,
and a breath analysis is conducted every day before bed.
The rest of the day throughout the week, residents are tutored by a teacher who is
brought into the shelter. On weekends, residents receive visitors and participate in
recreational activities at the shelter under close supervision.
1. Design a program description for this program that includes the types of
services being provided and their location; the program’s mission; the
frequency with which services are offered, as well as time and day; and
characteristics of the people you serve (number of residents, sex, race, age,
income, etc.).
2. Develop a minimum of three program objectives. Objectives should meet the
MOST standard.
Activity #2:
Using the following outcome, develop a table to visually organize your data.
Is there an increase in the percentage of consumers who speak English as a second
language who enter prenatal care in their first trimester? Because this program is new,
there are no prior statistics to compare with this group. However, in the first five months
of this program, there were fifty-eight consumers. Of these, fifteen consumers spoke both
Spanish and English and eight consumers spoke Spanish only. Of the fifteen
Spanish/English-speaking women, four (27%) entered prenatal care in their first
trimester. Of the eight women who only spoke Spanish, one (13%) entered prenatal care
in her first trimester. In total, five of the twenty-three women who spoke English as a
second language (approximately 22 percent) entered prenatal care in their first trimester.
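A quick way to organize these counts is with a short script. The sketch below (plain Python, with illustrative variable and group names) lays out each group's count and entry rate and computes the combined rate from the raw numbers; it is only one possible layout for the table the activity asks for.

```python
# Minimal sketch for organizing the Activity #2 counts (illustrative only).
# Counts come from the program description above; labels are illustrative.
groups = {
    "Spanish/English speakers": {"total": 15, "entered": 4},
    "Spanish-only speakers": {"total": 8, "entered": 1},
}

print(f"{'Group':<26}{'n':>4}{'Entered 1st trimester':>24}{'Percent':>10}")
total_n = total_entered = 0
for name, counts in groups.items():
    n, entered = counts["total"], counts["entered"]
    total_n += n
    total_entered += entered
    print(f"{name:<26}{n:>4}{entered:>24}{entered / n:>10.0%}")

# Combined rate from the raw counts: 5 of 23, about 22 percent.
# (Adding the two group percentages together would overstate the total.)
print(f"{'All ESL consumers':<26}{total_n:>4}{total_entered:>24}{total_entered / total_n:>10.0%}")
```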
KEY POINTS
• There are two reasons that social workers should evaluate their programs. First, social work ethics demand that we “monitor and evaluate policies, the implementation of programs, and practice interventions.” Second, funding sources are increasingly demanding that programs be accountable in how they spend their money, how consumers are helped, and what positive benefits result from the monies spent.
• A program evaluation is a research design and analysis that evaluates specific characteristics of a program within an agency.
• There are two types of program evaluations: process evaluations and outcome evaluations.
• A process evaluation (an internal audit) has three main goals: to construct a program description, to provide program monitoring, and to assess the quality of services being provided.
• An outcome evaluation is an external audit that measures the overall effectiveness of a program. This evaluation looks at the goals and objectives established by the program to answer the question “Did this program accomplish what it set out to do?”
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Program Evaluation Overview
Process Evaluation
1. Internal Audit
2. Contains Three Elements
   a) Program Description (setup, routines and consumer characteristics)
   b) Program Monitoring (utilization and effectiveness of services)
   c) Quality Assurance (consumers and staff)
Outcome Evaluation
1. External Audit
2. Measures Program Objectives
3. Objectives must be Measurable, Observable, Specific and Time-Lined
4. A Written Report is Completed
CHAPTER TEN: SINGLE-SUBJECT DESIGN
CHAPTER OUTLINE
I. Defining Single-Subject Design
   A. Tool for social work practice
      1. To evaluate individual progress over time
      2. Underlying concept – measuring the reinforcement/intervention of a desired behavior/outcome
II. Elements of SSD
   A. Target Outcome
      1. Work with the consumer to define the desired outcome
      2. Outcomes can be a change in feelings or behaviors
      3. Abstract and subjective outcomes will need to be operationalized
   B. Intervention
      1. What are the proposed techniques that will be used to facilitate change?
         a) Intervention needs to be evidence-based (i.e., supported by a literature review)
         b) Intervention needs to be MOST (measurable, observable, specific, and time-lined)
   C. Select a Measure
      1. Consider reliability and validity issues
      2. Consider appropriateness to diversity (age, physical capacity, mental capacity, etc.)
   D. Baseline Data
      1. Target behavior/feeling measured BEFORE the introduction of the intervention
      2. Minimum of three observations
   E. Intervention Data
      1. Target behavior/feeling measured AFTER the introduction of the intervention
      2. Initial length of intervention should be decided prior to implementation, based on recommendations from evidence-based research findings
   F. Results
      1. Results are reported graphically (over time) or as differences in mean scores
      2. Graphic representation has the advantage of giving the client a clear visualization of small changes that mean scores may not show; in addition, clients can examine any historical influences (e.g., a bad week due to a death)
III. Types of SSD
   A. AB Design
      1. A = baseline scores and B = intervention scores
   B. ABA Design
      1. The second A represents measures taken after the intervention has stopped, to determine whether the outcome held (stayed the same) or changed (got better or worse)
   C. ABAB Design
      1. The second B represents the reintroduction of the same intervention. For instance, the client's outcome worsened after withdrawal of the intervention, so the intervention was introduced again.
   D. ABC Design
      1. The C represents the introduction of a second (different) intervention, either concurrently with or replacing the first intervention.
IV. Strengths and Limitations of SSD
   A. Strengths include a visual representation of progress that can reinforce treatment decisions and the ability to monitor practice interventions with evidence-based results.
   B. Limitations include a lack of generalizability and the failure to take into consideration confounding factors that might be affecting outcome findings.
TEST QUESTIONS
True/False
1. A single-subject design is a tool that measures whether a relationship exists
between an intervention and an outcome. (TRUE)
2. A single-subject design aggregates data from multiple participant responses
into a single graph. (FALSE)
3. One possible problem with the ABA design is that once an intervention has occurred,
you cannot return to the original baseline. (TRUE)
4. The least complicated single-subject design is the AB design. (TRUE)
5. The target outcome is the baseline data. (FALSE)
Multiple Choice
1. Which of the following is not one of the basic elements of single-subject designs?
a. reporting the results
b. selecting the intervention
c. collecting baseline data
d. gaining access
2. A target outcome can be either:
a. a behavior or a feeling
b. a thought or a feeling
c. a thought or a behavior
d. none of the above
3. While collecting baseline data for a single-subject design, it is suggested that you
collect at least how many observations?
a. none
b. three
c. five
d. ten
e. one week
4. There are two ways to report findings from a single-subject design:
a. use a graph or use a table
b. calculate the average scores or use a graph
c. calculate the average score and the means
d. none of the above
5. One baseline and two interventions describes which single-subject design?
a. ABA
b. AAB
c. ABB
d. ABC
6. The ABA design is used to do which of the following?
a. track multiple interventions
b. track the behavior after the intervention has ceased
c. return to baseline after the intervention has ceased
d. both b and c
7. When selecting a measure for SSD, which of the following should you
consider?
a. research the desired outcome
b. examine reliability and validity of the measure
c. consider appropriateness to the diversity of the client
d. all of the above should be considered
Short Answer
1. Target behavior measured before the intervention is called _________________
data.
(Baseline)
2. The intervention should follow the _________ standard.
(MOST)
DISCUSSION QUESTIONS
1. How can using a single-subject design graph assist someone having a brief relapse
in desired outcomes? What would you say and do with your client?
2. It is important to research the intervention you will be using. How would you go
about doing this and what would you say to your client about your findings? How
do you discuss and select the intervention options with your client?
CHAPTER GLOSSARY TERMS
Baseline - A beginning point in research that establishes an initial sense of how a
program, group, or individual is currently functioning and allows researchers to track
progress over time
Single-subject design - A method for evaluating an individual’s progress over time that
measures whether a relationship exists between an intervention and a specific outcome
Target outcome - The goal of the intervention
APPLIED LEARNING ACTIVITY
You are a case manager working with developmentally disabled adults in a residential
group home. You have a participant who is a thirty-three-year-old white male. His name
is Bruce. Bruce has been diagnosed as mildly mentally retarded with an IQ of around 60.
Bruce works in a sheltered workshop, assembling sponges on hair curlers, and enjoys his
job. His hobbies include collecting model cars and playing video games, both of which he
is very passionate about. He occasionally has angry outbursts. The staff at the workshop
have begun to complain about his behavior and have stated that he will be banned from
the workshop if he does not get his anger under control. They called you into the office
today and related an incident that happened recently during which he suddenly became
angry and began to shout at the other workers and throw things around the room. Possible
reasons for his behavior were discussed among the staff; however, there is no consensus
as to why he is acting this way. The staff wants some assurance from you that he will
control his behavior.
Your task is to design a single-subject research study for Bruce. You should:
1. Identify the type of single-subject design you will use and the target outcome,
2. Describe the intervention and the rationale for this intervention,
3. Select the measurement,
4. Describe the process for collecting the baseline data,
5. Describe the process for collecting the intervention data, and
6. Finally, discuss any limitations of your study.
KEY POINTS
• A single-subject research design is a method for evaluating individual progress over time.
• The basic elements of all single-subject design research are selecting the target outcome, selecting the intervention, selecting the measurement tool, collecting baseline data, collecting intervention data, and conducting the analysis (compiling the results).
• There are four types of single-subject research designs: AB (one baseline and one intervention), ABA (which adds an additional baseline), ABAB (which repeats the baseline and intervention), and ABC (which introduces a new intervention).
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Elements of Single Subject Design Research
Target Outcome - Target outcomes are the specific behaviors or feelings that the individual and social worker want to focus on changing.
Intervention - An intervention is the specific technique used to facilitate a change (or outcome) in research.
Measurement Tool - The tool used to collect the data on the target outcome. Many times this is a standardized instrument.
Baseline Data - The data collected by the measurement before the intervention is introduced, to determine a starting point.
Intervention Data - The data collected by the measurement after the intervention has been introduced.
Results - The baseline data is compared to the intervention data to determine if change occurred over time.
Types of Single Subject Designs and Collection Methods
Design - Data Collection Method
AB - Baseline, Intervention
ABA - Baseline, Intervention, Baseline
ABAB - Baseline, Intervention, Baseline, Intervention
ABC - Baseline, Intervention #1, Intervention #2
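For instructors who want to demonstrate how AB results might be graphed, the following is a minimal sketch using Python and matplotlib. The session numbers and the self-rated anxiety scores are invented purely for illustration and are not drawn from the text.

```python
# Minimal sketch of an AB single-subject design graph (hypothetical scores).
# Assumes matplotlib is installed; the baseline and intervention values are invented.
import matplotlib.pyplot as plt

baseline = [8, 7, 8]             # phase A: minimum of three observations
intervention = [7, 6, 5, 4, 4]   # phase B: scores after the intervention begins

sessions_a = list(range(1, len(baseline) + 1))
sessions_b = list(range(len(baseline) + 1, len(baseline) + len(intervention) + 1))

plt.plot(sessions_a, baseline, marker="o", label="Baseline (A)")
plt.plot(sessions_b, intervention, marker="o", label="Intervention (B)")
plt.axvline(x=len(baseline) + 0.5, linestyle="--", color="gray")  # phase-change line
plt.xlabel("Session")
plt.ylabel("Target outcome score (e.g., self-rated anxiety)")
plt.title("AB Single-Subject Design")
plt.legend()
plt.show()
```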
CHAPTER ELEVEN: DESCRIPTIVE STATISTICS
CHAPTER OUTLINE
I. What is Data Analysis?
II. The First Step of Data Analysis
   A. Descriptive statistics
      1. Definition of descriptive statistics
      2. Examples of descriptive statistics
   B. Inferential statistics
      1. Definition of inferential statistics
      2. Examples of inferential statistics
III. Descriptive Analysis
   A. Measures of Distribution
      1. Distribution defined
      2. Frequency and percentage
   B. Measures of central tendency
      1. Central tendency defined
      2. Mean
      3. Median
      4. Mode
      5. Outlier
   C. Measures of dispersion
      1. Measure of dispersion defined
      2. Range
      3. Variance
      4. Standard deviation
      5. Leptokurtosis
      6. Platykurtosis
      7. Normal distribution
      8. Skewed distribution
IV. Strengths and Limitations of Descriptive Statistics
TEST QUESTIONS
True/False
1. Data analysis includes both descriptive and inferential statistics. (True)
2. Descriptive statistics are ways of organizing, describing, and presenting
qualitative data in a manner that is concise, manageable, and understandable.
(False)
3. Descriptive statistics can be used in both qualitative and quantitative research
methods (True)
4. Aggregating data means we are compiling information in a concise, manageable,
and understandable manner. (True)
5. Distribution of data in statistics refers to a ranking of the responses in a variable
from high to low that results in an observable pattern. (True)
6. Normal distribution of data is an assumption used in statistical procedures that
scores are probably distributed equally around the mean. (True)
7. Platykurtosis is the shape of a distribution of scores that is tall and thin because
the majority of scores are similar to the mean. (False)
8. A histogram is a type of bar chart used in statistics to visually present interval
and ratio data. (True)
9. A skewed distribution of scores, when plotted on a graph, produces a
nonsymmetrical curve. (True)
10. One of the major limitations of descriptive statistics is that they offer
no insight into the relationship among variables. (True)
Multiple Choice
1. Measures of central tendency include:
a. Mean, median, mode, and range
b. Mean, median, range and standard deviation
c. Mean, standard deviation, range, and frequency
d. Mean, median, and mode
e. None of the above
2. Measures of dispersion include:
a. Range, standard deviation and variance
b. Mean, median, mode, and range
c. Mean, median, range and standard deviation
d. Mean, standard deviation, range, and frequency
e. Mean, median, mode, and frequency
f. None of the above
3. Generally, statistics are divided into:
a. Two categories (qualitative and quantitative)
b. Four categories (qualitative, descriptive, quantitative, and evaluative)
c. Two categories (univariate and bivariate)
d. Two categories (descriptive and inferential)
e. None of the above
4. A mode is:
a. A means of transportation
b. The most frequently occurring response
c. The statistical average
d. The spread of answers from lowest to highest
e. None of the above
5. Measures of central tendency attempt to represent how data are grouped around:
a. The range
b. The mode
c. The mean
d. The frequency
e. None of the above
6. A standard deviation (SD) is:
a. A measure of how much each score varies or deviates from the mean
b. A measure of how much each score varies or deviates from the mode
c. A measure of how much each score varies or deviates from the median
d. A measure of central tendency
e. Both A and D
f. Both B and D
g. Both C and D
h. None of the above
7. In its’ simplest sense, central tendency are statistics that report how much our data
is
a. Different
b. Alike or similar
c. Spread out or varied
d. Grouped around the mean
e. Deviating from the mean
f. None of the above
8. Range tells us
a. The number of times a response occurs
b. The statistical average for a response
c. The minimum and maximum values (responses) for a variable
d. How closely the responses are grouped around a mean
e. None of the above
9. Measures of dispersion
a. Are not very useful in statistical analysis
b. Are measures of how answers are alike or similar to each other
c. Are used only when nothing else is available
d. None of the above
10. Outliers are
a. Anomalies or results that are far different than most of the group
b. Something that is very similar to other results in the group
c. Something that needs to be eliminated from the analysis
d. A product of research bias
e. None of the above
Short Answer
1. ______________________________ is used with interval- and ratio-level data;
you add all the response scores and divide by the number of responses.
(Mean or Statistical Average)
2. ____________________________ ___________________________ is a
measure of how much each score varies or deviates from the mean.
(Standard Deviation)
3. The most frequently occurring response is known as the __________________.
(Mode)
4. __________________________________ is defined as the number of times that a
response occurs.
(Frequency)
5. The larger the ________________________, the further the scores are from the
mean; the smaller the _____________________, the closer the scores are to the
mean.
(Variance, Variance)
Essay Questions
1. List the various measures of central tendency and provide a brief description of
each.
2. List the various measures of dispersion and provide a brief description of each.
3. Provide a thorough discussion of the strengths and weaknesses of descriptive
statistics.
DISCUSSION QUESTIONS
1. Some argue that outliers should be eliminated from results. Do you agree or
disagree with this approach? Why or why not?
2. What are the ethical considerations in eliminating data?
3. What role do descriptive statistics play in research? Do you believe they are
simply superfluous (an unnecessary burden) or are they useful? On what do you
base your opinion?
CHAPTER GLOSSARY TERMS
Bell-shaped curve - The distribution of scores that are symmetrically shaped around the
mean where the majority of the scores are clustered around the mean and each side of the
mean resembles the other
Central tendency - An estimate of the center of a distribution of values
Descriptive statistics - Ways of organizing, describing, and presenting quantitative
(numerical) data in a manner that is concise, manageable, and understandable
Distribution - A summary of the frequency of individual values or ranges of values for a
variable
Frequency - The number of times that a response occurs
Histogram - A vertical block graph used in statistics to visually present interval- or ratio-level data
Leptokurtosis - The shape of a distribution of scores that is tall and narrow because the
majority of scores closely resemble the mean
Mean - The statistical average of a set of numbers
Measure of dispersion - A statistical measure that shows how dissimilar or different the
data are from each other and is reported by the range of scores around the mean
Median - The midpoint of a set of numbers
Mode - The most frequently occurring response for a variable
Normal distribution - The symmetrical distribution of scores around the mean, with the
most scores clustered around the mean and tapering off on both sides
Outlier - An anomaly or result that is far different from most of the results for the group and
can skew the overall results (especially in a statistical average)
Platykurtosis - The shape of a distribution of scores that is flat and wide because the
majority of scores differ from the mean
Quantitative data analysis - The process of analyzing data utilizing a variety of
statistical procedures including descriptive and inferential statistics
Range - The overall spread or variability of a variable that tells us the difference between
the lowest (minimum) and highest (maximum) values (responses) for a variable
Skewed distribution - A distribution of scores that produces a nonsymmetrical curve
because there are more responses on the left or right side of the mean
Standard deviation - A measure of dispersion that is calculated by taking the square root
of variance. It is the most commonly used measure of dispersion.
Univariate analysis - A descriptive statistical method that involves the examination
across cases of one variable at a time
Variance - A statistical measure that is used to examine the spread of scores in a
distribution
APPLIED LEARNING ACTIVITY
Using the characteristics of your family members, compute the following statistics:
What is the n?
What are the range, minimum, and maximum of the ages?
What is the mean age?
Report the percentage for sex.
Looking at relationship status – what is the frequency for married, divorced, single,
widowed and living with someone?
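If students want to check their hand calculations, a minimal sketch along the following lines can be used; the family data shown are hypothetical placeholders, and Python's built-in statistics and collections modules are assumed to be available.

```python
# Minimal sketch for the family activity (hypothetical data; replace with your own).
from statistics import mean
from collections import Counter

ages = [62, 58, 34, 31, 27]                      # one entry per family member
sexes = ["F", "M", "M", "F", "F"]
statuses = ["married", "married", "single", "married", "living with someone"]

n = len(ages)
print("n =", n)
# Range here is max - min; the chapter's table counts possible values inclusively (max - min + 1).
print("minimum age =", min(ages), "| maximum age =", max(ages), "| range =", max(ages) - min(ages))
print("mean age =", round(mean(ages), 1))

for sex, count in Counter(sexes).items():        # percentage for sex
    print(f"sex {sex}: {count} ({count / n:.0%})")

for status, count in Counter(statuses).items():  # frequency for relationship status
    print(f"{status}: {count}")
```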
KEY POINTS
• Data analysis is the process of using a variety of statistical procedures to analyze data.
• Data analysis is usually conducted in two steps—first the descriptive analysis of the individual variables is conducted and then inferential statistics are used to analyze the associations between variables.
• Descriptive statistics are ways of organizing, describing, and presenting quantitative data in a manner that is concise, manageable, and understandable.
• Inferential statistics are statistical procedures that are used to examine associations about a population based on the results found in a sample.
• The five commonly used types of descriptive statistics are the mean, median, mode, range, and frequency.
• Measures of central tendency are statistical measures that report how much our data are alike or similar. Mean (average of the scores), median (midpoint of the scores), and mode (most frequently occurring score) are all measures of central tendency.
• Measures of dispersion, also known as measures of variability, are statistical measures that reflect dissimilarities in our sample.
• Three types of measures of dispersion are range (the overall spread or variability from the minimum score to the maximum score), variance (the spread of scores in a distribution of scores), and standard deviation (how much each score varies or deviates from the mean).
• Normal distribution of data is an assumption used in statistical procedures that scores are probably distributed equally around the mean. Normally distributed data resemble a bell-shaped curve.
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Relationship of Correlations
No Relationship - Two variables are simply not related (one variable does not influence the other).
Positive Relationship - A relationship that occurs when the independent variable increases as the dependent variable increases.
Negative Relationship - A relationship that occurs when one variable increases while the other decreases. Sometimes referred to as an inverse relationship.
Curvilinear Relationship - A relationship that can start off as either a positive relationship or a negative (inverse) relationship and then begins to curve.
Illustration of Findings for Central Tendencies
Score   Frequency   Descriptive
9       1
8       3
7       4           Mode is 7 (most frequently occurring score)
6       3           Median is 6 (3 numbers above and 3 below)
5       3           Mean is 5.6 (112 divided by 20)
4       3
2       3
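The figures in this table can be verified with a few lines of Python, assuming the standard statistics module; the sketch below rebuilds the twenty scores from the frequency column and reports the mode, median, and mean.

```python
# Minimal sketch reproducing the central-tendency figures in the table above.
from statistics import mean, median, mode

freq = {9: 1, 8: 3, 7: 4, 6: 3, 5: 3, 4: 3, 2: 3}   # score: frequency
scores = [score for score, count in freq.items() for _ in range(count)]

print("n =", len(scores))                # 20 responses
print("mode =", mode(scores))            # 7, the most frequently occurring score
print("median =", median(scores))        # 6, the midpoint of the 20 scores
print("mean =", round(mean(scores), 1))  # 5.6 (112 divided by 20)
```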
Measure of Distribution (How many responses)
Frequency (nominal, ordinal, interval, or ratio variables) - The number of times that a response occurs. This is done by simply counting the responses. A total can also be reported as a percentage (10 out of 100, or 10%).
Measures of Central Tendency (How similar are the responses)
Mean (interval or ratio variables) - A statistical average. Used with interval- and ratio-level data; you add all the response scores and divide by the number of responses.
Median (ordinal, interval, or ratio variables) - The midpoint between numbers. For example, if you have a set of numbers from one to five, the median would be three because three falls directly in the middle, with two numbers below it (1 and 2) and two numbers above it (4 and 5).
Mode (nominal, ordinal, interval, or ratio variables) - The most frequently occurring response. If more people in a study responded that they were 21 years old than any other age, the mode would be 21.
Measures of Dispersion/Variability (How different are the responses)
Range (nominal, ordinal, interval, or ratio variables) - The possible number of values between the lowest and highest values. For example, if the ages of the students in your class cover a span from 18 years old to 53 years old, you would say that the range is 36, or the possible ages between 18 and 53.
Variance (interval or ratio variables) - The spread of scores in a distribution of scores. The larger the variance, the further the scores are from the mean; the smaller the variance, the closer the scores are around the mean.
Standard Deviation (interval or ratio variables) - A measure of how much each score varies or deviates from the mean (average). This calculation simply tells us how far or how close scores typically congregate around the mean.
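A brief sketch of these dispersion measures, using invented class ages and Python's statistics module, is shown below; the choice of the population formulas (pvariance, pstdev) is an assumption, since the chapter does not distinguish population from sample formulas.

```python
# Minimal sketch of the dispersion measures using hypothetical class ages.
from statistics import pvariance, pstdev  # population formulas; variance/stdev give sample versions

ages = [18, 21, 22, 25, 30, 41, 53]   # invented ages spanning 18 to 53

# Range as max - min is 35; the chapter's table counts 36 possible ages (18-53 inclusive).
print("range =", max(ages) - min(ages))
print("variance =", round(pvariance(ages), 1))
print("standard deviation =", round(pstdev(ages), 1))
```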
CHAPTER TWELVE: INTRODUCTION TO INFERENTIAL STATISTICS
CHAPTER OUTLINE
I. What are Inferential Statistics?
   A. Definition
II. Four Types of Correlations
   A. Measures of Association defined
   B. No Correlation
   C. Positive Correlation
   D. Negative Correlation
   E. Curvilinear Correlation
III. Determining the Strength of the Correlation
IV. Probability Values and Confidence Intervals (defined)
V. Parametric statistics
   A. Pearson's r
      1. Bivariate analysis defined
   B. Multiple Regression
   C. T-tests
      1. Independent samples t-test
      2. Paired samples t-test
   D. Analysis of Variance (ANOVA)
VI. Non-parametric statistics
   A. Chi-square (cross-tabs)
   B. Other types of non-parametric statistics
VII. Strengths and Limitations of Inferential Statistics
TEST QUESTIONS
True/False
1. There are two types of inferential statistics – metric and parametric. (False)
2. The primary purpose of inferential statistics is to test hypotheses. (True)
3. Measures of association are any of several statistical procedures that allow you to
measure the correlation between variables. (True)
4. Inferential statistics are statistical procedures that allow us to draw conclusions about
a population based on assumptions we have made from the sample we have selected.
(False)
5. Descriptive statistics are more limiting than are inferential statistics.
(True)
6. Cross-tabs analysis seeks to determine if a relationship exists between two variables
(one independent variable and one dependent variable). (True)
7. Some of the most common of parametric statistics include: Pearson’s r, multiple
regression, t-tests, and analysis of variance (ANOVA). (True)
8. Non-parametric statistics assume that your data is normally distributed. (False)
9. Describing a sample in terms of number and percentage for gender, education, and
occupation would be an example of inferential statistics. (False)
Multiple Choice
1. Parametric statistics include:
a. T-tests, bi-variate analysis, multiple regression, and Chi-square
b. Chi-square, Pearson’s Product Moment Correlation, and Multiple
Regression
c. T-tests, Analysis of Variance, Pearson’s r, and multiple regression
d. All of the above
e. None of the above
2. The assumptions of parametric statistics are:
a. That data is normally distributed
b. That the dependent variable is measured at the interval or ratio level
c. That you have a sample size of 50 or greater
d. All of the above
e. None of the above
3. While there are several types of non-parametric statistics, the most commonly
used is/are:
a. Chi-square
b. ANOVA
c. T-tests
d. All of the above
e. None of the above
4. Measures of association refer to:
a. Any of several statistical procedures that allow you to draw conclusions
about a dependent variable based on an independent variable
b. Any of several statistical procedures that allow you to measure the
correlation between variables.
c. Any of several statistical procedures that allow you to measure the strength of
association of a dependent variable when the DV is measured at a nominal
level.
d. All of the above
e. None of the above
5. Measures of association examine the direction of a relationship in what ways:
a. No relationship, positive relationship, negative relationship
b. No relationship, positive relationship, curvilinear relationship
c. Curvilinear relationship, positive relationship, divergent relationship
d. No relationship, positive relationship, negative relationship,
curvilinear relationship
e. None of the above
6. A t-test is:
a. A statistical test to compare the means of two groups
b. A statistical test to compare the range of two groups
c. A statistical test to compare the medians of two groups
d. A statistical test to determine if there is variance in a group
e. All of the above
f. None of the above
7. Chi-square analysis looks at:
a. The impact of two variables on the dependent variable
b. The impact of one independent variable on the dependent variable
c. The relationship of a control variable on the independent variable
d. None of the above
8. Multiple regression is:
a. Also known as linear regression
b. The impact of a dependent and an independent variable on each other
while removing the impact of the other variables in the study
c. More sophisticated than bi-variate analysis
d. All of the above
e. None of the above
9. Chi-Square is:
a. A non-parametric statistical procedure commonly used when a researcher
has a small sample size
b. A non-parametric statistical procedure commonly used when a researcher
has a dependent variable that is nominal or ordinal
c. All of the above
d. None of the above
10. Professor Smith conducts an experiment and finds two correlations. One
correlation is .30 and the other is -.40. Which is the stronger correlation?
a. -.40
b. .30
c. They are both equal
d. Neither correlation is stronger
Short Answer
1. Hypotheses are research questions that are stated in terms that are
_____________________.
(Testable)
2. __________________________ ___________________________ are statistical
procedures that allow us to draw conclusions about a population based on results from the
sample we have selected.
(Inferential Statistics)
3. Chi-square is sometimes referred to as ________________________ ____________.
(Cross tabs)
4. _____________________________ ___ _____________________________ are
any of several statistical procedures that allow you to measure the correlation between
variables.
(Measures of association)
5. ________________________________ does not imply ______________________.
(Correlation, Causation)
Essay Questions
1. Compare and contrast parametric and non-parametric statistics.
2. Discuss the strengths and limitations of parametric statistics.
3. Discuss the interpretation of results in a bi-variate analysis.
DISCUSSION QUESTIONS
1. It has been stated that statistics can be manipulated to reflect skewed or even false
results. Do you agree or disagree with this statement? Why or why not?
2. The authors strongly recommend that the researcher return to their literature
review and interpret results in light of others' findings. Do you agree with this
approach? Why or why not?
3. Researchers routinely establish confidence intervals of 95% (meaning they can be
correct 95 out of 100 times). Is this margin of error sufficient (in your opinion)?
Is it too low? Too high? Why or why not?
CHAPTER GLOSSARY TERMS
Analysis of variance (ANOVA) - A statistical procedure that allows us to compare the
mean scores of two or more groups simultaneously by computing a statistical
average for one group as a whole and comparing it to another group or groups
Bivariate analysis - An analysis that examines the relationship between one independent
and one dependent variable (also known as simple regression)
Chi-square - A nonparametric statistical procedure that is commonly used when a
researcher has a small sample size and a dependent variable that is nominal or ordinal to
determine if there is truly a difference in groups
Confidence interval - An indication of what level of certainty we can have that our
sample accurately depicts the real world; usually established at 95 percent (or .05) in
statistical analysis
Curvilinear relationship - A statistical relationship that starts off as either a positive
relationship or a negative relationship and then begins to curve
Independent samples t-test - A type of t-test that is utilized when a researcher needs to
compare two groups to see if the independent variable has an effect on the dependent
variable
Inferential statistics - Statistical procedures that examine associations between variables
and use significance tests and other measures to make inferences about the collected
quantitative data
Measure of association - Any of several statistical procedures that allow you to measure
the correlation between variables
Multiple regression - A statistical procedure that measures the correlation between an
independent variable and the dependent variable while holding other independent
variables constant (also known as linear regression)
Negative correlation - A statistical relationship that occurs when one variable increases
when the other decreases (also called an inverse relationship)
No correlation - The absence of a relationship between variables—one variable does not
influence the other
Nonparametric statistics - Statistics that are used when the data depart from the criteria
established for parametric statistics, the most common of which is the Chi-square
Paired samples t-test - A test of significance of the differences between two different
sets of scores for the same respondents (also known as dependent samples t-test)
Parametric statistics - A type of inferential statistics in which a certain set of
assumptions or rules must be met: data must be normally distributed, the dependent
variable must be measured at an interval or ratio level, and a sample size of at least thirty
must be used
Pearson’s r - An analysis of correlation that seeks to determine if a relationship exists
between two variables (one independent variable and one dependent variable) and the
direction of the relationship
Positive correlation - A statistical relationship in which the independent variable
increases as the dependent variable increases
Probability value - A report of whether the strength of a relationship is statistically
significant or whether it could have occurred by chance; generally set at .05 or lower
t-test - A statistical procedure that compares the means of two groups to determine if they
are statistically different
APPLIED LEARNING ACTIVITIES
Activity #1:
You are a researcher who wishes to determine if a relationship exists between violent
video games and aggression in children. You conduct a literature review and find
literature that supports this hypothesis. After conducting a research study in which you
examine the amount of time a child spends playing violent video games (independent
variable) and the amount of aggression he or she exhibits (dependent variable), you find
the two variables are related (r = .47, p < .05).
1. What conclusions can you draw about these statistics?
2. What type of statistical procedure was conducted?
3. Was this an appropriate procedure? Why or why not?
4. Can you trust your findings? Why or why not?
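A minimal sketch of how such a correlation might be computed is shown below, assuming scipy is available; the hours of play and aggression ratings are invented, so the resulting r will not match the .47 reported in the activity.

```python
# Minimal sketch of a Pearson's r calculation for Activity #1 (hypothetical scores).
from scipy.stats import pearsonr

hours_per_week = [2, 5, 1, 8, 10, 3, 7, 12, 4, 6]   # time spent playing violent video games (IV)
aggression_score = [3, 6, 2, 7, 9, 4, 5, 10, 4, 6]  # observed aggression rating (DV)

r, p = pearsonr(hours_per_week, aggression_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```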
Activity #2:
You are a hospital social worker who conducts an educational group for people addicted
to nicotine. You want to determine empirically whether your participants are learning
anything about the effects of tobacco on the body. You develop a test and administer the
test before your participants begin their first group session and again at the end of the last
group session. You compare the results and find the following results: t = 12.71, p < .05.
1. What type of statistical procedure was conducted?
2. What, if anything, can you conclude about the statistical procedure?
3. Was this an appropriate statistical procedure? Why or why not?
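A paired samples t-test along these lines could be run as sketched below, assuming scipy; the pretest and posttest scores are invented and will not reproduce the t = 12.71 in the activity.

```python
# Minimal sketch of a paired samples t-test for Activity #2 (hypothetical test scores).
from scipy.stats import ttest_rel

pretest = [55, 60, 48, 62, 70, 58, 65, 52]    # scores before the first group session
posttest = [72, 78, 66, 80, 85, 74, 79, 70]   # scores after the last group session

t, p = ttest_rel(posttest, pretest)
print(f"t = {t:.2f}, p = {p:.3f}")
```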
Activity #3:
You are a caseworker who helps consumers obtain food stamps. You have noticed
that a large percentage of your consumers have not graduated from high school. You
decide to collect data from several of your consumers’ files and conduct a statistical
analysis to see if education and income are associated.
1. What type of statistical analysis would you run?
2. How would you expect to interpret the results?
3. Would you run a different statistical procedure if you were examining sex
and education?
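One way to run the cross-tabs (chi-square) analysis the activity points toward is sketched below, assuming scipy; the education-by-income cell counts are invented placeholders.

```python
# Minimal sketch of a cross-tabs (chi-square) analysis for Activity #3.
from scipy.stats import chi2_contingency

#                 low income   higher income
crosstab = [[30, 10],    # did not finish high school
            [15, 25]]    # finished high school

chi2, p, dof, expected = chi2_contingency(crosstab)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```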
KEY POINTS
• There are two types of inferential statistics—parametric and nonparametric.
• Parametric statistics have a set of assumptions or rules that must be met: they assume that your data is normally distributed, that the dependent variable is measured at an interval or ratio level, and that you have a sample size of at least thirty.
• Nonparametric statistics are used when the data departs from the criteria established for parametric statistics.
• The probability value is used to determine whether the strength of the relationship is statistically significant or whether it could have occurred by chance.
• Multiple regression measures the impact of a dependent variable and an independent variable on each other while removing the impact of the other variables in the study.
• A t-test is a statistical procedure that compares the means of two groups to determine if they are significantly different.
• Analysis of variance (ANOVA) is a statistical procedure that allows us to compare the mean scores of two or more groups simultaneously.
• Cross-tabs is a nonparametric statistical procedure commonly used when a researcher has a small sample size and a dependent variable that is nominal or ordinal to determine if there is truly a difference in groups.
ADDITIONAL TEACHING RESOURCES
Tables/Graphs/Diagrams
Types of Correlations
No Correlation - Two variables are simply not related (one variable does not influence the other).
Positive Correlation - A relationship that occurs when the independent variable increases as the dependent variable increases.
Negative Correlation - A relationship that occurs when one variable increases while the other decreases. Sometimes referred to as an inverse relationship.
Curvilinear Correlation - A relationship that can start off as either a positive correlation or a negative (inverse) correlation and then begins to change directions (curve).
Sample Statistical Notation (Value, Significance)
Bivariate Analysis - r = -.47, p < .05
Multiple Regression - R = .56, p < .05
t-Test - t = 26.473, p < .05
Analysis of Variance (ANOVA) - F = 22.77, p < .05
Cross-tabs - χ² = 31.68, p < .05
Definitions of Parametric Statistics by Type
Parametric Statistics (n ≥ 30)
Pearson's r (DV interval or ratio; IV interval or ratio) - A statistic, usually symbolized as r, showing the degree of linear relationship between two variables that have been measured on interval or ratio scales.
Multiple Regression (DV interval or ratio; IV all levels) - Also known as “linear regression,” multiple regression measures the impact of a dependent and an independent variable on each other while removing the impact of the other variables in the study.
t-Test (DV interval or ratio; IV is dichotomous) - A t-test is a statistical procedure that compares the means (statistical average) between two groups to determine if they are significantly different. There are two types of t-tests: independent samples t-test and paired samples t-test.
Analysis of Variance (ANOVA) (DV interval or ratio; IV one or more groups) - Analysis of variance (ANOVA) is a statistical procedure that allows us to compare the mean scores of two or more groups simultaneously. ANOVA computes a statistical average for the group as a whole (within group) and compares it to a second or multiple groups (between groups). Thus ANOVA compares both within-group and between-group scores.
Nonparametric Statistics
Cross-tabs (DV discrete variables; IV discrete variables) - Cross-tabs is a non-parametric statistical procedure. Chi-square (pronounced “ki-square”) is the most common significance test for this procedure and is used when a researcher has a small sample size and a dependent variable that is nominal or ordinal to determine if there is truly a difference in groups.
CHAPTER THIRTEEN: PRACTICING YOUR RESEARCH SKILLS
ADDITIONAL TEACHING RESOURCES
Sample Assignment
RESEARCH PROPOSAL RUBRIC
Part I: TITLE & ABSTRACT
Title reflects or indicates major variables or main point of the study.
Abstract accurately & concisely summarizes study.
Written composition – spelling, punctuation, grammar, & sentence structure.
Part II: INTRODUCTION
C  Research question or problem clearly identified.
B  Research question or problem placed in social work context with background and/or
rationale for study provided.
Written composition – spelling, punctuation, grammar, & sentence structure.
Part III: LITERATURE REVIEW
Studies are critically reviewed and relevant to research question or problem.
Number of articles or book chapters is appropriate for this study.
E  Issues of diversity are addressed.
Scholarly sources used.
Within-text citations are APA-style.
Written composition. Literature review concludes with summary paragraph that synthesizes the
studies reviewed.
B  Students identify patterns, gaps, and disputes within the literature.
D  Theoretical perspective(s) identified.
Part IV: METHODS (How study was conducted)
Brief re-statement of research question & major variables begins this section.
Research design is clearly & accurately represented.
Independent variable(s) is/are clearly identified & defined conceptually & operationally.
Dependent variable(s) is/are clearly & adequately identified & defined conceptually &
operationally.
Respondents – Sampling frame or criteria on which respondents were selected is clear; adequate
description of how respondents were recruited; subsequent completion or response rate is provided.
E  Diversity of the sample is addressed.
Data collection procedures – Statements relating how data were collected are clear & informative.
Instrumentation – Brief, clear description of research instrument & its origins; instrument
appears in Appendix.
Data analysis methods – Statistical procedures used in analyzing the data are clearly stated &
flow logically from research question.
Written composition: In addition to criteria listed previously, it is suggested that you organize
this part using the above underlined terms as subheadings.
Part V: RESULTS (Presentation of findings)
E  Respondents – Aggregate data re: respondent demographics & other characteristics are clearly,
thoroughly & accurately presented. Issues of diversity are addressed.
Univariate analyses of other variables are accurate & clearly represented.
Bivariate analyses – Analyses of relationships between independent & dependent variables, or
between other major variables, are accurate & clearly represented.
Results are presented factually and without opinion.
Students interpret the statistical significance of relationships correctly.
Written composition: In addition to criteria listed previously, it is suggested that you organize
this part using the above underlined term & your major variables & relationships between
variables as subheadings.
Part VI: DISCUSSION (Interpretation of the results)
Section begins with brief summary of major findings or highlights.
B  Interpretations of findings (or conclusions) are critically discussed and reported, and are
clear, reasonable, & warranted by the data.
Strengths and weaknesses of study – what went well and/or did not go according to design – are
clearly & accurately identified.
Limitations of study are clearly & accurately identified.
C, D  Application to research practice, policy, and future research is addressed.
Written composition – spelling, punctuation, grammar, & sentence structure.
Part VII: REFERENCES
References formatted in APA style.
Reference list consistent with within-text citations – there are no fewer & no more than, & authors
are same as, those cited within body of paper.
Part VIII: APPENDICES
Research instrument follows references as an Appendix and appears in same format as that
approved by instructor & as seen by respondents.
A  Informed Consent is provided as approved by instructor (if applicable).
Measurement items related to CSWE Objectives:
A. The student will apply social work ethical principles to guide professional practice.
B. The student will apply critical thinking to inform and communicate professional
judgments.
C. The student will engage in research-informed practice and practice-informed research.
D. The student will apply knowledge of human behavior and the social environment.
E. The student will engage diversity and difference in practice.