
A TEST OF THE RELIABILITY AND VALIDITY OF THE
LIFE-EVENTS CALENDAR METHOD USING OHIO PRISONERS
DISSERTATION
Presented in Partial Fulfillment of the Requirements for
the Degree Doctor of Philosophy in the Graduate
School of The Ohio State University
By
James E. Sutton, M.A.
******
The Ohio State University
2008
Dissertation Committee:
Professor Paul Bellair, Advisor
Professor Steven Lopez
Professor Richard Lundman
Approved By
__________________________
Advisor
Graduate Program in Sociology
ABSTRACT
Previous research indicates that self-report surveys can be used to gather quality
information from respondents. However, the accuracy of self-reported data is often
compromised by respondents’ memory gaps, incomplete responses, and inability to recall
the temporal ordering of past events. These considerations are especially pertinent when
surveying offenders and others who lead unstable lives. To address these challenges
social scientists have increasingly adopted life-events calendars.
Prior studies suggest the life-events calendar method improves respondent recall
in survey research. However, reliability and validity tests of life-events calendar data are
limited. Moreover, within criminology reliability and validity tests of life-events
calendar data have not been conducted using samples that generalize to broader offender
or prisoner populations. Accordingly, this dissertation examines the reliability and
validity of self-reported life-events calendar data collected from a sample of Ohio prison
inmates.
This dissertation makes three distinct contributions. First, it outlines the step-by-step process of conducting original research in prisons. The data used in this study were
collected as part of a multifaceted data collection project composed of test and retest
face-to-face interviews with incarcerated offenders, analyses of official Ohio Department
of Rehabilitation and Correction inmate records, and geo-coded neighborhood data. The
detailed story of this project’s development and administration is told through this
dissertation. Special attention is given to topics such as securing IRB approval,
instrument construction, interviewer training, respondent recruitment, scheduling,
researcher presentation of self, doing reflective quantitative research, and resolving
emergent challenges that are specific to conducting research in correctional settings.
Second, this dissertation draws from a sample that is representative of those who
have been disproportionately affected by the recent and unprecedented growth of the U.S.
prison system. The sample was intentionally designed to be more reflective of current
prison populations than those used in previous self-report studies and life-events calendar
research. For instance, the sampling frame consisted of minimum and medium security
level male inmates between the ages of 18 and 32. In terms of demographic
characteristics, criminal histories, and offending patterns, respondents closely matched
those who are now being sent to prison most frequently, getting released, showing the
highest recidivism rates relative to other ex-prisoners, and experiencing the most
noticeable increase in rates of re-arrest.
Third, this dissertation examines the test-retest reliability and criterion validity of
life-events calendar data. Retrospective self-reports of residential moves, job changes,
and arrests during the eighteen-month reference period featured low reliability.
However, moderate to high reliability was found for self-reported use of alcohol and six
other drugs, legal and illegal income, drug dealing, violent offending, property offending,
and three different forms of involvement with the justice system.
Criterion validity tests using official prison records found poor validity for self-reports of arrests over the eighteen-month study period. However, moderate validity was
found for self-reports of total lifetime arrests and convictions, and respondents’
retrospective accounts of age at first arrest and number of prior prison terms featured
strong validity. Poor reliability and validity for self-reported arrests during the study
period may have stemmed from respondent confusion about what constitutes an arrest
and the likelihood that official records are incomplete bases for criterion comparisons.
Several items related to life events, substance use, justice system involvement,
and criminal activity were assessed. Overall, self-reports from the incarcerated men in
the sample featured moderate to high reliability and validity for most indicators
examined, and comparisons of Caucasians and African-Americans found more racial
parity than dissimilarity in reporting behavior. Accordingly, this dissertation found that
prison inmates were good survey respondents and that the life-events calendar method
was an effective strategy for collecting reliable and valid self-reported information.
Most of the incarcerated men in the sample were socially disadvantaged. They
frequently experienced short-term changes in their life circumstances, and many adopted
a foreground orientation as they responded to day-to-day challenges such as supporting
themselves, feeding families, and feeding drug addictions. The life-events calendar
method is particularly well suited for collecting data from individuals who lead unstable
lives. These respondents therefore comprised an ideal sample for reliability and validity
tests of life-events calendar data. The men examined in this dissertation featured
wavering life circumstances and were deemed disreputable by the criminal justice system
and others in mainstream society, yet they typically provided consistent and credible
information in their self-reports.
Dedicated to Jessica and My Parents
ACKNOWLEDGMENTS
Writing these acknowledgments has helped me to further appreciate two
realities. First, I could not have gotten this far in school or in life without the assistance
of others. Second, I am extremely fortunate to have a support system that consists of
several very special people. Getting to this point has been tough, and while it would be
impossible to list the names of everyone who has helped out or believed in me along the
way, I would nonetheless like to take this opportunity to acknowledge some key
contributors to my cause.
First off, I would like to thank the incarcerated men who volunteered to
participate in this project. Though I am unable to list them individually due to space
limitations and human subjects’ protections, I would like to thank them collectively for
sharing their lives during interviews. I would also like to thank the Ohio Department of
Rehabilitation and Correction for allowing us to do research in its institutions. Numerous
ODRC employees at all levels played critical roles in making this project possible.
Among them, I would particularly like to acknowledge Melody Haskins and Shane
Dennis.
Others who have made this project possible include Colin in the Sociology
Research Lab and the members of our research team. The broader project that produced
my dissertation data has been a team effort, and I have enjoyed working closely with
Anita Parker, Brianne Hillmer, Grace Sherman, Rachael Gossett, and the other
interviewers.
I need to express my gratitude to the hundreds of students, staff, and faculty
colleagues with whom I have worked at the Ohio State University, Columbus State
Community College, Hobart and William Smith Colleges, and California State
University, Chico. Jane Wilson and Michelle Blackwell have been particularly helpful
and supportive during my time at OSU. I also want to express my heartfelt thanks to
Professors Jo Beth Mertens and Lee Quinby and Provost Teresa Amott from HWS,
Professor Lori Pompa from Inside-Out and Temple University, and Professors Laurie
Wermuth and Rick Narad at Chico State for their deliberate kindness, support,
encouragement, and mentorship during my socialization into junior faculty life.
A handful of sociology faculty members at Ohio State have been particularly
instrumental. I want to thank Chris Browning, Dana Haynie, Bob Kaufman, and
Townsand Price-Spratlen for serving on my committees in the past. I would especially
like to thank Tim Curry for his consistent support of my objectives and interests,
including his assistance with teaching and the academic job market. Finally, I am
indebted to Steve Lopez for stepping up and filling a void in my dissertation committee at
a critical time. Thank you once again, Steve. I sincerely appreciate your kindness and
willingness to help out in such a big way on such short notice.
I am positioned to graduate with my doctorate and assume a tenure-track job due
to the training and guidance I have received from my two advisors, Richard Lundman
and Paul Bellair. In a department where I have consistently been an outlier, Professors
Lundman and Bellair have granted me the space to carve out my own niche.
Professor Lundman was my advisor from the first day of grad school up to when I
became ABD, and I am just beginning to appreciate the depth of his influence on my
intellectual and professional development. I thank him for caring so deeply about his
work and being a scholarly role model of the utmost integrity whom I can always count
on for constructive (and fun) questions, comments, and observations. Thank you,
Professor Lundman, for the honor.
I also thank Professor Bellair for stepping in to become my advisor during a
crossroads in my graduate student career. It is not an overstatement to say that Paul is the
reason why I have been able to complete my PhD and land an assistant professorship.
Between believing in my potential, giving me the chance to play a central role in his (our)
original data collection project, hiring me as a research assistant, teaching me about the
research process, including conference participation, and supporting my teaching and
career objectives, Paul has consistently been there for me. Thank you, Paul, for your
friendship, support, and mentorship, and thank you also for involving graduate students
in your research for the “right” reasons. You have been a true academic mentor.
I wish to extend my appreciation to my fellow sociology graduate students. I
began grad school with over twenty new friends in G99, and I picked up countless others
along the way. In particular, I would like to say thanks to Marianne Abbott, Lori
Burrington/Muccino, Dennis Condron, Lisa Garoutte, Jennifer Green, Brian Kowalski,
Shelley (& Trevor) Pacholok, and Clayton Peoples. None of this would have been
possible or worthwhile without having my friends to experience it with. Moreover, I
cannot imagine a better friend (and “colleague”) than Lisa Hickman. She is uniquely
special and deserves more thanks than I could ever express.
I want to conclude by acknowledging the support of my former professors at
LBCC and CSULB, lifelong friends, family members, and others whose influence was
crucial, including Sarah, Rita, Barbara and Dennis, the Pattons, the Petersons, the
Garcias, (the late) Caronas, Barry Dank, (the late) Ben Holzman, Lynn, Eric, and my
partners in crime: Devan, Jerry, Matt, Dave, Jim, et al. Aside from my in-laws (whom I
met more recently), all of these people have consistently been on my side not only now
but during earlier times when my personal demons held more influence and my life’s
trajectory swayed in a different direction. My survival is accurately and appropriately
attributed to them.
Last, and furthest from least, I want to thank my wife, Jessica. She has directly
assisted me with dissertation-related tasks, and I have benefited immensely from our
shared interests in prison topics. But much more importantly, she has supported me
unconditionally and believed in my ability when I have not. Thank you, Jes, for making
me happy and better. You are the cornerstone of a future that has provided incentive to
complete my dissertation, and I look forward to sharing life with you.
At the end of the day it is people who matter. The individuals listed above have
inspired me, taught me, supported me, and made it all meaningful, both on the good days
and the bad ones. From my first days to the present, the primary influences in my life
have been my parents, to whom this dissertation is dedicated. Thank you both for
everything! Your unconditional support and love has made the critical difference in my
life. While I can never repay you for all you have done, I am thrilled to be in a position
to share this accomplishment with you.
VITA
February 26, 1973………………….…….Born – Long Beach, California.
1996………………………………….…..A.A. Liberal Studies, Long Beach City College.
Long Beach, California
1998………………………………….…..B.A. Sociology, California State University,
Long Beach. Long Beach, California.
2002………………………………….…..M.A. Sociology, The Ohio State University.
Columbus, Ohio.
1999 - 2006…………………………..…..Graduate Associate, Department of Sociology,
The Ohio State University, Columbus, Ohio.
2005………………………………….…..Adjunct Instructor, Social and Behavioral
Sciences Department, Columbus State
Community College, Columbus, Ohio.
2006 - 2007…………………………..…..Visiting Instructor, Department of Sociology,
Hobart and William Smith Colleges,
Geneva, New York.
2007 - present…………………………….Assistant Professor, Department of Sociology,
California State University, Chico. Chico,
California.
FIELDS OF STUDY
Major Field: Sociology
TABLE OF CONTENTS
Page
Abstract ............................................................................................................................... ii
Dedication .......................................................................................................................... vi
Acknowledgments............................................................................................................. vii
Vita……............................................................................................................................ xii
List of Tables ................................................................................................................... xiv
List of Figures ................................................................................................................ xviii
Chapters:
1. Introduction............................................................................................................. 1
2. The life-events calendar method............................................................................. 9
3. Reliability and validity.......................................................................................... 21
4. Assessing life-events calendar data: Toward testable hypotheses....................... 45
5. Methods and data .................................................................................................. 48
6. Demystifying the data collection process ............................................................. 90
7. The sample .......................................................................................................... 114
8. Results: Reliability and validity......................................................................... 132
9. Discussion and conclusion.................................................................................. 181
Appendix A: Initial recruitment letter............................................................................ 189
Appendix B: Revised recruitment letter ........................................................................ 191
Appendix C: Life-events calendar ................................................................................. 193
Appendix D: ODRC consent form................................................................................. 195
Appendix E: OSU consent form .................................................................................... 197
Appendix F: Field notes form ........................................................................................ 201
Bibliography ................................................................................................................... 208
LIST OF TABLES
Table                                                                                    Page

5.1   Interviewing waves……………………………………………………………...59
5.2   Recruitment rates………………………………………………………………...61
7.1   Most common committing offenses……………………………………………115
7.2   Respondents’ racial background………………………………………………..117
7.3   Respondents’ childhood family environment…………………………………..118
7.4   Respondents’ level of education………………………………………………..119
7.5   Respondents’ annual legal income……………………………………………...120
7.6   Respondents’ marital status at time of arrest…………………………………...120
7.7   Respondents’ parental status at time of arrest…………………………………..121
7.8   Respondents’ number of dependents at time of arrest………………………….121
7.9   Respondents’ age at first arrest…………………………………………………122
7.10  Respondents’ number of lifetime arrests……………………………………….122
7.11  Respondents’ number of jail terms served……………………………………...123
7.12  Respondents’ number of prison terms served…………………………………..124
7.13  Respondents’ life events during the calendar period…………………………...124
7.14  Respondents’ drug dealing during the calendar period…………………………126
7.15  Month eighteen………………………………………………………………….127
7.16  Estimated illegal income in month 18…………………………………………..128
7.17  Substances used in month 18……………………………………………………129
7.18  Average number of drinks consumed…………………………………………...130
8.1   Reports of any job changes for each month during the reference period………..139
8.2   Reports of any residential moves for each month during the reference period….140
8.3   Reports of any school involvement for each month during the reference period..141
8.4   Test-retest correlations of life events across the eighteen-month calendar period.142
8.5   Reports of any illegal income for each month during the reference period……...144
8.6   Test-retest correlations of frequency of substance use across the eighteen-month calendar period…146
8.7   Reports of any alcohol use for each month during the reference period…………149
8.8   Reports of any marijuana use for each month during the reference period………150
8.9   Reports of any crack cocaine use for each month during the reference period…..151
8.10  Reports of any powder cocaine use for each month during the reference period..152
8.11  Reports of any heroin use for each month during the reference period………….153
8.12  Reports of any speed use for each month during the reference period…………..154
8.13  Reports of any prescription drug use for each month during the reference period.155
8.14  Reports of any incarcerations for each month during the reference period………157
8.15  Reports of any treatment program involvement for each month during the reference period…158
8.16  Reports of any probation/parole supervision for each month during the reference period…159
8.17  Reports of any arrests for each month during the reference period………………160
8.18  Reports of any violent offenses for each month during the reference period…….163
8.19  Reports of any property offenses for each month during the reference period…..164
8.20  Reports of any drug dealing for each month during the reference period………..165
8.21  Reports of any arrests for each month during the reference period………………169
8.22  Retest reports of any arrests for each month during the reference period………..170
8.23  Validity correlations of self-report and ODRC data for total lifetime arrests…….171
8.24  Validity correlations of self-report and ODRC data for total lifetime convictions.171
8.25  Validity correlations of self-report and ODRC data for age at first arrest………..172
8.26  Validity correlations of self-report and ODRC data for number of prior prison terms…172
8.27  Summary of reliability and validity kappa results for African-American and Caucasian respondents…175
8.28  Z values for criminal history measures…………………………………………..178
LIST OF FIGURES
Figure                                                                                   Page

5.1   Typical interview seating chart and equipment layout……………………………80
8.1   Sample presentation of kappa results…………………………………………….135
8.2   Benchmark kappa coefficient interpretations (Landis and Koch 1977)…………..136
CHAPTER 1
INTRODUCTION
“One of the most intriguing and frustrating aspects of memory is its
incompleteness. Some people have “good memories” and can remember many
things in great detail: others have “poor memories” and may feel that they are
constantly forgetting things.” (Sudman, Bradburn, and Schwarz 1996: 171)
How accurate are survey respondents’ retrospective accounts? How well do
respondents remember previous experiences? Are there ways in which researchers can
assess, and perhaps improve, respondents’ accuracy when they speak of their pasts?
“The spoken…word has always a residue of ambiguity…yet interviewing is one of the
most common and powerful ways we try to understand our fellow human beings”
(Fontana and Frey 2003: 645). Accordingly, as social scientists we stand to better discern
the lives of others by assessing and ultimately reducing ambiguities in their memories.
The accuracy of self-reported data is conventionally determined by evaluating
reliability and validity (Litwin 1995; Gottfredson 1971). Self-report surveys are often the
most viable option for researchers who seek to learn about people, and they generally
provide useful information (Northrup 1997). However, prior research indicates that self-reports may feature patterned ambiguities such as memory gaps and may therefore not
always be complete.
For instance, it has been suggested that self-reports “not be used to test
hypotheses that demand precision in estimating event frequencies and event dates”
(Henry et al. 1994: 92). Moreover, researchers have found that the reliability of self-report data may be influenced by when the behaviors and events in question occurred and
their importance to respondents (Blumstein et al. 1986) and how often they took place
(Anglin, Hser, and Chou 1993; Blumstein et al. 1986). To compensate for challenges to
the accuracy of self-report data, Blumstein and colleagues (1986: 98) proposed designing
surveys that specifically account for factors known to affect reliability and validity. An
approach developed to meet this objective is the life-events calendar method.
The Life-Events Calendar Method: An Emerging Research Strategy
The life-events calendar method, also commonly referred to as the life-history
calendar method and event-history calendar method, entails having interviewers work
with respondents to fill out paper-based monthly calendars that chart when various events
occurred in the respondents’ lives (Freedman et al. 1988). This mode of administration
enables researchers to efficiently collect complicated, month-to-month longitudinal data
from respondents (Axinn, Pearce, and Ghimire 1999). Given that survey respondents
may “telescope,” or inaccurately report that an event happened during the researcher’s
time frame when it in fact occurred earlier or later (Sudman and Bradburn 1974; Sudman,
Bradburn, and Schwarz 1996), an advantage to using the life-events calendar method is
that it facilitates respondents’ abilities to more accurately remember the timing of key
events in their backgrounds (Axinn, Pearce, and Ghimire 1999; Lin, Ensel, and Lai
1997).
Despite the ability of the life-events calendar method to improve respondent recall
in survey research, it has yet to be widely adopted by researchers (Axinn, Pearce, and
Ghimire 1999). A review of the literature reveals a limited but growing number of recent
examples within sociology that have utilized the life-events calendar method, including a
social networks and organizational dynamics study that was published in the American
Sociological Review in 1992 (see McPherson, Popielarz, and Drobnic 1992) and a project
on voluntary association memberships that was featured in The American Journal of
Sociology in 1995 (see Popielarz and McPherson 1995).
Within criminology, studies that used the life-events calendar method to examine
relationships between changes in offenders’ life circumstances and their patterns of
criminal activity were recently published in the American Sociological Review (see
Horney, Osgood, and Marshall 1995), Criminology (see Kruttschnitt and Carbone-Lopez
2006; Uggen 2000a) and the Journal of Research in Crime and Delinquency (see
MacKenzie and Li 2002; Wittebrood and Nieuwbeerta 2000). Moreover, the findings
from a test of the life-events calendar method’s utility for recalling violent offending
were published in the Journal of Quantitative Criminology (see Roberts et al. 2005).
A cursory examination of prior studies yields two observations. First, research
using the life-events calendar method has recently been published in the top sociology
and criminology journals. Accordingly, it can be argued that studies utilizing the life-events calendar method have been granted legitimacy and visibility within the field,
despite being relatively few in number. Second, relying on the life-events calendar
method to examine how criminal behavior changes over time has been found to be an
effective research strategy (Horney, Osgood, and Marshall 1995; Lewis and Mhlanga
2001; MacKenzie and Li 2002).
The Life-Events Calendar Method and Criminology
Along with official statistics and victimization studies, self-reports from offenders
are one of the three main sources of quantitative data typically used in criminological
research (O’Brien 2000; Thornberry and Krohn 2000). Although the literature suggests
that offenders generally provide reliable and valid responses in self-report surveys
(Chaiken and Chaiken 1982; Hindelang, Hirschi, and Weis 1981; Horney and Marshall
1992; Marquis 1981; Weis 1986), patterned ambiguities that customarily emerge in
survey research remain a concern. A handful of recent studies within criminology have
employed the life-events calendar method to address respondent recall problems that
commonly affect the reliability and validity of self-report data (Horney et al. 1995; Lewis
and Mhlanga 2001; MacKenzie and Li 2002; Roberts et al. 2005; Uggen 2000a;
Yoshihama et al. 2002).
Moreover, mainstream criminologists are increasingly theorizing about
relationships between life-course occurrences and patterns of criminal behavior
(Blokland and Nieuwbeerta 2005; Farrington 2003; Hagan and McCarthy 1998; Laub
and Sampson 2003; Moffitt 1997; Sampson and Laub 1992; Uggen 2000b). The
traditional approach of employing cross sectional designs to study causes of crime is
limiting when examining dynamic relationships between life-course processes and
offending (Junger-Tas and Marshall 1999; Thornberry and Krohn 2000). Given that
longitudinal designs are particularly well suited for analyzing these relationships,
researchers are beginning to turn to the life-events calendar method to capture
relationships between life course events and variations in offending over time (Hagan and
McCarthy 1998; Horney, Osgood, and Marshall 1995; Laub and Sampson 2003;
MacKenzie and Li 2002; Roberts et al. 2005; Uggen 2000a; Wittebrood and
Nieuwbeerta 2000).
Limits of Previous Research
There are two important limitations to previous research using the life-events
calendar method. First, there are few prior studies on the reliability and validity of life-events calendar data (Caspi et al. 1996). For instance, my survey of the literature
produced just five studies that have assessed reliability and seven studies that have
assessed validity in life-events calendar research. The majority of these tests have been
conducted in disciplines other than criminology, and they have all been based on samples
that may not generalize to a criminally active population.
A second limitation to prior research utilizing the life-events calendar method is
that researchers have not established that their data are representative of the broader
prison population. For instance, Horney, Osgood, and Marshall’s (1995) influential work
using the life-events calendar method was based on a sample of ‘serious’ offenders, while
Roberts and colleagues (2005) examined respondents from a psychiatric emergency
room. While these populations are important for criminologists to study, it could be that
these groups are more embedded in crime than less serious offender populations. Over
the past two decades it has increasingly become the case that a majority of inmates are
being imprisoned for what many consider to be less serious, or minor, offenses (Austin
and Irwin 2001; Elsner 2006; Mauer 1999; Peters 2004: 174). Accordingly, as the adult
prison system continues to lock up disproportionate numbers of lower-level offenders,
researchers need to examine the efficacy of the life-events calendar method for studying
populations that feature greater variability.
The Current Study
To account for the shortcomings of previous research, the current study tests the
reliability and validity of self-reported data collected from adult prison inmates using the
life-events calendar method. Moreover, this study also examines factors that affect the
reliability and validity of the life-events calendar method for criminological research.
Increased attention to the life course and the preliminary success of prior applications of
life-events calendars strongly suggest that the life-events calendar method will be used by
criminologists more frequently in the future. Analyses of how the life-events calendar
method can be used to assess and minimize the patterned ambiguities within offender
self-reports are therefore timely and necessary.
Data and Sample
This study employs life-events calendar data and survey data collected from
prison inmates in the state of Ohio between April of 2005 and June of 2006 to examine
whether or not prisoners provide reliable retrospective accounts and valid responses.
The sampling frame consisted of minimum and medium security level male inmates
between the ages of 18 and 32 who had been in prison for less than one year. The data
used in this study were collected as part of a multifaceted, original data collection project
composed of test and retest face-to-face interviews with incarcerated offenders, analyses
of official Ohio Department of Rehabilitation and Correction inmate records, and geo-coded neighborhood data. These data are unique in that they were collected from a
sample that was intentionally designed to be more representative of the current prison
population than those used in previous life-events calendar research.
Reliability and Validity
This study extends the life-events calendar method within criminology by
providing the first test-retest analysis of the reliability of life-events calendar data
collected from prison inmates. The test-retest method entails interviewing the same
respondents on multiple occasions and is a preferred strategy for testing reliability
(Carmines and Zeller 1979; Singleton and Straits 1999: 117; Thornberry and Krohn
2000). Given that inconsistent responses over time may indicate unreliable data
(Carmines and Zeller 1979), and in some cases deception on the part of the respondent
(Northrup 1997), findings from test-retest analyses in this study provide insights into
whether or not incarcerated offenders give reliable responses.
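To make the logic of the test-retest comparison concrete, the sketch below computes Cohen's kappa, the agreement statistic reported in Chapter 8, for one respondent's dichotomous monthly reports at test and retest. It is illustrative only: the function and all response values are hypothetical and are not drawn from the project's data or code.

```python
# Minimal sketch of a test-retest agreement check using Cohen's kappa.
# All response values below are hypothetical, for illustration only.

def cohens_kappa(test, retest):
    """Cohen's kappa for two equal-length binary (0/1) response vectors."""
    n = len(test)
    observed = sum(t == r for t, r in zip(test, retest)) / n          # p_o
    p1_test, p1_retest = sum(test) / n, sum(retest) / n
    # Chance agreement: both report 1, or both report 0.
    expected = p1_test * p1_retest + (1 - p1_test) * (1 - p1_retest)  # p_e
    return (observed - expected) / (1 - expected)

# Hypothetical reports of "any drug dealing" across an eighteen-month
# reference period, given by the same respondent at test and at retest.
test_wave   = [0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
retest_wave = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(test_wave, retest_wave), 2))  # 0.77
```

Under the benchmark interpretations of Landis and Koch (1977) referenced in Figure 8.2, a kappa of .77 would fall in the "substantial" agreement range.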
In addition to examining the consistency of inmate accounts, this study advances
the life-events calendar method by testing the validity of life-events calendar data
collected from adult prisoners. Previous researchers have argued that self-reports must
be compared to independent sources such as official records to adequately assess validity
(Hindelang, Hirshci, and Weis 1981). Moreover, comparing self-reports to official data
has become an established practice for testing validity in criminological research (Elliott,
Dunford, and Huizinga 1987; Hindelang, Hirschi, and Weis 1979; Maxfield, Weller, and
Widom 2000). Accordingly, this study compares life-events calendar data collected from
incarcerated offenders with information contained in their official Ohio Department of
Rehabilitation and Correction records to assess the validity of prisoner responses.
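As a brief illustration of this comparison, the sketch below correlates self-reported lifetime arrest counts with arrest counts drawn from official records, in the spirit of the validity correlations summarized in Tables 8.23 through 8.26. All numbers are invented for illustration and do not come from the project's data.

```python
# Minimal sketch of a criterion validity check: Pearson's r between
# self-reported counts and counts from official records. All values
# below are hypothetical, for illustration only.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

self_reported_arrests = [3, 7, 1, 12, 5, 9, 2, 6]    # hypothetical
official_record_arrests = [2, 8, 1, 10, 6, 9, 3, 5]  # hypothetical
print(round(pearson_r(self_reported_arrests, official_record_arrests), 2))  # 0.95
```

A strong positive correlation between the two sources would be read as evidence of criterion validity; a weak one could reflect inaccurate reporting, incomplete official records, or both.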
The Sections That Follow
The remaining sections begin with a more thorough introduction to the life-events
calendar method in Chapter 2, followed by a review of the literature on reliability and
validity in Chapter 3. Testable research hypotheses are then presented in Chapter 4.
Given that this study extends from a broader, original data collection project, Chapters 5,
6, and 7 collectively tell the story of how this research was conducted, where it took
place, and whom it examined. For instance, Chapter 5 describes the data and methods
used, Chapter 6 offers a reflective account of doing research in prison, and Chapter 7
contains descriptive information about the sample. More sophisticated analyses testing
the reliability and validity of the data are then presented in Chapter 8, and a concluding
discussion that summarizes the current study’s contributions to the literatures on the life-events calendar method, reliability, validity, survey methods, and the use of longitudinal
self-reports in criminology is contained in Chapter 9. Taken together, these contributions
will answer the questions of whether or not incarcerated offenders are accurate in their
retrospective accounts of prior offending and how the accuracy of their responses can be
assessed and improved.
CHAPTER 2
THE LIFE-EVENTS CALENDAR METHOD
This chapter begins with a thorough introduction to the life-events calendar
method. Summaries of current applications of the life-events calendar method in the
social sciences and criminology are then provided, followed by an outline of
methodological developments that have facilitated its adoption by criminologists.
The Life-Events Calendar Method: Extending Traditional Self-Reports
The life-events calendar method is beginning to receive serious attention from
researchers because it improves upon traditional self-reports (Belli, Shay, and Stafford
2001; Caspi et al. 1996; Freedman et al. 1988; Lin, Ensel, and Lai 1997). Self-report
research entails the asking and answering of questions (Northrup 1997), yet social
scientists have typically neglected the ways in which cognitive processes affect
respondents’ answers (Bradburn, Rips, and Shevell 1987). Considering that memory
problems can fundamentally compromise the quality of self-report data (Belli 1998;
Sudman, Bradburn, and Schwarz 1996), the consequences of this oversight cannot be
overstated. The life-events calendar method is specifically designed to account for the
role of cognition in response behavior (Belli 1998; Belli, Shay, and Stafford 2001),
giving it an advantage over traditional self-report methods.
The life-events calendar method has emerged over the past twenty years as an
extension of traditional self-report surveys. Interviewers who conduct surveys that
incorporate this method work with respondents to fill out calendars that chart when
various events occurred in the respondents’ lives (Freedman et al. 1988). These
calendars are typically paper-based (see Freedman et al. 1988; Lewis and Mhlanga 2001),
though technological advances have led to the use of computer-based calendars in some
recent studies (see Belli 2000; Horney 2001; Wittebrood and Nieuwbeerta 2000).
Life-events calendars collect retrospective data that correspond with time periods
and durations ranging anywhere from a few days to several years, depending on the study
(Lin, Ensel, and Lai 1997). For instance, the life-events calendar method has been used
to collect month-to-month data over a period of 2-3 years per respondent (Horney,
Osgood, and Marshall 1995), gather daily information from respondents during a one
month timeframe (Yacoubian 2003), and collect yearly data for respondents throughout a 40-year period (Laub and Sampson 2003).
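Whatever the reference period, the resulting data share a simple analytic structure: one record per respondent per time unit, with an indicator for each behavioral domain. The sketch below shows one way such data might be represented for analysis; it is an assumed representation with hypothetical field names, not the instrument or data format of any particular study.

```python
# Minimal sketch of a month-indexed life-events calendar record.
# Field names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class CalendarMonth:
    month: int                      # e.g., 1..18 within the reference period
    employed: bool = False
    residential_move: bool = False
    any_arrest: bool = False
    any_drug_use: bool = False

@dataclass
class RespondentCalendar:
    respondent_id: str
    months: list = field(default_factory=list)

    def series(self, domain: str) -> list:
        """Return a 0/1 monthly time series for one domain, e.g., 'any_arrest'."""
        ordered = sorted(self.months, key=lambda m: m.month)
        return [int(getattr(m, domain)) for m in ordered]

cal = RespondentCalendar("R001")
cal.months.append(CalendarMonth(month=1))
cal.months.append(CalendarMonth(month=2, any_arrest=True))
print(cal.series("any_arrest"))  # [0, 1]
```

Month-by-month series of this kind are what test-retest comparisons like those in Chapter 8 ultimately operate on.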
Accounting for Cognitive Processes
The fundamental premise of life course theory, which is that human lives are best
understood as a set of experiences and events that are interconnected and mutually
reinforcing (Wheaton and Gotlib 1997), is a guiding principle for researchers who have
pioneered the life-events calendar method’s development (Freedman et al. 1988).
Moreover, the life-events calendar method is informed by findings from cognitive science
that suggest people’s memories are organized within the brain in memory structures that
are both patterned and interrelated (Belli 1998; Bradburn, Rips, and Shevell 1987; Caspi
et al. 1996; Sudman, Bradburn, and Schwarz 1996). The dynamic nature of lived
experience is reflected in how humans store memories, which suggests important
implications for social science research.
For instance, given that human memories are interconnected, it is potentially
possible to manipulate memory structures to facilitate respondent recall more effectively
(Sudman, Bradburn, and Schwarz 1996). Belli (1998) notes that “personally experienced
events are structured in autobiographical memory by hierarchically ordered types of
memories for events that vary in their scope and specificity, and this structure is
organized along temporal and thematic pathways which guide the retrieval process” (p.
385). Life-events calendars are specifically designed to tap into these temporal and
thematic pathways, making them advantageous over traditional survey methods (Belli
1998; Belli, Shay, and Stafford 2001).
An example of a strategy that utilizes memory structures to facilitate recall is
having a researcher begin an interview by asking the respondent to indicate memorable
events on a life-events calendar, such as a graduation or anniversary. These events might
then be used to trigger memories of mundane or taken-for-granted activities that were
happening during the times more memorable events took place (Caspi et al. 1996).
Another technique for helping respondents conjure up memories is to pose a
number of questions that are organized around a common theme while simultaneously
working with the respondent to map his or her responses on a life-events calendar. For
instance, a survivor of domestic violence could be asked a series of questions about his or
her abuser, where the violence took place, common patterns of abuse, and other relevant
information (Yoshihama et al. 2002). This approach aids respondents in their
recollections because humans often remember related events together (Bradburn, Rips,
and Shevell 1987).
The distinctive ability to stimulate respondents’ memories sets the life-events
calendar method apart from other survey approaches (Belli, Shay, and Stafford 2001).
Researchers have noted that life-events calendar interviews are more dynamic than
traditional self-report surveys because respondents are presented with both mental
(questions) and visual (calendars) “cues” to facilitate recall (Axinn, Pearce, and Ghimire
1999; Caspi et al. 1996; Freedman et al. 1988; Harris and Parisi 2007; Horney 2001;
Laub and Sampson 2003: 67).
Gardner (1983) asserted that “human cognition…includes a far wider…set of
competences than we have ordinarily considered…many if not most of these
competences do not lend themselves to measurement by standard verbal methods” (p. x),
and he subsequently argued that there are visual and spatial components to intelligence.
In line with Gardner’s suppositions, the use of both mental and visual prompts to
generate data suggests that the life-events calendar method taps into a larger pool of
memories than traditional self-reports.
Survey researchers have observed that respondents may “telescope,” or
inaccurately report that an event happened during the researcher’s time frame when it in
fact occurred earlier or later (Bradburn, Rips, and Shevell 1987; Sudman and Bradburn
1974; Sudman, Bradburn, and Schwarz 1996). Through the use of cues and reference
points, deliberate question sequences, and visual calendars, the life-events calendar
method has been found to minimize telescoping when compared to traditional self-report
surveys (Lin, Ensel, and Lai 1997).
Researchers who have used traditional self-reports have called for improved
methods that account for how people remember their pasts (see Anglin, Hser, and Chou
1993), better specify the temporal ordering of events (see Anglin, Hser, and Chou 1993;
Yoder, Whitbeck, and Hoyt 2003) and better capture longitudinal offending patterns
(Weis 1986: 42). The life-events calendar method has been designed to answer these
calls, and findings suggest that it has been successful (Belli, Shay, and Stafford 2001;
Caspi et al. 1996; Freedman et al. 1988; Lin, Ensel, and Lai 1997).
An Interactive Mode of Administration
A methodological advantage of using the life-events calendar method over
standard techniques is that its format facilitates interaction between the interviewer and
respondent (Harris and Parisi 2007). Traditional surveys employ a formalized style of
interviewing characterized primarily by unilateral exchanges, which does little to
establish rapport with the respondent (Cannell and Kahn 1968: 527; Fontana and Frey
2003).
By way of contrast, the administration of the life-events calendar method creates a
conversational dynamic as interviewers and respondents work together to fill out
calendars and interviewers read introductory statements that frame sets of interrelated
questions (Belli, Shay, and Stafford 2001). A benefit of facilitating this type of
interviewing context is that it provides opportunities for interviewers to both catch
inconsistencies in respondents’ accounts and probe for clarification (Belli, Shay, and
Stafford 2001; Caspi et al. 1996).
Having a situation where interviewers and respondents work together toward the
common goal of completing a calendar helps establish rapport, which results in improved
data accuracy and happier respondents. For instance, Engel, Keifer, and Zahm (2001)
compared their administrations of life-events calendar and traditional self-report surveys
and found that respondents who filled out life-events calendars put more effort into
providing interviewers with correct information. Moreover, the researchers noted that
respondents in the life-events calendar interviews were also more cooperative and patient.
Interpersonal dynamics are explicitly recognized concerns when doing qualitative
research (see Adler 1993; Fontana and Frey 2003: 655; Horowitz 1986; Miller 2001: 29).
Although social interaction is often taken for granted in quantitative studies (Blumer
1969), it is imperative that survey researchers also care about relations between
interviewers and respondents (see Northrup 1997; Sudman and Bradburn 1974). When
compared to traditional self-report surveys, the life-events calendar method appears to be
preferable because its mode of administration allows for more interpersonal interaction
between interviewers and respondents (Belli, Shay, and Stafford 2001; Caspi et al. 1996).
Summary
The life-events calendar method enables researchers to efficiently collect
complicated longitudinal data from respondents (Axinn, Pearce, and Ghimire 1999). The
ongoing development of the method has been informed by findings from the cognitive
science literature (Belli 1998; Bradburn, Rips, and Shevell 1987), and the way in which it
is administered allows interviewers to establish rapport with respondents and straighten
out inconsistencies while in the process of collecting data (Belli, Shay, and Stafford
2001).
Aside from the methodological advantages, the life-events calendar method is
appealing on a more practical level because it allows researchers to collect longitudinal
data from a cross-sectional sample, which is substantially cheaper and more time efficient
than conducting a longitudinal panel study (Dex 1991: 5; Freedman et al. 1988; Lin,
Ensel, and Lai 1997). It is for these reasons that the life-events calendar method is
emerging as a preferred data collection strategy in the social sciences.
Current Applications
The life-events calendar method is particularly useful for gathering data from
populations whose members feature chaotic lives, reading difficulties, memory problems,
and multiple changes over the life course (Engel, Keifer, and Zahm 2001; Zahm et al.
2001). A sample of substantive topics in the social sciences that have recently been
studied with life-events calendars includes contraception and childbirth patterns (Axinn,
Pearce, and Ghimire 1999), psychological distress and stressors over the life course
(Ensel et al. 1996; Lin, Ensel, and Lai 1997), the medical risks of migrant farm laborers
(Engel, Keifer, and Zahm 2001; Zahm et al. 2001), social networks (McPherson,
Popielarz, and Drobnic 1992), the educational, marital, and employment patterns of
young adults (Freedman et al. 1988), welfare transitions (Harris and Parisi 2007), and
memberships in voluntary associations (Popielarz and McPherson 1995).
Moreover, the life-events calendar method has also been incorporated into
research examining topics related to deviance and crime, including homeless youth
(Hagan and McCarthy 1998; Whitbeck, Hoyt, and Yoder 1999), the life course
trajectories of juvenile delinquents (Laub and Sampson 2003), victimization (Wittebrood
and Nieuwbeerta 2000; Yoshihama et al. 2002; Yoshihama et al. 2005), the behavior of
offenders on probation (MacKenzie and Li 2002), drug use (Day et al. 2004), drinking
(Sobell et al. 1988), and criminal offending patterns (Horney, Osgood, and Marshall
1995; Kruttschnitt and Carbone-Lopez 2006; Lewis and Mhlanga 2001; Roberts et al.
2005; Yacoubian 2003).
The utility of the life-events calendar method for studying criminological topics is
obvious given its ability to facilitate recall from those who lead unstable lives (Engel,
Keifer, and Zahm 2001; Zahm et al. 2001). However, it is important to recognize that the
use of life-events calendars in criminology has not occurred by coincidence but is instead
the outcome of historical developments in two of the discipline’s dominant research
methods.
Framing the Context: The Origins of the Calendar Method in Criminology
Criminologists have been drawn to the life-events calendar method because it
enables offenders to provide detailed information about their backgrounds and
experiences (Horney, Osgood, and Marshall 1995; MacKenzie and Li 2002; Wittebrood
and Nieuwbeerta 2000). However, the practice of collecting responses from offenders is
not new (Junger-Tas and Marshall 1999). Examining prior criminological research
methods that emphasize the asking and answering of questions is crucial in recognizing
the life-events calendar method’s distinct contributions to criminology.
Life Histories in Criminology
Obtaining firsthand accounts is a well-established practice in criminology. For
instance, sociologists affiliated with the University of Chicago in the early 1900s were
among the first to gather life history data to gain personal insights into the lives of those
they studied (Williams and McShane 2004: 58-59). Among the notable life history
studies to emerge from the Chicago School were The Jack Roller (Shaw 1930), The
Natural History of a Delinquent Career (Shaw 1931), The Professional Thief (Sutherland
1937), and Brothers in Crime (Shaw, McKay, and McDonald 1938). These classic works
in criminology all drew from information that was personally shared by active offenders,
and collectively they set the precedent for future qualitative research on crime and
delinquency (Becker 1966).
The life history method continues to be used in contemporary criminological
research (see Athens 1992; Maruna 2001; Messerschmidt 2000; Pino 2005; Steinberg
2004; Terry 2003). Consistent with earlier life histories collected by researchers at the
University of Chicago, virtually all of the recent life history projects in criminology have
been qualitative studies. To quote Howard S. Becker (1966: xii), “…the life history can
be particularly useful in giving us insight into the subjective side of much-studied
institutional processes about which unverified assumptions are…often made.”
Accordingly, the life history method’s strength is its ability to provide rich, in-depth
insights into the subject’s own experiences and the broader social forces that shape them
(Connell 1992).
Despite the qualitative foundation of life history research, three recent life history
studies have also incorporated life-events calendars into their designs (see Hagan and
McCarthy 1998; Kruttschnitt and Carbone-Lopez 2006; Laub and Sampson 2003).
Considering that life-events calendar data lend themselves to quantitative analyses (Caspi
et al. 1996), the adoption of multi-method approaches by high-profile scholars suggests a
potential methodological turning point in life history research.
Moreover, as life course theorizing continues to shape mainstream criminological
thought (Blokland and Nieuwbeerta 2005; Laub and Sampson 2003; Sampson and Laub
1992), more attention will be devoted to integrating qualitative and quantitative research
methods (Sullivan 1998). It is likely that life history researchers will increasingly rely on
life-events calendars to track patterns of offending. Accordingly, an assessment of the
life-events calendar method is a timely and necessary undertaking that can potentially
inform future developments in life history methods and longitudinal research.
The Self-Report Method
Another research strategy that collects firsthand responses from offenders is the
self-report method (Junger-Tas and Marshall 1999). While the life history method offers
in-depth information and “shares with autobiography its narrative form, first-person point
of view and its frankly subjective stance” (Becker 1966: v), the self-report method relies
on the use of survey questions (Northrup 1997; O’Brien 2000). Self-reports are a well
established data collection strategy in criminology. For instance, self-reported data were
utilized in the Gluecks’ classic longitudinal research on juvenile delinquency (Glueck and
Glueck 1950), and self-reports were also among the data sources incorporated into the
foundational works on prison culture (see Clemmer 1940; Sykes 1958). Despite the use
of self-reports in these prominent studies, Short and Nye’s survey research on
delinquency during the late-1950s is typically credited with legitimizing and
“popularizing” the use of the self-report method in criminology (O’Brien 1985;
Thornberry and Krohn 2000; see Nye and Short 1957; Short and Nye 1957).
Thornberry and Krohn (2000) note that Short and Nye’s research received
widespread attention for two reasons. First, their findings questioned taken-for-granted
assumptions of an inverse relationship between class and crime. Second, their research
was more scientifically rigorous than prior self-report studies, which lent credibility to
the use of self-report methods in criminology. Subsequent researchers relied on self-reported data to test etiological theories of crime and delinquency, including social
control theory (Hirschi 1969) and social learning theory (Akers et al. 1979), and findings
from highly regarded studies suggested that self-reported data were valid for
criminological research (Chaiken and Chaiken 1982; Hindelang, Hirschi, and Weis 1981;
Marquis 1981; Weis 1986). Taken together, these developments account for how selfreports have become one of the dominant strategies for studying offending (Junger-Tas
and Marshall 1999; O’Brien 2000; Thornberry and Krohn 2000).
Criminological theorists are now devoting more attention to thinking about the
life course (Hagan and McCarthy 1998; Laub and Sampson 2003; Sampson and Laub
1992), and researchers are recognizing that self-report methodology needs to expand
beyond its traditional focus on causes of crime to examine relationships between life-course processes and offending (Junger-Tas and Marshall 1999; Thornberry and Krohn
2000; Weis 1986). Yoder, Whitbeck, and Hoyt (2003) gave voice to the methodological
challenges of this new emphasis on the life course when they asserted that traditional
self-reports could not be used to determine the ordering of life events in their study of
homelessness and gang activity. In their reflections on how their study could have been
improved, Yoder and colleagues specifically endorsed the life-events calendar method as
a strategy with the potential to better specify longitudinal relationships in self-report
studies.
Summary
As criminologists have begun focusing more on life course processes (Farrington
2003; Hagan and McCarthy 1998; Laub and Sampson 2003; Moffitt 1997; Sampson and
Laub 1992), concomitant evolutions of well established research methods have generated
growing interest in the life-events calendar method. These developments are exciting,
and they are also unprecedented. For instance, after nearly a century of focusing
explicitly on qualitative analyses, life history researchers are now beginning to
incorporate quantitative data into their designs through the use of life-events calendars
(see Hagan and McCarthy 1998; Laub and Sampson 2003). Similarly, after several
decades of being used primarily to examine the etiology of crime, self-report surveys are
now being used to gather data on longitudinal relationships between life course variables
and offending (Junger-Tas and Marshall 1999; Thornberry and Krohn 2000). These
recent advancements have resulted in the emergence of the life-events calendar method as
an innovative strategy for collecting dynamic quantitative data from offenders (see
Horney, Osgood, and Marshall 1995).
CHAPTER 3
RELIABILITY AND VALIDITY
Researchers need to be concerned about the quality of their data, which is
typically evaluated by assessing reliability and validity (Litwin 1995: 124; Gottfredson
1971). The life-events calendar method is a relatively new data collection strategy that
has only been used sporadically (Axinn, Pearce, and Ghimire 1999). Prior studies on the
reliability and validity of data collected with life-events calendars are therefore limited
(Caspi et al. 1996). Moreover, the few tests of the reliability and validity of life-events
calendar data that do exist have not been based on representative samples of offenders or
prisoners. The current study addresses these shortcomings by testing the reliability and
validity of life-events calendar data collected from individuals who are in prison. The
beginning section of this chapter examines recent incarceration trends and provides an
introductory discussion on prisoner samples. Reviews of the literature on the different
types of reliability and validity then follow.
Emergent Trends: The Life-Events Calendar Method and Prisoner Samples
A limitation of previous life-events calendar studies in criminology is that they
have drawn from samples that are not representative of broader offender or prisoner
populations. For instance, Horney, Osgood, and Marshall’s (1995) influential work using
the life-events calendar method was based on a sample of ‘serious’ offenders, while
Roberts et al. (2005) examined respondents from a psychiatric emergency room.
Similarly, MacKenzie and Li (2002) used the life-events calendar method to study
probationers, and Kruttschnitt and Carbone-Lopez (2006) focused on female jail inmates,
who comprise only 11% of those in jail (Chesney-Lind and Pasko 2004: 140) and
constitute a unique population when compared to males in jail (Irwin 1985: xiii).
These populations are certainly important for criminologists to examine.
However, there is also a pressing need to study populations that have disproportionately
been affected by recent incarceration and recidivism trends. The prison population in the
United States has experienced tremendous growth in the last 25 years. For instance, there
were 1,525,924 adults being held in state and federal prisons at the end of 2005 (Harrison
and Beck 2006), roughly triple the 501,886 inmates who were incarcerated in jails, state
and federal prisons in 1980 (Beck and Gilliard 1995).
The number of prison inmates grew at an average rate of 3.1% each year from
1995 to 2004, and the incarceration rate rose from 601 per 100,000 residents in 1995 to
737 per 100,000 in 2005 (Harrison and Beck 2006). These statistics are especially telling
when they are contrasted with incarceration rates of 221 per 100,000 residents in 1980
and 312 per 100,000 residents in 1985 (Beck and Gilliard 1995). Taken together, these
sobering figures underscore the relatively recent and unprecedented expansion of the U.S.
prison system.
Considering that 95% of those who go to prison are eventually released (Hughes
and Wilson 2002), the dramatic increase in the number of people going to prison has
ultimately led to a flood of ex-offenders who are struggling to reintegrate into society
(Petersilia 2003). Unfortunately, research on inmates released in 1994 suggests that
approximately 67% of these ex-offenders will be arrested for a new offense within three
years of leaving prison, which is up from around 62% of those released in 1983 (Langan
and Levin 2002).
The majority of those who are now imprisoned have been locked up for what
many consider to be less serious, or minor, offenses (Austin and Irwin 2001; Elsner 2006;
Mauer 1999). These offenders go on to comprise a substantial portion of those who get
released. For instance, the percentage of inmates released from prison who were drug
offenders went up from 26% in 1990 to 33% in 1999, while 31% of those released in
1999 were property offenders and 25% had served time for violent offenses (Hughes and Wilson
2002).
Offenders who served time for burglary, larceny, motor vehicle theft, and
possession of stolen property have the highest recidivism rates relative to other
ex-prisoners, while murderers and rapists rank amongst those who are least likely to be
re-arrested (Langan and Levin 2002). Moreover, while the recidivism rates for violent
offenders showed little change between 1983 and 1994, the rates of re-arrest for
lower-level offenders during this period went up substantially, including a jump from
68.1% to 73.8% for property offenders and increases from 50.4% to 66.7% for drug
offenders and from 54.6% to 62.2% for public order offenders (Hughes and Wilson 2002).
Implications of Current Trends
These trends suggest two important implications for crime research. First, studies
focused primarily on violent offenders are not representative of the majority of inmates
serving time in prison. It is therefore crucial that criminologists conduct more self-report
research with samples reflective of the changing prison population. Second, recidivism
rates are particularly high for lower-level offenders. Moreover, rates for these groups are
rising rapidly, and they have shown disproportionate increases when compared to rates
for other offenders.
The need to better understand the life circumstances that contribute to higher
re-offending rates among less serious offenders is clear, and future studies will
increasingly examine the growing populations of prisoners and recidivists. Some of these efforts will
potentially benefit from incorporating the life-events calendar method into their designs.
However, researchers must first assess the quality of life-events calendar data for
criminological research.
Can We Really Trust What Prisoners Say?
Scholars who study people of disrepute often encounter skeptics who doubt that
deviants and criminals would be honest with interviewers (Anglin, Hser, and Chou 1993;
Maruna 2001; Rhodes 1999: 60). Hughes (1945) observed that individuals often
unwittingly expect others to possess certain “auxiliary” traits based on their most visible
social identities. Accordingly, it is likely that those who dismiss the truthfulness of
offenders’ responses have associated auxiliary traits such as dishonesty and insincerity
with criminality, regardless of whether they have evidence to support these linkages.
Despite fears that prisoners would provide dishonest answers in criminological research
(see Sorensen 1950), the literature suggests that incarcerated offenders usually provide
researchers with reliable and valid information (Chaiken and Chaiken 1982; Horney and
Marshall 1992; Lewis and Mhlanga 2001; Marquis 1981).
A number of plausible explanations exist for why prisoners would be truthful.
Those who are incarcerated may not have anything left to conceal (Hser, Maglione, and
Boyle 1999), they might not perceive any additional threat since they are already locked
up (Junger-Tas and Marshall 1999: 323; Lewis and Mhlanga 2001), or they may not feel
the need to engage in impression management since they are already labeled as deviant
(Junger-Tas and Marshall 1999: 323; Weis 1986: 26).
Moreover, what might seem like sensitive lines of questioning to outsiders may in
fact be relatively innocuous to those who are being interviewed (Hser, Maglione, and
Boyle 1999; Northrup 1997). Support for this supposition comes from a recent study that
found jail inmates provided more valid self-reports of drug use than individuals who were
not incarcerated (Hser, Maglione, and Boyle 1999). In accounting for this finding the
researchers suggested that drug use is probably not a sensitive topic among those who are
in jail. Jolliffe and colleagues (2003) reached a similar conclusion when examining the
validity of self-reported drug use by youths in Seattle.
Two basic reasons for prisoners to be truthful may be that they welcome the
temporary escape from day-to-day prison life that comes with being interviewed and they
like having an opportunity to talk about themselves (Copes and Hochstetler 2006; Lewis
and Mhlanga 2001). For instance, Wright and colleagues (1992) conducted ethnographic
research with burglars and found that many of their respondents were forthcoming and
engaged in the project. They concluded that the burglars they studied may have been
motivated to share their experiences because individuals who engage in crime often do
not have anyone to talk to about their work. Desires to relieve boredom and converse
with others are not exclusive to deviants. Considering that “criminal behavior is human
behavior” (Sutherland and Cressey 1970: 73), it is conceivable that prisoners are
motivated to be honest with researchers by the same forces that influence conventional
others.
Aside from honesty, other factors that may affect data quality include memory
and recall problems (Bradburn, Rips, and Shevell 1987), the type of methods used by
researchers (Belli, Shay, and Stafford 2001), background characteristics of respondents
(Hindelang, Hirschi, and Weis 1981) and, in some instances, of interviewers (Northrup
1997), and features of the interview instrument (Weis 1986). Researchers typically
assess reliability and validity to determine the extent to which these factors affect the
quality of their data (Gottfredson 1971).
Reliability
It is impossible for social scientists to develop methods and measures that
perfectly capture the processes they study, which makes the presence of random error
inevitable when doing research (Carmines and Zeller 1979; Litwin 1995). However,
random error can be assessed and minimized. A basic approach to dealing with random
error is to improve reliability.
Reliability can be defined as “the extent to which a measuring instrument
produces consistent results” (Kinnear and Gray 2006: 548). Researchers also commonly
assess the reliability of data (Litwin 1995). Reliability focuses mostly on testing and
measurement, making it an empirical consideration (Carmines and Zeller 1979). Given
that completely eliminating random error is impossible, referring to instruments or data
as being “more reliable” or “less reliable” rather than “reliable” or “not reliable” is the
preferred practice (Carmines and Zeller 1979; Thornberry and Krohn 2000).
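In classical test theory terms, this convention can be stated compactly; the following
formulation is a standard textbook sketch rather than one drawn from the sources cited
above:

    X = T + E,        \rho_{XX'} = \sigma_T^2 / \sigma_X^2 = \sigma_T^2 / (\sigma_T^2 + \sigma_E^2)

Here X is an observed score, T is the true score, E is random error, and \rho_{XX'}
denotes reliability. Because the error variance \sigma_E^2 can be reduced but never
eliminated, \rho_{XX'} approaches but never reaches one, which is why speaking of
degrees of reliability is the appropriate language.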
There are four main ways to assess reliability, all of which focus on consistency.
The Split Halves and Internal Consistency methods, which are the first two strategies
examined in this section, are typically utilized when tests have only been conducted at
one point in time. The Alternative Form and Test-Retest methods are options for
researchers who are able to survey the same set of respondents on more than one
occasion.
Split Halves Method
Researchers who do not have the opportunity to test respondents more than once
need to devise alternative strategies for assessing the consistency of their instruments and
measures. One way in which this has been done is through the use of the split halves
method. Researchers who adopt this approach typically separate their measures into two
comparable categories and then see how highly correlated they are (Carmines and Zeller
1979; Huizinga and Elliott 1986). A fundamental problem with the split halves method is
that items on a questionnaire can potentially be split in multiple ways that would each
produce a unique correlation (Carmines and Zeller 1979). Another shortcoming is that,
depending on the topic studied, there may be little rationale for expecting halves to be
correlated to begin with. For instance, in a crime survey it would be erroneous to
presuppose that a set of questions pertaining to various forms of offending, such as arson
and drug dealing, would be highly correlated with another set of measures that focus on
other crimes, such as bank robbery and rape (Huizinga and Elliott 1986; Thornberry and
Krohn 2000). Accordingly, the split halves method is not typically used in
criminological research (Huizinga and Elliott 1986).
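For readers unfamiliar with the computation itself, the following minimal sketch
illustrates the split halves logic in Python; the respondent data, the odd/even split,
and the use of the standard Spearman-Brown correction are illustrative assumptions
rather than details drawn from the studies cited above.

    import numpy as np

    # Hypothetical data: rows are respondents, columns are questionnaire items
    # (e.g., counts of six types of self-reported offenses).
    items = np.array([[2, 0, 1, 3, 0, 1],
                      [0, 1, 0, 0, 2, 0],
                      [4, 2, 3, 5, 1, 2],
                      [1, 0, 0, 1, 0, 1]])

    # Split the items into two halves; this odd/even split is only one of many
    # possible splits, which is itself the weakness noted above.
    half_a = items[:, ::2].sum(axis=1)
    half_b = items[:, 1::2].sum(axis=1)

    # Correlate the two half scores, then apply the Spearman-Brown correction
    # to estimate the reliability of the full-length instrument.
    r_half = np.corrcoef(half_a, half_b)[0, 1]
    r_full = (2 * r_half) / (1 + r_half)
    print(round(r_half, 2), round(r_full, 2))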
Internal Consistency Method
The internal consistency method examines “groups of items that are thought to
measure different aspects of the same concept” (Litwin 1995: 21). Researchers who use
this approach develop scales and then examine how the indicators correlate with one
another (Carmines and Zeller 1979; Litwin 1995). For instance, criminologists might ask
a respondent about his or her involvement in a number of antisocial behaviors to measure
the concept of psychopathology. Internal consistency reliability is assessed with the
Cronbach’s alpha statistic (Litwin 1995). Alpha values increase as more indicators are
added to scales (Carmines and Zeller 1979; Litwin 1995). However, each addition has a
smaller marginal effect as scales expand (Carmines and Zeller 1979).
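As a concrete illustration, the following minimal sketch computes Cronbach's alpha for
a hypothetical antisocial behavior scale; the indicators and values are invented for
demonstration purposes.

    import numpy as np

    # Hypothetical data: rows are respondents, columns are indicators of one
    # concept (e.g., frequencies of four antisocial behaviors).
    scale = np.array([[3, 2, 3, 4],
                      [1, 1, 0, 1],
                      [4, 3, 4, 5],
                      [2, 2, 1, 2],
                      [0, 1, 0, 0]], dtype=float)

    k = scale.shape[1]                         # number of indicators
    item_vars = scale.var(axis=0, ddof=1)      # variance of each indicator
    total_var = scale.sum(axis=1).var(ddof=1)  # variance of the summed scale

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(round(alpha, 2))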
The internal consistency method of testing reliability is not a recommended
strategy in criminology due to its reliance on the assumption that individuals who commit
one type of offense will also engage in other forms of offending (Huizinga and Elliott
1986; Thornberry and Krohn 2000). Moreover, it is tenuous to conclude that patterns of
offending for multiple types of crime are comparable (Huizinga and Elliott 1986).
Focusing on an instrument’s internal consistency to assess reliability is therefore limiting
when evaluating self-reports in criminological research.
Alternative Form Method
Researchers who are able to conduct repeated tests with the same group of
respondents sometimes rely on the alternative form, or parallel, method (DeCoster and
Claypool 2004: 46) to assess reliability. This method entails administering different
versions of an instrument during each contact with respondents (Carmines and Zeller
1979). For instance, in a second interview the researcher might alter the wording,
ordering, content, or structure of questionnaire items (Litwin 1995).
A necessary assumption of the alternative form method is that each version of the
instrument ultimately captures the same processes (Carmines and Zeller 1979). The
alternative form method is appealing because it may reduce the influence of conditioning
and testing effects on second interviews (Litwin 1995). However, a fundamental
limitation of the alternative form method is that it is difficult to develop two versions of
an instrument that measure the same processes comparably (Carmines and Zeller 1979).
Moreover, creating two instruments is likely to be time-consuming and expensive.
Test-Retest
The test-retest method is similar to the alternative form method in that it involves
two different contacts with respondents. However, unlike the alternative form method
the same version of the instrument is administered to respondents (Carmines and Zeller
1979; Singleton and Straits 1999: 117; Thornberry and Krohn 2000). After both
interviews are completed, researchers compute correlation coefficients to assess the extent to which
the instrument and measures produced consistent results (Huizinga and Elliott 1986;
Litwin 1995; Thornberry and Krohn 2000). Correlations above 0.7 are typically
indicative of decent levels of reliability when using the test-retest method in criminology
(Thornberry and Krohn 2000).
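A minimal sketch of this computation appears below, using hypothetical test and retest
responses; the 0.7 benchmark follows the convention just cited.

    import numpy as np

    # Hypothetical responses to the same item at the test and the retest
    # (e.g., self-reported months employed during the reference period).
    test = np.array([12, 3, 18, 0, 9, 15])
    retest = np.array([11, 4, 18, 1, 8, 14])

    # The test-retest reliability coefficient is the correlation between the
    # two sets of responses.
    r = np.corrcoef(test, retest)[0, 1]
    print(round(r, 2), "acceptable" if r > 0.7 else "questionable")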
The test-retest method is the most commonly used reliability test in the social
sciences (Litwin 1995), and it is widely considered to be the best strategy for assessing
reliability in criminological research (Huizinga and Elliott 1986; Thornberry and Krohn
2000). For instance, the test-retest method is preferable to the split halves and internal
consistency methods because reliability coefficients are not affected by the lack of
correspondence across different forms and patterns of offending (Huizinga and Elliott
1986).
There are three potential shortcomings of the test-retest method of reliability
assessment. First, it can be costly and difficult to contact the same respondents more than
once (Carmines and Zeller 1979). Second, items yielding consistent responses over two
points in time may reflect respondents’ familiarity with the testing process or their ability
to remember answers given in first interviews rather than instrument reliability (Litwin
1995). Third, it is possible that the variables themselves may change between interviews (Carmines and Zeller 1979). For
instance, an offender might believe that being in a gang is a good idea when contacted the
first time. However, his or her opinion about gang membership may change during the
subsequent period leading up to the second interview.
It is crucial that researchers leave enough time between contacts with respondents
to minimize conditioning effects, while simultaneously avoiding situations in which
variables may change (Carmines and Zeller 1979; Junger-Tas and Marshall 1999: 353;
Litwin 1995). Thornberry and Krohn (2000) advise researchers to use intervals of one to
four weeks when developing test-retest designs in criminological research.
Test-Retest Reliability of Traditional Self-Reports in Criminology
Assessments of the test-retest reliability of self-reports in criminology are limited
and have typically not been conducted with prisoner samples. However, there are two
notable exceptions to this general pattern. First, researchers studying hardcore
delinquents administered test and retest surveys to 50 incarcerated juveniles using a
seven-day interval between interviews (DeFrancesco, Armstrong, and Russolillo 1996).
Findings indicated that the respondents’ accounts were extremely reliable, though it
should be noted that these data were collected from juveniles rather than adults.
Accordingly, findings from a sample of serious youthful offenders may not generalize to
less serious adult offender populations.
A second application of the test-retest method with incarcerated offenders was
conducted with 252 adult prisoners from California, Michigan, and Texas (Marquis
1981). The interval between interviews ranged from seven to ten days, and test-retest
correlations were compared to the correspondence between first interviews and official
records. Test-retest reliability was high for responses about prior convictions but
uncertain for previous arrests, based on comparisons to record checks. Marquis
hypothesized that the repetition of error from the first interviews may have affected the
second contacts. However, he also lamented that he did not have the means to determine
whether these findings reflected duplications of error in second interviews or inaccuracies
in the records he used for comparisons.
Criminologists interested in the test-retest method of reliability assessment can
benefit from examining its applications in substance abuse research. For instance,
researchers studying adult narcotics addicts used a ten-year interval between test and
retest interviews and found that for the most part respondents provided highly reliable
data (Anglin, Hser, and Chou 1993). The main items that had lower reliability were those
measuring activities that were more socially unacceptable and those related to events that
happened infrequently. Reliability also went down as more time passed from the initial
interview. Wage information for illegal activities was less reliable, but estimated
earnings from legal work were consistent in both interviews. The researchers concluded
that good data can be gathered from narcotics addicts as long as they are notified about
the study ahead of time and are assured that their identities will be protected.
Test-Retest Reliability of the Life-Events Calendar Method
A survey of the literature produced five studies that have assessed the test-retest
reliability of the life-events calendar method. The authors of all five of these studies
concluded that data collected with life-events calendars have decent reliability. For
instance, the first test-retest assessment of the reliability of life-events calendar data used
a five-year interval and found evidence of high reliability for indicators measuring the
marital, childbirth, school, and work histories of young adults (Freedman 1988). Other
researchers examined alcohol use in a general sample and found that respondents
provided consistent information in test-retest interviews that were conducted using an
interval ranging from three to four weeks (Sobell et al. 1988). Another study found that
domestic violence victims provided consistent accounts of past experiences with threats,
sexual abuse, and physical abuse (Yoshihama et al. 2002).
Drug abuse researchers employed the life-events calendar method to examine 27
heroin addicts over a 24-month timeframe (Day et al. 2004). The researchers used
test-retest interviews with a seven-day interval and found that reliability decreased for items
about events that were further in the past and that reliability varied by the types of drugs
respondents abused. In particular, reliability was higher for indicators of marijuana and
heroin use, which were more frequently abused, than for cocaine, which was used less
frequently.
Finally, a test-retest interval of approximately one year was used in a study of
migrant agricultural laborers that inquired into life course events that occurred over a
median period of 28 years (Engel, Keifer, Thompson et al. 2001). The researchers found
that test-retest reliability was high for indicators related to the various categories of farm
work engaged in, geographic regions of employment, and activities that respondents
performed most often. However, reliability was lower for events that were more distant
from the time respondents were surveyed.
Taken together, the findings from these divergent studies provide evidence that
the life-events calendar method can be used to gather good data. However, the test-retest
reliability of life-events calendar data collected from offenders has not yet been assessed.
Given that criminologists have recently incorporated the life-events calendar method into
their research designs (see Horney, Osgood, and Marshall 1995; Kruttschnitt and
Carbone-Lopez 2006; Laub and Sampson 2003; Lewis and Mhlanga 2001; MacKenzie
and Li 2002), there is a need for reliability tests of life-events calendar data collected
from offender samples.
Concluding Points: Reliability
Reliability refers to the consistency of research items (Kinnear and Gray 2006).
Although there are multiple ways to assess reliability, the test-retest method is the
recommended strategy for criminological research (Huizinga and Elliott 1986;
Thornberry and Krohn 2000). Prior studies suggest that self-reports provided by
offenders have good reliability (Anglin, Hser, and Chou 1993; DeFrancesco, Armstrong,
and Russolillo 1996; Marquis 1981). However, researchers have not assessed the
test-retest reliability of offender data collected with life-events calendars. Previous research
has found that test-retest reliability is affected by the frequency with which events occurred,
the social stigma attached to variables of interest, and the length of time elapsed after first
interviews (Anglin, Hser, and Chou 1993). Testing effects may also affect reliability
when researchers use the same instrument more than once (Litwin 1995). The ideal
interval between first and second interviews when evaluating reliability with the
test-retest method is one to four weeks (Thornberry and Krohn 2000). Although reliability
tests provide important information about the consistency of measures and data, the
presence of high reliability does not automatically ensure that there is high validity
(Litwin 1995).
Validity
Kinnear and Gray (2006: 550) point out that “a test is said to be valid if it
measures what it is supposed to measure.” Validity is affected by nonrandom error that
occurs when research instruments capture processes other than the ones they set out to
study (Carmines and Zeller 1979). For instance, a series of indicators designed to
examine attitudes will have low validity if it measures moods instead (Litwin 1995).
As opposed to being a property of the research instrument, validity reflects the
association between measurement items and the processes being studied (Carmines and
Zeller 1979). Similar to reliability, it is standard practice to describe these relationships
as “more valid” or “less valid” rather than “valid” or “not valid” (Carmines and Zeller
1979; Thornberry and Krohn 2000). The following sections examine Content Validity,
Construct Validity, and Criterion Validity, which are regarded as the three main types of
validity in social science research.
Content Validity
Content validity essentially refers to a researcher’s personal opinion of how well
measures correspond with what they are meant to assess (Huizinga and Elliott 1986;
Litwin 1995; Thornberry and Krohn 2000). Face validity is sometimes regarded as a
form of content validity (Huizinga and Elliott 1986). However, while content validity
reflects the evaluation of one who is regarded as being knowledgeable about the topics
under study, face validity is based on the assessment of laypersons (Litwin 1995).
Carmines and Zeller (1979) suggest that the lack of consensus among social scientists on
basic constructs poses a challenge to content validity. Moreover, given that content
validity is based on a subjective assessment it does not lend itself to statistical analyses
(Litwin 1995). For these reasons content validity provides more of a starting point than
a benchmark for assessing validity.
Construct Validity
Construct validity represents the effectiveness of measurement items when they
are applied in research (Litwin 1995) and how well indicators serve as proxies for the
theoretical constructs they have been designed to measure (Carmines and Zeller 1979).
Determining construct validity involves examining the extent to which a measure’s
performance is similar to that of comparable items used in the discipline (Huizinga and
Elliott 1986; Thornberry and Krohn 2000). The assessment of construct validity is
therefore an ongoing and cumulative process that occurs across multiple studies and
applications, typically over a period of several years (Litwin 1995). Although construct
validity can potentially provide considerable support for the validity of crime indicators
(Thornberry and Krohn 2000), it is rarely used in criminology due to the inherent
difficulty of assessing theoretical relationships and the validity of measurement items at
the same time in one study (Huizinga and Elliott 1986).
Criterion Validity
It is recommended that criminologists focus on criterion validity rather than
construct validity (Junger-Tas and Marshall 1999). There are two forms of criterion
validity: predictive validity and concurrent validity (Litwin 1995). Predictive validity
measures an indicator’s association with future outcomes (Carmines and Zeller 1979).
For instance, criminologists could examine whether an indicator such as prior offending
forecasts offending at later points in time (Junger-Tas and Marshall 1999; Marquis 1981).
Predictive validity is regarded as a convincing measure of quality in criminological
research (Junger-Tas and Marshall 1999).
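As an illustration, the following minimal sketch gauges predictive validity by
correlating a hypothetical prior-offending indicator with a later re-arrest outcome; the
point-biserial correlation used here is one of several defensible statistics, and all
values are invented.

    import numpy as np

    # Hypothetical indicator measured at time 1 (self-reported prior arrests)
    # and an outcome observed later (re-arrest within three years: 1 = yes).
    prior_arrests = np.array([0, 1, 4, 2, 6, 0, 3, 5])
    rearrested = np.array([0, 0, 1, 1, 1, 0, 0, 1])

    # Pearson's r with a binary outcome (the point-biserial correlation)
    # summarizes how well the indicator forecasts the future outcome.
    r = np.corrcoef(prior_arrests, rearrested)[0, 1]
    print(round(r, 2))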
Besides prediction, criterion validity is also associated with the consistency of
findings across multiple sources that use the same reference period (Huizinga and Elliott
1986; Weis 1986). This form of criterion validity is known as concurrent validity
(Carmines and Zeller 1979), and it entails assessing how well self-reported data match
up with findings from a different source (Northrup 1997). For instance, criminologists
have traditionally compared self-reports to police records to determine the validity of
offenders’ responses (Thornberry and Krohn 2000). Scholars have also compared
offenders’ self-reports to information provided by third person informants (see Schubert
et al. 2005). A more recent trend in criterion validity testing is to compare respondent
accounts of substance use with hair-test (see Knight et al. 1998) and urinalysis results (see
Golub et al. 2002; Hser, Maglione, and Boyle 1999; Webb, Katz, and Decker 2006;
Yacoubian 2001). The emphasis on correlations between two different sources makes tests
of criterion validity amenable to statistical analysis, which has made this form of validity
popular among researchers (Huizinga and Elliott 1986; Litwin 1995).
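To make the concurrent validity logic concrete, the following minimal sketch compares
hypothetical self-reports of arrest to official records covering the same reference
period; percent agreement and the phi coefficient are two common summaries, and all
data are invented.

    import numpy as np

    # Hypothetical data: 1 = an arrest during the reference period, 0 = none.
    self_report = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    official_record = np.array([1, 0, 1, 0, 0, 1, 1, 0])

    # Percent agreement and the phi coefficient (Pearson's r for two binary
    # variables) summarize the correspondence between the two sources.
    agreement = (self_report == official_record).mean()
    phi = np.corrcoef(self_report, official_record)[0, 1]
    print(round(agreement, 2), round(phi, 2))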
One challenge to assessing concurrent validity in criminological research is that
an ideal, comprehensive source for information about offending does not exist
(Thornberry and Krohn 2000). Hindelang, Hirschi, and Weis (1981) endorse measuring
self-reports against official records because these two data sources are independent from
each other. However, previous researchers have observed that official data often feature
inaccuracies stemming from human involvement in the collection process (see Chaiken
and Chaiken 1982; Huizinga and Elliott 1986; Junger-Tas and Marshall 1999: 350).
Another shortcoming of relying on official records is that they only contain offenses that
authorities know about, thus excluding all crimes that offenders got away with (Huizinga
and Elliott 1986). Considering that less than half of all violent offenses and
approximately one-third of all property crimes are reported to police (Miethe, McCorkle,
and Listwan 2006: 8), official records are an incomplete basis for comparison when
examining concurrent validity.
Criterion Validity Tests of Traditional Self-Reports in Criminology
An unresolved issue in criminology is whether there are racial differences in the
validity of offender self-reports. Studies on this topic have focused primarily on juvenile
respondents (Weis 1986). For instance, African-American youths have been found to
provide less valid self-reports when compared to whites (Hindelang, Hirschi, and Weis
1981; Farrington et al. 1996; Fendrich and Vaughn 1994; Mensch and Kandel 1988).
However, other research indicates that the validity of self-reports from African-American
juveniles is high and comparable to the validity of accounts given by white respondents
(Jolliffe et al. 2003). Whether there are racial differences in the validity of self-reports is a
contentious subject in criminology (Weis 1986). Thornberry and Krohn (2000: 58)
believe “this is perhaps the most important methodological issue concerning the
self-report method and should be a high priority for future research efforts.”
A study of adult prisoners that examined the self-report method found that
self-reports from incarcerated offenders tend to have high validity (Marquis 1981). However,
Marquis found that self-reports and official records each contain errors. Surprisingly, he
also discovered cases where respondents reported arrests that did not show up in their
official records, an outcome that he referred to as “positive bias.” Maxfield, Weiler, and
Widom (2000) recently surveyed 1,196 young adults and also found examples of positive
bias for twenty percent of their respondents, suggesting that the concept of positive bias
is an important contribution to the literature that warrants further attention and study.
A growing trend in criterion validity testing involves matching self-reported
information with the results of urinalysis tests. Evidence indicating that inmates provide
valid self-reports comes from a recent study that incorporated urine tests to compare
responses on drug use items provided by jail inmates, emergency room patients, and
individuals with sexually transmitted diseases (Hser, Maglione, and Boyle 1999). The
researchers found that the jail inmates gave the most valid responses of the three groups
examined. Another study of drug abuse combined urinalyses with test-retest interviews
(Anglin, Hser, and Chou 1993). The validity of self-reports provided in first interviews
was low. However, validity was high in the second interviews. The researchers believed
that the second interviews were more valid because respondents became more trusting of
the interviewers and the study’s legitimacy after the first contact.
Much of the criterion validity research involving urinalysis and self-reports
extends from the Arrestee Drug Abuse Monitoring (ADAM) program. Supported by the
National Institute of Justice and administered through local law enforcement agencies,
ADAM collected voluntary surveys and urine tests from samples of arrested individuals
to examine patterns of substance use (Thornberry and Krohn 2000). Data from ADAM
program samples indicate that self-reports from arrestees have high validity for items
related to marijuana use (Golub et al. 2002) and drug use more generally (Johnson et al.
2002), and that gang membership does not affect the validity of self-reported drug use
(Webb, Katz, and Decker 2006).
Validity is high for most indicators related to criminal backgrounds in studies that
analyzed ADAM data (Golub et al. 2002; Johnson et al. 2002). Minor offending
information tends to have higher validity (Golub et al. 2002), while self-reports of serious
and index offenses have lower validity (Golub et al. 2002; Johnson et al. 2002). Golub
and colleagues (2002) found that the best predictor of high validity in self-reports was the
disclosure of having previous arrests. In general, findings from studies using ADAM
samples indicate that self-reports provide data that have high validity. However, these
data only include respondents who were arrested and comfortable discussing their
offending while being processed by the police, thus limiting the generalizability of these
findings (Thornberry and Krohn 2000).
Criterion Validity Tests of the Life-Events Calendar Method
Previous research suggests that the life-events calendar method can be used to
collect data with good validity, though it should be noted that validity tests of the
life-events calendar method are limited (Caspi et al. 1996). Moreover, of the seven studies of
the validity of life-events calendar data that emerged when reviewing the literature, none
have been based on a sample that is representative of general offender or prisoner
populations.
Like traditional self-reports, validity tests of the life-events calendar method have
focused on concurrent validity. The most common strategy employed by researchers thus
far has been to compare life-events calendar data to information collected with traditional
questionnaires. The five studies that have adopted this approach all indicate that
life-events calendar data have high validity. For instance, in their study of farm laborers
Engel, Keifer, and Zahm (2001: 511) observed that the life-events calendar method
collects background information that is “much more detailed and full” than traditional
surveys, thus providing a more thorough and dynamic portrayal of respondents’
backgrounds.
Another comparison of the life-events calendar method and traditional surveys
comes from Yoshihama and colleagues (2005), who interviewed comparable samples of
domestic violence survivors with each method. Respondents in the sample interviewed
with the life-events calendar method recalled more instances of abuse and did a better job
of remembering events that transpired in their distant pasts than those who were surveyed
with a traditional self-report instrument.
Other researchers have used respondents’ answers to traditional surveys that were
previously administered in longitudinal studies as comparison criteria. For instance,
Caspi and colleagues (1996) found high validity for life-events calendar data about
socio-demographic variables that were compared to traditional survey data collected from the
same respondents three years earlier. Both sets of data corresponded with the same
reference period, suggesting that the life-events calendar method effectively facilitated
recall. However, a potential shortcoming of comparing self-reports to other self-reported
information is that the data serving as the standard of comparison may be inaccurate to
begin with.
In an extension of this design, Belli, Shay, and Stafford (2001) compared
respondents’ self-reports about socio-demographic items to data they provided in a
survey the year before that corresponded with the same reference period. Prior to
conducting second interviews, the respondents were divided into two experimental
groups. Interviewers then administered life-events calendar surveys to the first set of
respondents and traditional surveys to the second. When comparing the findings of these
surveys to the information provided by the respondents the year before, validity was
much higher among the group that was interviewed with the life-events calendar method.
A final comparison of life-events calendar data to panel data for the same reference
period comes from a study of distress (Lin, Ensel, and Lai 2006). The researchers
concluded that the life-events calendar method holds promise for improving data
collection in the social sciences, though they found that using the calendar method led to
more under- than over-reporting.
Other research also suggests that respondents may underreport in life-events
calendar interviews. Roberts and colleagues (2005) conducted weekly interviews with
psychiatric emergency room patients over a 1-3 year timeframe to gather data on violent
behavior. A life-events calendar survey was then subsequently administered to the same
respondents to collect retrospective accounts of their violence during the months of the
weekly interviews. When comparing these data sources the researchers found that
respondents underreported violence in their calendar interviews. Given that psychiatric
patients perpetrate a small fraction of the violent crimes in the United States (Fox and
Levin 2005) and that most offenses are not violent to begin with (Miethe, McCorkle, and
Listwan 2006), this interesting finding needs to be tested with samples that are more
representative of broader offending populations.
Comparing Self-Reported Information to Independent Sources
Future studies should also compare life-events calendar data to independent
sources such as official inmate records. As Roberts and her colleagues’ study of
psychiatric patients demonstrates, research on the validity of the life-events calendar
method has typically been limited to comparing life-events calendar data to other
self-reported accounts (Caspi et al. 1996). The lone exception is a study based on the ADAM
program. Yacoubian (2003) compared accounts of drug use for a 30-day timeframe in
2000 that were collected with a life-events calendar to accounts of drug use for a 30-day
timeframe in 1999 collected with a traditional survey instrument. The respondents were
different in each sample, and Yacoubian used urinalysis tests as a standard of comparison
to assess the validity of self-reported information. Findings indicated that validity did not
noticeably improve with the use of the life-events calendar method. Like other ADAM
studies, these data come from arrestees who volunteered to be interviewed while being
processed by the police (Thornberry and Krohn 2000). Accordingly, future validity tests
of the life-events calendar method need to be conducted with samples of respondents who
are interviewed under more ideal circumstances and who are more representative of
criminally involved populations.
Concluding Points: Validity
Validity relates to the extent to which an indicator measures what it is designed to
represent (Kinnear and Gray 2006). Of the three common forms of validity in the social
sciences, criterion validity is recommended as the best validity measure for
criminological research (Junger-Tas and Marshall 1999). There are two forms of
criterion validity: predictive validity and concurrent validity (Litwin 1995).
Criminologists have typically examined concurrent rather than predictive validity, with
comparisons between self-reports and official records being the most commonly used
strategy (Thornberry and Krohn 2000).
Research suggests that offenders generally provide valid self-reported data
(Chaiken and Chaiken 1982; Hindelang, Hirschi, and Weis 1981; Marquis 1981).
However, studies on whether there are racial differences in reporting behavior have
produced inconclusive findings (Thornberry and Krohn 2000; Weis 1986). Moreover,
validity tests of the life-events calendar method are limited and need to be conducted with
representative samples of prisoners. It was previously noted that high reliability does not
guarantee high validity (Litwin 1995). However, if there is high validity there will be
high reliability because indicators that effectively measure what they are designed to
represent should produce comparable results each time they are used (Thornberry and
Krohn 2000).
CHAPTER 4
ASSESSING LIFE-EVENTS CALENDAR DATA: TOWARD TESTABLE
HYPOTHESES
This dissertation’s analyses are mostly exploratory given that prior reliability and
validity tests of life-events calendar data have not been conducted using representative
offender or prisoner samples. Test-retest reliability is therefore examined for all
life-events calendar questions administered in both test and retest interviews, and criterion
validity is assessed for all items that were represented in both survey and official data.
However, a handful of testable research hypotheses have been derived from the preceding
literature review to determine whether findings from other reliability and validity tests are
supported by this study. The six hypotheses presented in this chapter pertain to life
events, substance use, justice system involvement, criminal activity, and race.
Life Events
Prior research has found that offenders provide reliable self-reports of income
from legal sources, yet they underreport illegal earnings (Anglin, Hser, and Chou 1993).
Accordingly:
H1:
Self-reports of prisoners’ legal income will be more reliable than self-reports
of their illegal income.
Substance Use
The quality of retrospective data about substance use varies by type and frequency
of drug use. Respondents tend to do a good job of reporting the use of marijuana
(Fendrich and Vaughn 1994; Golub et al. 2002). However, reports of cocaine (Fendrich
and Vaughn 1994; Golub et al. 2002) and heroin (Golub et al. 2002) use have been found
to be less reliable and valid.
Accordingly:
H2:
Self-reports of marijuana use will be more reliable than self-reports of
other types of drug use.
Justice System Involvement
An interesting finding from prior research is that respondents sometimes report
arrests that do not show up in their official records (Marquis 1981; Maxfield, Weiler, and
Widom 2000). This phenomenon is known as “positive bias” (Marquis 1981). It is
possible that respondents confuse being temporarily detained with being arrested.
Moreover, positive bias may occur when official records are incomplete.
Accordingly:
H3:
Positive bias will be present in prisoners’ self-reports.
Criminal Activity
Individuals who engage in certain types of offenses may provide poorer data in
self-reports when compared to other types of offenders. For instance, drug dealers have
been found to be less accurate respondents (Weis 1986: 28), while burglars seem to be
among the most accurate (Chaiken and Chaiken 1982).
Accordingly:
H4:
Drug offenders’ self-reports of offending will be less reliable than other
offenders’ self-reports of offending.
H5:
Property offenders’ self-reports of offending will be more reliable than
other offenders’ self-reports of offending.
Race, Reliability, and Validity
Prior research indicates that the reliability and validity of offenders’ self-reports
may differ by race. For instance, a number of studies have found that African-Americans
underreport involvement in offending when compared to white respondents (Hindelang,
Hirschi, and Weis 1981; Fendrich and Vaughn 1994; Mensch and Kandel 1988).
However, other research suggests that reporting behavior of African-Americans does not
differ from that of other racial and ethnic groups (Jolliffe et al. 2003; Webb, Katz, and
Decker 2006). The relationships between race, reliability, and validity are inconclusive
(Thornberry and Krohn 2000: 58).
Accordingly:
H6:
Self-reports of life events, substance use, justice system involvement, and
criminal activity provided by African American prisoners will be less
reliable and less valid than those provided by Caucasian prisoners.
CHAPTER 5
METHODS AND DATA
The current study employs data from a broader project designed to examine the
life course experiences of offenders. This project is composed of test and retest
face-to-face interviews with prisoners, analyses of official Ohio Department of Rehabilitation
and Correction inmate records, and geo-coded neighborhood data. The test and retest
interviews employed life-events calendars and comprise the primary data source, with
information from official records serving as the criterion for testing validity. The
geo-coded data were incorporated into the project for future analyses of prisoner recidivism.
Given the current study’s focus on life-events calendar data, the geo-coded data are not
described in this chapter, nor are they analyzed in this dissertation.
Project Origins
Studies of offenders are often rooted in fortuitous circumstances (see Adler 1993;
Bourgois 2003; Sampson and Laub 1993). The broader project from which the current
study extends is no exception. Two important developments provided the impetus for
this project.
First, I invited Paul Bellair, Professor of Sociology, and Brian Kowalski, a fellow
PhD candidate in Sociology, to join my undergraduate Penology class on a tour of an
Ohio prison in January 2003. The three of us recognized a shared interest in prisons
while on this tour, which led to subsequent conversations about corrections and
incarcerated offenders. A recurring theme of this dialogue was the feasibility of
conducting our own research with prisoners.
The second factor that shaped this project’s inception was the Ohio Department of
Rehabilitation and Correction’s (ODRC) ongoing commitment to research. In recent
years the ODRC has established a partnership with the Ohio State University to facilitate
research on prisoner reentry and recidivism. Outcomes of this partnership include
co-sponsored seminars, shared office space, and the development of the Institute for
Excellence in Justice (IEJ) in 2006. The IEJ promotes research collaboration by bringing
members of the ODRC’s Institute on Correctional Best Practices together with affiliates
of Ohio State University’s Criminal Justice Research Center.
The ODRC annually sponsors the Ohio Criminal Justice Research Conference,
and it has been supportive of academics who aspire to study Ohio’s inmates and
programs. For instance, Dr. Reginald Wilkinson, who served as the Director of the Ohio
Department of Rehabilitation and Correction from 1991 to 2006, gave a talk entitled “The
Utility of Research for the Practice of Corrections” on the Ohio State University campus
in February 2003. Dr. Wilkinson encouraged the submission of research proposals,
described the process for gaining approval to do research with inmates, and identified
research topics that were of interest to the ODRC during this presentation.
The challenges of initiating prison research can be reduced when a study’s aims
fit into a correctional agency’s broader objectives (Hart 1995). Dr. Wilkinson’s
presentation coincided with our discussions about doing research with inmates, and our
reasons for wanting to study prisoners matched up well with the ODRC’s emphasis on
recidivism. These developments converged to bring this project to fruition.
Research Team
Professor Bellair and I have guided the development and administration of this
project in our respective roles as Principal Investigator and Project Manager. Several
other individuals with ties to the Department of Sociology at Ohio State University have
also been integral to the project’s implementation. For instance, Brian Kowalski was one
of the project’s architects during its formative stages. Professor Bellair, Brian, and I
completed the applications for research approval, worked on the survey instruments, and
developed the interviewing procedures.
Brian was actively involved in the project until the middle of the first wave of
interviewing in summer 2005. He then left the project to assume a research analyst
position with the Ohio Department of Rehabilitation and Correction. Our research was
independent from the ODRC. Moreover, we assured our respondents that the prison
system would not have access to their responses. Accordingly, Brian ended his work on
the project to avoid any potential conflicts of interest.
There were four other members of the research team who were active during the
initial stages of the project. Donald Hutcherson, a sociology graduate student, and Shawn
Vest, an undergraduate criminology student, were involved with pre-testing the survey
instruments and refining the interviewing procedures. Donald and Shawn also conducted
interviews during the first wave of interviewing. Ryan Light, another sociology graduate
student, was the project’s computer programmer and worked with Paul, Brian, and me on
the creation of the survey instruments. Colin Odden, a staff member in the Sociology
Research Lab, provided additional technical support.
Brianne Hillmer, a master’s student in sociology, became involved in the project
in summer 2005. Brianne performed several tasks including conducting interviews in
prisons, developing an instrument for follow-up interviews, and pilot testing the
follow-up interview questionnaire. Brianne also geo-coded the neighborhood data and collected
the criterion validity data from official ODRC records.
Five other students were involved with interviewing prisoners. Rachael Gossett, a
graduate student in sociology, joined the research team in fall 2005. Rachael was also a
member of the Sociology Department’s Graduate Research Practicum in winter and
spring of 2006. The project served as the basis for the practicum. Professor Bellair was
the instructor, and I served as the teaching assistant. Students gained firsthand
experience in the research process by conducting interviews with prisoners. In addition
to Rachael the members of the Practicum were sociology master’s degree students Anita
Parker and Grace Sherman, criminology undergraduate student Matt Valasik, and public
health graduate student Ross Kauffman.
Besides providing data for this dissertation the project has been the subject of
presentations delivered at the Ohio Criminal Justice Research Conference and the Annual
Meetings of the American Society of Criminology. Data from the project were also used
in master’s theses that were successfully defended by Rachael Gossett, Brianne Hillmer,
Anita Parker, and Grace Sherman.
The gestalt of the research team has been directly responsible for the broader
project’s success. Team members have divided labor within an interactive and
cooperative group context, and each person has contributed his or her individual
strengths. These arrangements have produced an impressive assortment of data and
studies. They have also resulted in engagement, excitement, fun, and growth for
members of the research team. I elaborate more on these dynamics and my own
experience with the project in this chapter and in Chapter 6.
Institutional Review Boards: Approval and Protocol
This project has been carried out in accordance with ethical standards established
by the scientific community. Each member of the research team completed the
Collaborative Institutional Training Initiative’s (CITI) online course on human subjects.
Proposals were also submitted to institutional review boards at the Ohio Department of
Rehabilitation and Correction and the Ohio State University. Although some minor
alterations needed to be made to our original protocol, which is typical when doing
research with prisoners (King 2000), each of these boards approved our research.
We had intended to compensate inmates for their participation. However, neither
institutional review board would allow us to do so. It is crucial that prisoners make
informed decisions that are free from “undue inducements” when they are recruited as
research subjects (Department of Health, Education, and Welfare 1978; Kiefer 2004;
Martin 2000; Overholser 1987). In the spirit of this provision Ohio has a law prohibiting
researchers from compensating incarcerated respondents.
Forms of remuneration we inquired about ranged from cash and gift certificates to
snacks from prison vending machines. The thought of being prohibited from giving
inmates something as insignificant as a candy bar in exchange for an interview seemed
excessive. However, items that are insignificant on the outside can be extremely
significant on the inside given the deprivations found in prison and the inflation of the
inmate economy (Santos 2004: 104). The ODRC feared that compensating respondents
could pose security risks if other prisoners who were not recruited felt they were being
discriminated against. This was a valid concern considering that inmates are acutely
sensitive to issues of fairness (Toch 1992: 133).
We also set out to interview male and female prisoners. However, we had to limit
our sample to incarcerated men because we were denied access to the Ohio Reformatory
for Women (ORW). Our request to conduct interviews at ORW was declined on the
grounds that the prison was already accommodating other research projects. It made
sense that this facility would be a popular research site given ORW is the only women’s
prison in Ohio. Moreover, we recognized that allowing researchers into institutions
disrupts operations and takes resources away from other institutional tasks (Hart 1995;
King 2000; Martin 2000). We therefore understood that only a finite number of studies
could be fielded in ORW at any given time.
However, it must be noted that when we sought our approvals the ODRC and
ORW faced a barrage of public criticism in response to controversial allegations that
male staff sexually abused female inmates (see Johnson 2004; McCarthy 2004; Stop
Prisoner Rape 2003). Decisions to allow researchers into prisons are made within
broader political contexts (King 2000). The possibility that fallout from the abuse
scandal may have influenced the decision to deny us access to ORW therefore cannot be
completely ruled out.
Members of the Institutional Review Board at Ohio State University included a
prisoner, which is recommended when agencies review proposals involving prison
research (Kiefer 2004). OSU’s board helped us improve our instrument and refine our
plans for protecting research participants and ourselves. For instance, the Board
informed us that respondents’ disclosures of plans for future offending, perpetration of
child abuse, and intent to engage in self-harm were not protected.
The Sample
Our sample was intentionally designed to be more representative of the current
prison population than those used in previous self-report and life-events calendar studies.
In the last two decades there have been dramatic increases in the numbers of offenders in
the United States who were imprisoned (Austin and Irwin 2001; Elsner 2006) and
ultimately released back into society (Petersilia 2003; Visher and Travis 2003). Ohio’s
patterns of incarceration and reentry have mirrored national trends (see La Vigne et al.
2003; Peters 2004; Williams, King, and Holcomb 2001). Unprecedented increases in
imprisonment and subsequent reentry have led to a pressing need for more research on
offenders’ lives before, during, and after their incarcerations (Visher and Travis 2003).
Nonviolent criminals became the majority of those imprisoned in the United
States in the late 1990s (Irwin, Schiraldi, and Ziedenberg 1999). During this time drug
offenders became the largest group of new inmates sent to prison in Ohio (Williams,
King, and Holcomb 2001). When comparing recidivism rates for lower-level offenders
and violent offenders the rates for lower-level offenders are higher (Langan and Levin
2002) and have been rising (Hughes and Wilson 2002). Accordingly, we elected to study
level one and level two prisoners in Ohio.
There are five security levels in Ohio’s inmate classification system, with levels
one (minimum) and two (medium) anchoring the least serious end of the continuum
(Peters 2004). Approximately 73% of Ohio’s prisoners were level one or level two
inmates while we were in the field (ODRC 2006). Drug trafficking and burglary are two
of the most common offenses perpetrated by level one and level two inmates. The
following paragraphs explain the rationale for limiting the sample to prisoners with level
one and level two statuses. Descriptive information about the sample is provided in
Chapter 7.
There were four advantages to studying level one and level two offenders. First,
these inmate populations comprise offenders that have been most affected by recent
trends in corrections. Second, level one and level two prisoners typically serve shorter
sentences than those with higher security classifications. Studying the recidivism of
inmates with lengthy sentences would require several years of waiting for them to be
released from prison, making inmates with shorter sentences more suitable for our
purposes.
A third reason for focusing on inmates with level one and two classifications was
that we believed they would pose less of a threat to our interviewers than higher security
inmates. Other researchers have pointed out the need to take safety into consideration
when researching prisoners (Kiefer 2004; Martin 2000). We concluded that interviewing
lower-level offenders in minimum-security institutions would reduce the potential risks
that come from interviewing in correctional facilities.
The fourth advantage of studying level one and level two offenders was that doing
so was practical. There are several minimum-security prisons located within a forty-five
minute drive from the Ohio State University campus. However, every institution in Ohio
with higher security populations is over an hour from Columbus and most are at least two
hours away. Location quickly emerged as a foremost concern when we considered the
hazards of driving in wintry weather, the cumulative time and money spent on driving to
research sites for several months, and wear and tear on graduate students’ moribund
vehicles.
Besides security level the sample was limited by gender, age, and time in prison.
We were only able to study incarcerated men because we were denied access to female
prisoners. The sample was also limited to inmates who were between the ages of 18 and
32. We chose to study this age group for two reasons. First, targeting this population
offered the best potential to conduct future longitudinal research on life-course events.
Second, studying younger adults increased the likelihood of getting a diverse offending
sample because large numbers of sporadic and chronic offenders are most likely to be
found in this age group (Moffitt 1997). We determined that as a group these inmates
would be less embedded in criminal careers and would thus comprise a sample with more
variance.
Individuals who had been in prison for less than one year at the time of
recruitment were exclusively solicited for participation in the project. Given that
memory declines with time (Belli, Shay, and Stafford 2001), we believed these inmates
would have better recollections of events that occurred prior to being locked up than
those who had been in prison longer. We also figured these inmates would be less
socialized into the prison culture and would thus make better subjects. In sum, the
sampling frame comprised level one and level two male inmates between the ages of 18
and 32 who had been in prison for less than one year at the time of recruitment.
The Interviewing Sites
The ODRC granted us access to the Madison Correctional Institution (MaCI), the
London Correctional Institution (LoCI), and the Southeastern Correctional Institution
(SCI). All three of these facilities are minimum-security men’s prisons. I had lined up
internships for a few of my students at MaCI and taken several classes on tours of the
institution before this project was conceived. We therefore launched the project in MaCI
first because I was familiar with the facility and staff.
The Madison and London Correctional Institutions are located across the street
from each other on State Route 56 in London, which is about 30 miles west of Columbus
in Madison County. The town of London is the county seat and had approximately 9,400
residents in 2005. Farms and fields mostly surround the town, though
immediately adjacent to the correctional institutions on State Route 56 are a number of
justice-related facilities, including a law enforcement officers’ memorial and a training
academy for the Ohio State Patrol.
The Madison Correctional Institution opened in 1987 and sits on 125 acres. There
are two compounds at MaCI, Zone A and Zone B. Level three inmates are housed in
Zone A. This compound contains a housing unit for all of the juveniles who have been
adjudicated and sentenced as adults in Ohio, and it also features the Sex Offender Risk
Reduction Center (SORC), which is a program that all newly sentenced sex offenders in
Ohio complete before being sent to their parent institutions.
We conducted interviews in Zone B, which houses level one and level two
inmates in single story dorms. Respondents told us that inmates at other Ohio institutions
perceive MaCI as a sex offender prison and stigmatize those who have served time there.
To the extent this is true it likely stems from the presence of SORC in Zone A. MaCI is
also known for its “cell dog” program, which incidentally trained a dog that one of our
interviewers adopted for a pet.
The London Correctional Institution opened in 1924 and features the classic
“telephone-pole design” in prison construction (see Bartollas 2002: 231). LoCI is spread
over 2,950 acres and has an average population of 2,150 inmates who live primarily in
dormitory style housing. A correctional camp is located next to the main prison grounds,
and many of the LoCI inmates are involved in agricultural work.
The Southeastern Correctional Institution (SCI) is in Fairfield County near the
city of Lancaster, which is approximately 35 miles southeast of Columbus. Lancaster is
the county seat and had a population of 36,063 in 2005. Fairfield County features
farmland, hill country, forests, and rivers. It is in a rural part of Ohio at the northern edge
of the Hocking Hills region, which is renowned for its state parks, outdoor recreational
activities, and fall foliage.
Although in mileage MaCI, LoCI, and SCI are similar distances from Columbus,
psychologically SCI seems much further away because of its topography. Driving to SCI
entails traveling on winding roads that traverse tightly through wooded hills. Speaking
from firsthand experience, drivers need to anticipate the possibility that deer and inmate
work crews lurk around blind bends in the road that leads to the prison.
SCI incarcerates approximately 1,600 inmates in dormitory style housing and sits
at the top of a hill on 1,377 acres. There is also a boot camp next to the main prison
compound. SCI opened as an adult prison in 1980. However, the facility itself dates
back to the 1800s. The institution was initially operated as an industrial school for boys,
and historic monuments and markers dot the property. Aside from the razor wire
perimeter, guard tower, and people in uniforms, the grounds look more like a small liberal
arts school than a prison.
LoCI, MaCI, and SCI were ideal sites for our research because they all had large
populations of level one and level two inmates. Moreover, they were relatively close to
Columbus, and I was able to take advantage of a pre-existing relationship with MaCI
when we initiated the project. These institutions were also appealing from a logistical
standpoint because they were well run, as evidenced by the fact that LoCI and MaCI
received favorable evaluations in recent inspections by Ohio’s legislative prison oversight
committee (see CIIC 2006a; CIIC 2006b).
Five waves of interviews were completed (see Table 5.1). We were in the field
from April 2005 to May 2006, excluding holidays, weekends, and breaks between Ohio
State University’s academic quarters.
Wave      Institution           Dates
Wave 1    Madison (MaCI)        4/05-6/05
Wave 2    London (LoCI)         7/05-10/05
Wave 3    Southeastern (SCI)    7/05-12/05
Wave 4    Madison (MaCI)        1/06-3/06
Wave 5    London (LoCI)         4/06-5/06

Table 5.1: Interviewing Waves
We refined our procedures in MaCI and then moved on to LoCI and SCI for the second
and third waves. Managing concurrent interviewing waves in two different institutions
required meticulous attention to organization and detail. We therefore chose to complete
the fourth and fifth waves consecutively rather than simultaneously.
Recruiting Research Respondents
We were concerned prisoners would be reluctant to volunteer an hour and fifteen
minutes of their time without receiving any form of compensation. Despite this
limitation, over 20% of the inmates who were recruited were interviewed (see Table 5.2).
Recruitment procedures were refined with each successive wave of interviewing, and
participation rates noticeably improved as our direct involvement in the recruitment
process increased.
Refusals included prisoners who declined to participate when recruited and
inmates who agreed to be in the study but were never interviewed. For instance, many
inmates who indicated they wanted to participate never showed up when they were issued
passes, including some who were issued passes multiple times. We had evidence suggesting
several of these inmates never received their passes, and we learned of other instances
where inmates tried to meet with us but were denied access to the buildings we
interviewed in. It was impossible to know the extent to which respondents were deterred
from participating by prison employees' (in)actions. Inmates who did not explicitly
agree to participate after the administration of the consent forms, including the handful
who said "no" and those who simply did not show up for their appointments, were
assumed to no longer be interested in participating and were counted as refusals. The
recruitment rates for this study were therefore calculated conservatively (the arithmetic
is sketched after Table 5.2).
Institution       Frame    Asked    Refused*    Sample    Recruitment Rate
Wave 1 – MaCI       662      186        154         32              17.20%
Wave 2 – LoCI       145      122         99         23              18.85%
Wave 3 – SCI        217      187        153         34              18.18%
Wave 4 – MaCI       491      120         91         29              24.17%
Wave 5 – LoCI       200       70         40         30              42.86%
Total              1715      685        537        148              21.61%

Table 5.2: Recruitment Rates
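Because every inmate asked who was not ultimately interviewed counts as a refusal, each recruitment rate in Table 5.2 reduces to the sample size divided by the number asked. The following Python sketch is purely illustrative (it was not part of the original study) and reproduces the table's figures:

waves = {
    "Wave 1 - MaCI": (186, 32),  # (asked, sample)
    "Wave 2 - LoCI": (122, 23),
    "Wave 3 - SCI": (187, 34),
    "Wave 4 - MaCI": (120, 29),
    "Wave 5 - LoCI": (70, 30),
}

for wave, (asked, sample) in waves.items():
    # Refusals are counted conservatively: everyone asked minus everyone interviewed.
    print(f"{wave}: refused = {asked - sample}, rate = {sample / asked:.2%}")

total_asked = sum(asked for asked, _ in waves.values())
total_sample = sum(sample for _, sample in waves.values())
print(f"Total: rate = {total_sample / total_asked:.2%}")  # 21.61%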
We developed a recruitment form for the first wave of interviews that the prison
distributed on our behalf (see Appendix A). Suggestions from prison administrators and
other researchers led us to adopt this approach. Inmates who were interested in
participating were asked to return their forms to housing unit staff. Unfortunately, this
strategy did not work well and only eight forms were initially returned. An administrator
who assisted us was puzzled by the low completion rate. She visited each housing unit
and found there were several other interested inmates besides the eight whose forms were
initially received.
There were three shortcomings to having prison staff distribute recruitment forms.
First, it required prisoners to be literate, which may have been problematic given the low
levels of education found among Ohio prisoners (Williams, King, and Holcomb 2001:
24). Second, inmates did not have an opportunity to meet with us or ask questions about
the project.
Third, success depended on the cooperation of multiple employees in a large
organization who had no vested interest in the project. It is likely staff members in some
units never distributed recruitment forms to inmates. Moreover, prison employees may
not have returned completed forms to the administrator who ultimately submitted them to
us. We never received forms from several inmates who reported turning them in. It is
therefore probable that many refusals stemmed from the inefficiency of the prison
bureaucracy.
The same basic strategy was used to recruit respondents for the second wave of
interviewing. However, fewer staff members were involved in the distribution and
collection of recruitment forms. A slightly different recruitment strategy was used for the
third wave of interviewing. Treatment staff at SCI met with groups of prisoners and
informed them of our study, and inmates were instructed to sign up if they were
interested. More inmates expressed interest in participating for the third wave than the
first and second waves. However, the third wave had a lower response rate than the
second wave because SCI had more prisoners who never showed up for their interviews.
The fourth wave’s respondents were recruited seven months later. By this time
we had gained more knowledge of prisons and prisoners. We revised our approach and
became more directly involved in the process because relying on prison staff caused us to
lose several potential respondents during the first three recruitments.
For instance, I went out to MaCI to meet with four groups of thirty prisoners. I
also modified the recruitment form (see Appendix B). The revamped form was more
consistent with how the study was described when consent forms were administered
during interviews, and it featured greater emphases on confidentiality and our lack of
affiliation with the prison system.
I presented myself as a researcher from Ohio State University who was interested
in learning more about the backgrounds and needs of people in prison. I then elaborated
on the recruitment form’s main points. I concluded by asking the inmates to mark the
“yes” box if they were interested in being interviewed or wanted to learn more about the
study, or the “no” box if they were not interested. I intentionally made large boxes that I
could point to while giving these instructions in case there were inmates who could not
read.
This approach enabled us to overcome problems that affected the first three
recruitments. However, there were challenges I did not anticipate. For instance, a few
vocal critics tried to encourage others not to participate. The anonymity, size of the
groups, and large classroom made it difficult to make personal connections and keep
corrupting influences at bay. Another problem was that I stood and talked while inmates
sat, which may have subtly reinforced social distance and power differentials. The most
significant obstacles were that many inmates arrived late, a few never showed up, and
some got confused and came to the wrong sessions.
I met with two groups before lunch and two in the afternoon. I waited until most
of the inmates were present before beginning my meetings with the first two groups. The
idle time gave scoffers a chance to discourage their peers. Moreover, as soon as I began
talking, inmates continually came in late and wondered what was going on. I had to
explain the project and who I was to latecomers after they had heard the last part of my
recruitment presentation, which sometimes led to confusion.
During the lunch break I reflected on the morning meetings and decided to alter
my approach. I previously waited for the whole group to arrive, delivered a presentation,
and compensated for tardy inmates. My new approach was to quickly meet with smaller
groups of inmates. As prisoners came in for the afternoon sessions I introduced myself
and then briefly described the project, why they were contacted, what was in it for them,
how much time they would need to devote, and how we protected the data. I then
showed them where to check yes or no, thanked them for coming in, and went on to the
next group of inmates that always seemed to be entering the room.
Prisoners cycled through continuously over a two-hour period, and I typically
spoke to three or four at a time. A member of the treatment staff shadowed me and
collected the recruitment forms after I finished speaking to each group of inmates. This
was an intense and efficient strategy. I immediately went up to prisoners before they sat
down, shook their hands, and talked to them eye-to-eye. These tactics cut down on
anonymity and reduced negative group dynamics by making the process more personal.
It also got inmates in and out of the room quickly.
A similar approach was used for the fifth recruitment. Rachael Gossett joined me
at the prison, which eliminated the need for prison staff to be immediately present during
the recruitment process. Our primary objective was to encourage inmates to make an
appointment to either be interviewed or learn more about the project. We met with
groups of five to eight prisoners instead of thirty to reduce anonymity and large group
dynamics. We were also able to meet in a staff conference room rather than a classroom.
Sitting in comfortable chairs in a circle around the conference table and having an
interactive conversation with inmates facilitated the recruitment process by minimizing
power differentials and putting inmates at ease.
Recruitment: Final Thoughts
Meeting with groups of inmates in a prison is disruptive to the institutional
environment and requires staff assistance. As new researchers with no track record and
little rapport with staff, we likely could not have adopted the group method during our
first recruitments. However, the group method emerged as a viable option after we
gained experience in the field, established working relationships with the institutions, and
had proven ourselves by successfully completing previous waves of interviews. By the
time the fifth recruitment was conducted we could point to previous successes when
presenting our recruitment plans to prison staff.
Prisons are coercive environments, and inmates are used to being told what to do.
Within this context emphasizing the voluntary nature of participating in a research study
and being friendly can substantially improve recruitment. We shook inmates’ hands,
thanked them for their time, and otherwise tried to show them respect.
Ideal recruitment conditions include small group interaction between members of
the research team and inmates without staff immediately present. The recruitment form
should be brief and clear, and it should be presented using a conversational tone. The
main objective of the recruitment meeting should be to encourage inmates to make an
appointment for either an interview or a chance to discuss the project further. Trying to
secure their agreement to participate in an interview within a group setting may be overambitious. Inmates are best able to make an informed decision when meeting alone with
interviewers and going over the consent form.
Scheduling
Researchers who study inmates must adjust to institutional schedules (King 2000;
Martin 2000; Newman 1958). Each of the prisons we interviewed in followed the same
daily routine. Prisoners ate at the same time every day and were required to be present
for a series of counts. Scheduling around these institutional demands left mornings from
8:00 to 10:30 a.m. and afternoons from 1:00 to 3:30 p.m. as the only windows for
interviewing.
We were further restricted by the commitments of interviewers who were taking
courses on the Ohio State University campus. Graduate classes were offered in the
afternoons, which made mornings preferable for most interviewers. However, afternoon
interviews were also conducted because classes did not meet every day and some
interviewers had completed their graduate coursework.
Staff members at MaCI and LoCI indicated mornings worked best for their
institutions. Accordingly, interviews with inmates from these facilities were conducted
during the morning time slot. SCI was able to accommodate interviewers at either time,
though afternoons were preferable. Most of the SCI interviews were therefore completed
in the afternoon.
Each prison was visited an average of three to five times a week. First interviews
took about an hour and fifteen minutes to complete, and retests took about thirty-five
minutes. We therefore scheduled appointments with two inmates on days we conducted
first interviews, and three interviews were scheduled on retest days. Unfortunately,
meeting these daily quotas was often impossible due to challenges posed by the day-to-day prison environment. Details on a typical day of interviewing in prison are presented
in Chapter 6.
Test and retest interviews were completed for 110 respondents. The N for test-retest analyses is therefore 110 cases. Several unpredictable challenges made it
impossible to complete retest interviews with every original respondent. Most of the
individuals who were not retested were missed because they were released during the
test-retest interval.
Interviews were scheduled based on the respondents’ release dates, with those
having the earliest dates scheduled first. Unfortunately, our list of release dates did not
reflect outcomes of parole board hearings or sentence reductions from boot camp
participation. For instance, a prisoner listed as having a 2009 release date could be
released from prison after three months if he successfully completed the boot camp.
Other inmates were transferred to different prisons during the test-retest interval, a
handful were AWOL, and some were not retested because we needed to pull out of the
field before we could get to them. Accordingly, the failure to complete retests for all of
the original respondents was generally reflective of prison practices and random forces
rather than inmate decision-making.
Protecting Respondents
Several steps were taken to protect respondents’ identities. All equipment and
documents containing identifying information were stored in a locked file cabinet in
Professor Bellair’s campus office. Moreover, inmates were assigned case numbers that
were used within the interviewing program and all of the data files. It would have been
impossible to identify our respondents had our laptop or data been seized by prison staff,
stolen, or lost because case numbers were used instead of names and ODRC inmate
numbers.
Our research approval specified that interviews needed to be private and
confidential. All interviews were therefore conducted in private rooms without prison
staff or other inmates present. Efforts were also taken to ensure that our data were
protected from prison authorities. For instance, we obtained a federal Certificate of
Confidentiality from the National Institutes of Health. Confidentiality Certificates
protect data from being subpoenaed or confiscated by law enforcement and governmental
agencies and are commonly used by researchers who collect data that are potentially
stigmatizing or incriminating (see Bourgois 2003: 350; Wolf, Zandecki, and Lo 2004).
Interviewing
Survey Instruments
Two interviewing instruments were created, which for convenience will be
referred to as the main instrument and the retest instrument. Cannell and Kahn (1968)
noted the ideal length for interviews is anywhere from half an hour to two hours. The
main instrument took approximately one hour and fifteen minutes to administer, while
administration of the retest instrument took an average of thirty-five minutes. Each
instrument therefore fit within Cannell and Kahn’s suggested range for interview length.
The main instrument was used in first interviews and is the source of “test” data
in the test-retest assessments of reliability. The retest instrument was employed in follow
up interviews that took place anywhere from one to four weeks later. This interval is
considered ideal when using the test-retest method to assess reliability (Thornberry and
Krohn 2000).
Both instruments were developed in Microsoft Access between spring 2004 and
spring 2005 and were administered with laptop computers. They were adapted from a
program provided by Dr. Julie Horney, Dean of the School of Criminal Justice at the
University at Albany. Dr. Horney has published several articles on using the life-events calendar
method in criminological research (see Horney 2001; Horney and Marshall 1992; Roberts
et al. 2005), including a widely cited article in the American Sociological Review in 1995
(see Horney, Osgood and Marshall 1995). Dr. Horney’s work stimulated our interest in
adopting the life-events calendar method for this project, and her support was crucial in
the beginning stages of our research.
Our main instrument was a modified version of Dr. Horney’s instrument. It
featured a comprehensive set of questions and consisted of several modules. A sample of
relevant domains included but was not limited to routine activities, peer relations,
demographic information, family and relationship background, employment history,
educational background, treatment program involvement, drug dealing, incarceration
history, living arrangements, drug and alcohol use, victimization, attitudes, legal and
illegal income, access to weapons, criminal involvement, self control, stress, and
exposure to family, school, and community violence. Most of these modules were
revised versions of those contained in Dr. Horney’s instrument. However, we also
developed new modules, including one that examined gang membership.
The instrument was composed of a blend of traditional survey items and life-events calendar questions. Interviews began with the collection of basic demographic
information, followed by life-events calendar questions about residence, employment,
relationships, and education. These items were crucial because they established reference
points and context for the rest of the interview. Respondents’ answers to the life-events
calendar questions were entered into the computer and were also recorded on a paper-based calendar that was used as a visual prompt throughout the interview (see Appendix
C).
Most of the traditional survey items and life-events calendar questions focused on
the eighteen-month period prior to incarceration. Memory loss is linear over time (Belli,
Shay, and Stafford 2001) and respondents do a better job of remembering when they
begin with their freshest memories and then gradually work backward (Bradburn, Rips,
and Shevell 1987). We therefore began life events calendar questions by asking about
the most recent date on the calendar and then used that date as a starting point for tracing
the more distant past.
For instance, when posing our question about drug dealing we asked, “Did you
ever engage in any drug dealing in the month leading up to your arrest?” If a respondent
said yes we would then work backward and ask about each of the other months on the
calendar. If a respondent said no we would follow up with, “Did you ever engage in any
drug dealing during any of the other months in the calendar period?” While asking these
questions we would simultaneously point at the corresponding months on the paper-based
calendar.
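The flow of these questions can be summarized procedurally. The sketch below is a hypothetical illustration only; the actual instrument was a Microsoft Access program, and the prompt wording and month handling here are simplified assumptions:

def ask(prompt):
    # In the real interview these questions were read aloud; here we
    # simulate a yes/no answer from the console.
    return input(prompt + " (y/n): ").strip().lower() == "y"

def trace_behavior(behavior, months):
    # months[0] is the most recent month on the calendar; the list runs
    # backward toward the start of the eighteen-month reference period.
    reports = {}
    if ask(f"Did you ever engage in {behavior} in the month leading up to your arrest?"):
        reports[months[0]] = True
        for month in months[1:]:  # work backward month by month
            reports[month] = ask(f"Did you engage in {behavior} during {month}?")
    elif ask(f"Did you ever engage in {behavior} during any of the other months in the calendar period?"):
        reports[months[0]] = False
        for month in months[1:]:
            reports[month] = ask(f"Did you engage in {behavior} during {month}?")
    else:
        reports = {month: False for month in months}
    return reports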
Questions about incriminating behavior were worded vaguely for two reasons.
First, this approach was a strategy for encompassing a wide range of behaviors. Second,
it helped ensure confidentiality and is a suggested practice when studying prisoners
(Overholser 1987). Moreover, Ohio State University’s Institutional Review Board
mandated that we use vague wording in questions about offending because Ohio has a
law that requires individuals to report felonies to law enforcement. Accordingly, rather
than asking questions such as, “During this time did you ever shoot anyone?” and,
“During this time did you ever commit any burglaries?” we asked, “During this time did
you ever participate in any violent crimes?” and, “During this time did you ever
participate in any property crimes?”
Brief introductions to question sets were placed throughout the instrument to help
respondents make cognitive transitions. These passages were also designed to foster a
conversational dynamic (see Schubert et al. 2005). For instance, we read the following
script before asking about alcohol and drug use:
We are now going to ask you some questions about substance use, which may be
sensitive depending on your background. We just want to remind you that your
answers will be kept confidential and that you do not have to answer any
questions you do not want to answer.
Most of the questions in the survey were closed-ended because we were focused
on collecting quantitative data. However, the following open-ended questions were
incorporated into the instrument:
Do you think being in prison has changed you? If yes, how?
Before you came in here, did you ever think that you would end up serving time in
prison?
We have noticed that a good number of people who are released from prison end
up coming back. Do you have any thoughts on why this is the case?
What, if anything, can the prison system do to better meet your needs?
These questions were posed at the conclusion of each interview. Aside from yielding
insightful data we believed an interactive end would encourage respondents to participate
again when re-contacted in the future.
The retest instrument was a stripped down version of the main instrument. Most
non-calendar questions were removed, while the calendar questions were retained. For
instance, life-events calendar questions about employment, education, treatment,
incarceration, arrests, peer networks, drug and alcohol use, violent offending, property
offending, income, drug dealing, and weapons possession were identical on each
instrument.
Both instruments contained a separate screen for entering notes. A sample of
content recorded in the notes box includes details on coding decisions made by
interviewers, problems with the instrument, reasons questions were skipped, and
unanticipated events that occurred during interviews. For instance, I once had to
interrupt a respondent mid-interview because we needed to move from one room to
another. I put a brief summary of the situation in the notes box. On another occasion an
inmate did not respond when asked about the age of a person in his social support
network. He informed me that he could not give this information because his mother did
not want anyone to know how old she was. I put an account of this scenario in the notes
box so we would know why this question was not answered.
The notes box was also used to record supplemental information shared by
respondents. Inmates told stories, discussed things that were significant to them, and
provided contextual information that went beyond the scope of the response sets for
closed-ended questions. Entering notes on respondents’ subjective accounts allowed us
to collect information that would potentially be useful in future analyses and data
cleaning. More importantly, it validated respondents’ disclosures.
For instance, I once asked an inmate whether or not he had any children. He
responded with a story about having had one child, who died. All I technically had
to do was click on the box indicating the respondent had no children during the calendar
period. However, notes were also entered because a story about a dead child warranted
more than just a mouse click. Had I simply checked the box and moved on the inmate
might have felt slighted and been reluctant to continue with the interview.
Both instruments concluded with a screen that interviewers completed when they
were alone after respondents left the room. This screen assessed interviewers’
perceptions of the interview and required them to rate respondents’ interest, mood, and
accuracy. We were usually able to complete these screens. However, sometimes the
privacy necessary to do so was not available because the next respondent entered the
room as the previous one left. It was our practice to record interviewers’ perceptions of
the interview in our field notes on days we were unable to enter them into the computer.
Interviewer Training
Our standard procedure was to do interviews using teams of two interviewers,
though it should be noted that I conducted many interviews alone. I was dispatched as a
solo interviewer because I had extensive experience and comfort with interviewing
prisoners. Using me in this capacity enabled us to complete more interviews in less time.
I was the only member of the research team to conduct interviews alone.
Being an effective interviewer requires mastery of the interviewing instrument
and strong interviewing skills (Cannell and Kahn 1968). We found that interviewers who
work in teams need to have chemistry with each other. Moreover, those who interview
inmates should possess knowledge of corrections and be comfortable within prison
settings. All interviewers involved in this project underwent extensive training in these
areas before entering the field.
Brian Kowalski and I were the prototypical interviewing team. Our preparation
for interviewing together was optimal and would be difficult to duplicate. For instance,
we gained proficiency with the instruments from spending a year working on their
development. We were also compatible because we had a strong friendship that began
when we entered graduate school together in 1999. Knowing each other well enabled us
to be composed while interviewing, and our interviews flowed smoothly because we
could anticipate what the other was thinking and would say. Brian and I each had
research interests in corrections and had previously visited several Ohio prisons together.
We were therefore the ideal team to establish the interviewing procedures and conduct
the first interviews.
Our formal training for interviewing consisted primarily of conversations about
hypothetical situations that might be encountered. We also used role-playing to practice
administering the instrument, which is a recommended strategy when training
interviewers (Cannell and Kahn 1968: 587; also see Schubert et al. 2005). Professor
Bellair, Brian, and I were regularly involved in these training sessions, and Donald
Hutcherson and Shawn Vest participated periodically.
The five of us visited the first research site before entering the field. During this
time we observed the setting, met with administrative staff, and requested feedback on
our protocol. Staff members recommended we have respondents sit furthest from the
door in case we needed to summon assistance, which was helpful given that safety is an
underlying concern when doing interviews in prisons (Kiefer 2004; Martin 2000).
Training procedures became more standardized and thorough as others joined the
research team. The practicum students’ training reflected the culmination of our best
practices for preparing prison interviewers. Professor Bellair began this training by
providing overviews of the project, corrections, the life-events calendar method, and the
interviewing instruments. Ice-breaking activities were also done to foster cohesiveness
among the new members of the research team. I designed a series of graduated stages
that practicum students were required to complete before being permitted to enter the
field. These activities were developed based on my own experiences with developing the
instruments and conducting interviews.
The first step involved teaching members of the practicum how to conduct
interviews using the Access program. I created scripts for several “mock respondents”
that contained unique sets of answers to each question in the instrument. Practicum
interviewers were given printouts of the mock respondents’ answer scripts, which they
then practiced entering into the program. The scripts featured dilemmas and responses
commonly encountered in the field. We then met as a group to discuss strategies for
dealing with the challenges posed by the mock responses.
The practicum students were separated into small groups for role-playing after
they demonstrated competency with the instrument. Brian Kowalski and I concluded that
role-playing activities in the early stages of the project were ineffectual for training
because the “inmate” usually did a poor job of providing responses that would actually be
given in an interview. I therefore revised the original exercises by adding respondent
scripts.
Developing scripts based on responses from inmates I had interviewed was an
improvement over our early role-playing activities because it gave the exercise the
semblance of an actual interview. Scripted answers kept role-players on task, provided
an opportunity to adjust to the flow of interviewing, and gave trainees a chance to
practice reading questions and entering data simultaneously. Role-playing also provided
us with an opportunity to experiment with different pairings of interviewers. Given that
interviewing is essentially social interaction (Jenkins 1995), it was important to establish
teams that were compatible. Fortunately, ideal matches clearly and naturally emerged.
The final step in the training process involved having each practicum student
shadow me in the field. In most cases shadows watched me complete two interviews
before attempting to conduct interviews on their own, though I allowed each student to
decide when to make the transition from passive observer to active interviewer. This was
an effective strategy because interviewers built up feelings of self-efficacy without being
forced out of their comfort zones prematurely. I observed, assisted, and provided
feedback when new interviewers conducted their first interviews.
The training regimen for the practicum students was designed to gradually
introduce each component of the interview. Social skills were also emphasized. Trainees
became progressively more involved with interviewing as their proficiency improved.
They also decided when to conduct their first interviews. Weekly group meetings were
held to discuss challenges and questions during the months the practicum students were
in the field. Training was therefore ongoing.
Mode of Administration
Conducting research in the field required several pieces of equipment. We took a
laptop computer, an external keyboard, a mouse and mouse-pad, a power cord, pens,
and a flash-drive into the prisons each day. We also brought a folder that contained a
copy of the federal Certificate of Confidentiality, an interviewing schedule, consent
forms, paper-based life-events calendars, a list of interview procedures, field notes forms,
a set of important phone numbers and contacts, and a paper-based version of the
instrument in case the computer failed.
Interviews began with the administration of two consent forms. The first consent
form was developed by the ODRC and signified an inmate’s agreement or refusal to meet
with us (see Appendix D). We created the second consent form, which included
statements about the basis for subject selection, the purpose of the study, an explanation
of procedures, the potential risks and benefits of participation, the rights of research
subjects, and assurances of confidentiality (see Appendix E). It usually took between five
and ten minutes to administer the consent forms, depending on how many questions the
respondents had.
During the administration of the consent forms inmates were told verbally and in
writing that participation in the study was voluntary. They were informed that if they
chose to participate they could skip questions they did not want to answer or withdraw
from the interview for any reason at any time. Respondents were also told they should
not disclose details of their current offense, plans for future offending, participation in
child abuse, or intent to harm themselves.
Administering consent forms presented a challenge because we needed to account
for illiteracy. However, reading the forms word for word would have potentially
offended literate prisoners. We developed a strategy that entailed summarizing each
paragraph of the consent forms and highlighting main points. Further elaboration was
then provided when deemed necessary, such as when respondents did not read the forms
on their own, gave nonverbal cues that they did not understand, asked questions, or
otherwise expressed confusion. Each consent form was signed and dated by the
respondent and an interviewer, and the respondent received a copy of the consent form to
keep for his records.
We established rapport with respondents by taking deliberate steps to distinguish
ourselves from prison staff. For instance, we shook hands with inmates, called them by
their first names, and introduced ourselves using our first names. Behaving in these ways
demonstrated we were not of the prison because staff members are prohibited from
engaging in these actions. More information on our presentation of self is provided in
Chapter 6.
Given that prisons are coercive environments we emphasized the voluntary nature
of the study and that participants could stop at any time or skip questions they did not
want to answer. Prisoners are used to not having choices and being told what to do.
Providing respondents with the freedom to not participate, skip questions, or withdraw at
any time was therefore empowering and non-threatening.
We took a straightforward and honest approach to interacting with inmates. We
followed the advice of other researchers (Martin 2000; Newman 1958) and explicitly
stated that agreeing, or refusing, to participate in our study would neither hurt nor help
prisoners. Respondents could also see any notes that were entered into the notes screen
on the laptop, and all notes were additionally read back to ensure accuracy and
compensate for the possibility of illiteracy. This strategy was consistent with Newman’s
(1958) suggestion that inmates have access to any supplemental information recorded by
researchers.
King (2000) observed that most prisoners are willing to talk to researchers. We
also found this to be the case, and we suspect that our demeanor encouraged inmates to
participate in our research. Respondents who sat through the administration of the
consent forms rarely declined to be interviewed. Moreover, I never had an inmate choose
to stop participating once an interview began, and all of the interviews I conducted
yielded fewer than five instances of skipped questions.
Inmates agreed to be interviewed for a number of reasons. Having a break from
the normal routine (Jacobs 1977; Martin 2000), getting a chance to sit in an air-conditioned room in the summer, being presented with an opportunity to help us learn
more about the lives of people in prison, and getting the chance to have opinions heard by
university researchers (Newman 1958) all proved to be attractive incentives to participate
in our study. Douglas (1972a) pointed out that people of disrepute often see participating
in research as an opportunity to correct society’s misperceptions. Many of our
respondents held this objective.
There were a number of tasks involved in conducting an interview. In most
instances one interviewer read the questions on the computer screen and entered data
while the other administered the consent forms, worked with the respondent to complete
the paper-based calendars, probed, and asked the open-ended questions. However, each
interviewing team ultimately came up with its own division of labor.
During an interview the laptop would be set up in the middle of the table with an
interviewer on one side and the respondent on the other (see Figure 5.1). The other
interviewer sat directly across from the screen next to the respondent, which enabled him
or her to help the respondent complete the consent forms and calendars. Having the
second interviewer across from the screen also served as a quality control measure
because he or she could point out data entry errors. The respondent was always seated
on the side farthest from the door as a safety precaution.
[Figure 5.1: Typical Interview Seating Chart and Equipment Layout — a diagram of the interviewing table indicating where Interviewer #1, Interviewer #2, and the respondent sat relative to the laptop, external keyboard, mouse, and paper calendar.]
The conversational style of our interviews enabled us to regularly probe
respondents’ accounts. Questions also needed to be clarified or reworded at times. For
instance, we asked about how many drinks respondents consumed and learned that
“drinks” frequently needed to be specified. When designing the instrument we
considered a drink to be one twelve-ounce beer, shot, or glass of wine. However, our
probing of inmates’ responses revealed that prisoners sometimes considered a drink to be
a fifth of whiskey or a forty ounce bottle of beer.
Another issue that came up was that some of our questions had an urban bias. For
instance, when asking one respondent how safe his neighborhood was at night he
responded that people could get hit if they tried to cross the highway. Similarly, in
response to our question about how often gunshots were heard in the respondent’s
neighborhood we had several inmates report hearing hunters’ gunshots daily.
Criminologists typically neglect the urban bias that guides the meanings and wording of
their questions (Weisheit 1998). As these examples show there were urban biases in our
research. We intended to measure fear of crime and exposure to community violence
rather than highway safety and the prevalence of hunters.
A day of interviewing concluded with the completion of a standardized field notes
form (see Appendix F). Information recorded included the date and location of the
interviewing session, the interviewers’ names, the respondent’s case number, features of
the interview setting, the number of attempted and completed interviews for the day, the
duration of each interview, subjective assessments of respondents’ honesty and recall
ability, features of the setting that may have affected the interview, a step-by-step
chronological account of the day, the names of staff members who assisted us, notes on
any problems or delays with security screening, coding issues and decisions,
documentation of any computer problems encountered, and a breakdown of the
interviewing team’s division of labor. Having elaborate field notes for each interview
was crucial when scheduling retest interviews, diagnosing problems that stemmed from
the inefficiency of prison bureaucracies, and cleaning data.
The final step when interviewing was to back up the day’s data on a flash-drive
before leaving the prison. The flash-drive was then taken directly from the prison to
Professor Bellair’s campus office to be saved on the project’s secure computer drive in
the sociology department. During this time interviewers also met briefly with Professor
Bellair to rehash the day’s events and drop off consent forms and paper calendars. These
daily contacts kept project materials from becoming lost or disorganized, encouraged
ongoing communication between members of the research team, and facilitated the quick
resolution of any problems that emerged.
Summary
Cannell and Kahn (1968: 530) posited that an interview can be considered a
“conversation with a purpose,” and King (2000: 285) describes doing research with
prisoners as a “craft.” We took a conversational and interactive approach to collecting
quantitative data, which was aided by the ability of the life-events calendar method to
hold respondents’ interest in our survey (Horney 2001). Rather than taking the
interviewing process for granted we prepared, trained, and took deliberate steps to
constantly improve our procedures.
The inmates we spoke with seemed to take their participation in our study
seriously. Similar to Liebling (1999), we found that respondents often devoted
considerable effort to providing accurate information. For instance, respondents
frequently asked to go back and revise or add to a previous answer because they would
remember additional information later in the interview. At the conclusion of the retest
interviews we asked respondents if they were interested in continuing with the study
upon their release from prison. Nearly all agreed to be re-contacted, including several
who were eager to continue talking with us. Some suggested that having us check in
would help them keep straight, and one talked about making positive changes and
looking forward to “showing us” when we re-contacted him.
Interviewing respondents should be regarded as social interaction (Cannell and
Kahn 1968; Jenkins 1995). Moreover, the strength of survey data depends on the quality
of the interviewing process (Cannell and Kahn 1968). Social skills were incorporated
into each step of our research. The positive response and cooperation received from most
of the prisoners we interviewed suggests that this approach was appropriate and
successful.
Data
Survey Data
Previous sections of this chapter described the survey instruments used in this
research. Two interviews composed of a blend of traditional survey items and life-events
calendar questions were administered to 110 incarcerated respondents using a laptop
computer and paper-based calendars. Questions examined a range of topics including but
not limited to routine activities, peer relations, demographic information, family and
relationship background, employment history, educational background, treatment
program involvement, drug dealing, incarceration history, living arrangements, drug and
alcohol use, victimization, attitudes, legal and illegal income, access to weapons, criminal
involvement, self control, stress, and exposure to family, school, and community
violence. Descriptive statistics for these data are provided in Chapter 7.
ODRC Data
Criterion validity data were collected from respondents’ official prison records.
These data were stored on microfiche housed in Columbus, Ohio at the administrative
offices of the Ohio Department of Rehabilitation and Correction. Brianne Hillmer visited
these offices daily for several months to collect the ODRC data for the project. These
official data are not as comprehensive as the survey data. Moreover, only 54 of the 110
respondents had presentence investigation (PSI) reports in their files. PSIs were crucial
because they provided data for some of the criminal history measures of interest.
Information collected from ODRC records included respondents’ total number of arrests
and convictions, admission and release dates for prior incarcerations, age at first arrest,
address at the time of admission, and ODRC official information on whether respondents
were arrested during each month of the study period. The arrest and criminal history
data were used in this dissertation to assess criterion validity. Other data collected from
official ODRC records were intended for geo-coding and future recidivism research that
is not part of this study.
Measures
This section contains information about the measures used in the data analyses
(see Chapter 8). Measures are grouped into six broad categories and are then briefly
described. In-depth descriptive statistics are provided in Chapter 7.
Category #1: Demographic
Race. Respondents were asked the following question:
Q1: With which racial/ethnic group do you identify?
The response set and racial breakdown of the sample (N=110) are provided below. The
respondents who indicated “other” identified themselves as being of mixed race.
(1) African American (N=47)
(2) Hispanic/Mexican or Spanish-American (N=2)
(3) Caucasian (N=57)
(4) Native American (N=1)
(5) Asian (N=0)
(6) Other (if other, name) (N=3)
Category #2: Life Events
Three life events measures, Residential Moves, Job Changes, and School
Involvement, were used in kappa analyses of test-retest agreement. Respondents were
asked whether they experienced each of these life events during each month of the
calendar period. Responses for residential moves and job changes were originally coded
as either yes (1) or no (0). However, school involvement had to be recoded into a yes/no
variable in order to satisfy the scaling requirements for performing kappa tests [no = (0)
not in school, (1) dropped out, (2) expelled, (3) suspended; yes = (4) full time, (5) part
time]; an illustrative recoding is sketched after the questions below. The following
questions were asked:
Q1: Did your residence change at any time during this calendar period?
Q2: Were you employed in the month before coming in? Did your employment
status change at any time during the calendar period? This includes times
when you were unemployed or changed jobs.
Q3: Were you in school during the calendar period? In which months did your
school status change?
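A hypothetical sketch of the school-involvement recode follows; the category codes come from the instrument above, while the code itself and the example series are illustrative only:

SCHOOL_YES = {4, 5}       # full time, part time
SCHOOL_NO = {0, 1, 2, 3}  # not in school, dropped out, expelled, suspended

def recode_school(code):
    # Collapse the six-category monthly school status into yes (1) / no (0)
    # so it satisfies the dichotomous scaling needed for kappa tests.
    if code in SCHOOL_YES:
        return 1
    if code in SCHOOL_NO:
        return 0
    raise ValueError(f"unexpected school code: {code}")

# e.g., eighteen monthly codes for one respondent become a yes/no series
monthly_status = [4, 4, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
dichotomous = [recode_school(code) for code in monthly_status]  # [1, 1, 1, 0, ...]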
Hours Worked, Job Commitment, Legal Income, and Illegal Income were
used in test-retest reliability correlations. Hours Worked was a continuous measure, and
Job Commitment, Legal Income, and Illegal Income were ordinal; an illustrative
correlation is sketched at the end of this subsection. The following questions were asked:
Q1: In total, how many hours per week did you work?
Q2: How committed (to your job) were you? In rating it from 1-5, with 1 being
hated it and 5 being very committed, would you say that you hated or were
very committed to this job?
Q3: In the last month on the calendar, did you earn any money from illegal
sources (yes/no)?
Q4: What was your annual income from your employment (before taxes)?
To minimize the potential for confusion, laminated cards with scales that converted
estimated monthly income to annual figures, and vice versa, were used when asking
respondents questions about income.
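As an illustration of the test-retest correlations themselves, the sketch below uses SciPy with fabricated values; the pairing of Pearson with the continuous measure and Spearman with the ordinal measures is an assumption for the example, not a statement of the study's actual procedure (see Chapter 8):

from scipy.stats import pearsonr, spearmanr

# Fabricated test and retest reports of hours worked per week (continuous)
hours_test = [40, 0, 25, 60, 35, 0, 50, 20]
hours_retest = [40, 0, 30, 55, 35, 10, 50, 20]
r, p = pearsonr(hours_test, hours_retest)

# Fabricated 1-5 job commitment ratings (ordinal)
commit_test = [5, 3, 1, 4, 2, 5, 3, 4]
commit_retest = [5, 2, 1, 4, 2, 4, 3, 4]
rho, p_rho = spearmanr(commit_test, commit_retest)

print(f"hours worked: r = {r:.2f}; job commitment: rho = {rho:.2f}")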
Category #3: Substance Use
In both the test and retest interviews respondents were asked about their use of
alcohol, marijuana, crack cocaine, powder cocaine, heroin, speed, and prescription drugs
over the eighteen-month calendar period. The questions and response sets for each of
these substances were the same. For instance, respondents were asked the following
questions for each substance:
Q1: Did you use [substance] in the month before coming to prison?
Q2: Did this change during the calendar period?
When “yes” was answered for question number two, question number one was asked
again for each of the other months of the calendar period. The response set used for
answering question one is provided below:
(0) no/never
(1) once or twice this month
(2) at least once a week
(3) everyday or almost everyday
Two sets of substance use variables consisting of Alcohol, Marijuana, Crack
Cocaine, Powder Cocaine, Heroin, Speed, and Prescription Drugs were used in the
analyses. Variables in the first set comprise ordinal responses from the response set
above. These measures were used for test-retest correlations. The second set comprises
recoded substance use variables that were used in kappa analyses. For each substance,
responses were recoded to indicate whether or not the respondent reported using the
substance during each month of the calendar period [no = (0) no/never; yes = (1) once or
twice this month, (2) at least once a week, & (3) everyday or almost everyday]. These
recodes were necessary because kappa tests require variables with dichotomous response
sets.
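Kappa summarizes agreement between two sets of ratings after correcting for the agreement expected by chance. A minimal sketch with fabricated monthly data, using scikit-learn's cohen_kappa_score as one widely available implementation:

from sklearn.metrics import cohen_kappa_score

# Eighteen monthly yes/no reports of use for one substance (fabricated)
test_use   = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
retest_use = [1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]

kappa = cohen_kappa_score(test_use, retest_use)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance-level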
Category #4: Justice System Involvement
Incarcerations, Treatment Program Involvement, Probation/Parole
Supervision, and Arrests comprise the measures of justice system involvement that were
derived from survey data and used in kappa analyses of test-retest agreement.
Respondents were asked whether they experienced any of these dispositions during each
month of the calendar period. Responses were coded as either yes (1) or no (0). The
following questions were asked:
Q1: Was there any time during the calendar period that you were incarcerated?
Q2: Were you in any type of treatment program during this calendar period?
Q3: Were you on probation or parole at any time during the calendar period?
Q4: Were you arrested at any time during the calendar period?
Official data were also available for monthly arrests. Accordingly, an arrest dummy
variable (yes = 1; no = 0) constructed from respondents’ prison records was used in
kappa analyses of criterion validity.
Category #5: Criminal Activity
Three criminal activity measures, Violent Offenses, Property Offenses, and
Drug Dealing, were used in kappa analyses of test-retest agreement. Respondents were
asked whether they engaged in each of these forms of offending during each month of the
calendar period. Responses were coded as either yes (1) or no (0). The following
questions were asked:
Q1: During the months on the calendar, did you do/participate in any violent
crimes?
Q2: During the calendar period, did you participate in any property crimes
(excluding petty thefts)?
Q3: During the months on the calendar did you ever engage in drug dealing?
Category #6: Criminal History
Criminal history measures were used to examine the criterion validity of self-reports. Accordingly, there are four criminal history variables from the main survey
instrument and four corresponding measures from Ohio Department of Rehabilitation and
Correction records. Descriptions of the self-report measures are presented first in this
section. Descriptions of the ODRC measures then follow.
Total Lifetime Arrests and Total Lifetime Convictions were ordinal variables
in the self-report survey data. Each item was designed to capture juvenile and adult
experiences. Respondents answered the following questions:
Q1: Altogether in your life, how many times have you been arrested (don’t count
traffic violations)?
Q2: Altogether in your life, how many times have you been convicted of a
felony?
The response sets were as follows:
Total Lifetime Arrests (Q1)
(0) 0
(1) 1
(2) 2-3
(3) 4-6
(4) 7-10
(5) 11-15
(6) 16-25
(7) more than 25
Total Lifetime Convictions (Q2)
(0) never
(1) 1
(2) 2-3
(3) 4-6
(4) 7-10
(5) 11-15
(6) 16 or more
Age at First Arrest and Number of Prior Prison Terms were both continuous
measures. Respondents were asked to exclude traffic violations when reporting age at
first arrest and to focus on adult institutions when counting prior prison terms. The
following questions were asked:
Q1: How old were you when you were first arrested – that is, officially charged
by the police (an adult or juvenile arrest, other than a traffic violation)?
Q2: How many different terms have you served in an adult prison?
ODRC Criterion Variables. Four criterion variables that corresponded with the
criminal history measures in the survey data were constructed from ODRC official
records. Counts of Total Lifetime Arrests and Total Lifetime Convictions from the
official data were categorized into a scale that paralleled the organization of the survey
data. Age at First Arrest was constructed for all of the respondents whose official
records contained presentence investigation reports (N=54). Juvenile onset dates were
unfortunately unavailable for the men in the sample whose records did not include PSIs.
Respondents’ dates of birth were collected at the beginning of their first survey
interviews. These dates were then subtracted from the juvenile onset dates in the official
records to derive respondents’ age at first arrest. Number of Prior Prison Terms was a
simple count of prior prison terms that were documented in the ODRC’s files.
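A hypothetical sketch of these two constructions follows; the bin boundaries mirror the survey response sets presented above, while the example dates and the integer-division approximation of age are illustrative assumptions:

from datetime import date

ARREST_BINS = [(0, 0), (1, 1), (2, 3), (4, 6), (7, 10), (11, 15), (16, 25)]

def categorize_arrests(count):
    # Map an official ODRC arrest count onto the survey's ordinal scale.
    for code, (low, high) in enumerate(ARREST_BINS):
        if low <= count <= high:
            return code
    return 7  # more than 25

def age_at_first_arrest(dob, onset):
    # Subtract date of birth from the juvenile onset date in the PSI.
    return (onset - dob).days // 365

print(categorize_arrests(5))  # -> 3, i.e., the "4-6" category
print(age_at_first_arrest(date(1980, 3, 1), date(1994, 7, 15)))  # -> 14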
CHAPTER 6
DEMYSTIFYING THE DATA COLLECTION PROCESS
Although this dissertation is based on quantitative data, this chapter focuses on
un-quantifiable features of the prison environment and the data collection process. As
Ryan and Golden (2006) found when studying Irish immigrants in London, many
interesting dynamics other than those captured by survey instruments emerge when
working in the field. For instance, two themes intertwined with our data collection that
went untapped by our instrument were the researcher’s presentation of self and us/them
dichotomies among prisoners and staff.
Trulson, Marquart, and Mullings (2004) assembled a thorough guide for making
inroads into official agencies to collect and analyze data. Unfortunately, few prior
researchers have elaborated on the process of doing research in prisons once access has
been secured. Moreover, published works that do contain descriptions of the research
process have been based on qualitative (see Davidson 1974; Davies 2000; Jacobs 1974;
King 2000; Martin 2000; but also see Liebling 1999) and participatory (see Castellano
2007; Marquart 1986) studies of correctional settings. Accordingly, this chapter gives
voice to contextual dynamics that are likely to arise when collecting quantitative data
from prison inmates.
In the following sections I elaborate on the environment in which interviews were
conducted, provide accounts of my presence in the setting, and offer personal reflections
on interviewing offenders in prison. I have written this chapter with the goal of
illuminating some of the challenges and dilemmas that commonly emerge when doing
survey research with incarcerated respondents. I focus explicitly on my own experiences
in this reflective narrative. However, several of the dynamics, situations, and reactions I
reference were shared by other members of the research team.
Given my emphasis on the research process I am pointedly descriptive. I begin
with a thorough account of day-to-day dynamics I encountered in the field. I then offer
personal reflections. These sections are intended to demystify the process of collecting
survey data in prison. I go on to present implications of my reflections, and I conclude
by asserting that research settings and the data collection process are deserving of serious
consideration from survey researchers.
(A)Typical Day of Interviewing in Prison
Contrary to the experience of those who are incarcerated, my trips to prison
featured the pervading sense of never having enough time. On most days institutions
provided a maximum of two and a half hours to conduct two interviews. Interviews took
an average of an hour and fifteen minutes to complete, which depending on the day left
little to no time for delays. Human Subjects protections stipulated that interviews be
conducted in a private room with no staff or other inmates present, and laptop computers
were used to administer self-report surveys. Unfortunately, the time it took to find a
private room and set up equipment regularly cut into the interviewing period. I elaborate
more on these dynamics and other day-to-day challenges below.
Interviewing days began at least an hour and a half before the first interview
started because I had to allow for traffic and security screening. Upon arrival it took
anywhere from a few minutes to half an hour to actually get into the prison.
Ideally, entering institutions involved passing through a metal detector, showing
identification, receiving a visitor’s pass, and meeting an escort who took me to the
interviewing location and ensured that respondents showed up at their designated times.
The reality was that progressing through these steps smoothly was the exception rather
than the rule.
For instance, though I frequently dealt with staff members who knew who I was
and what I was doing, on some days the security screeners were filling in and did not
know about the project. One time the front desk officers were unable to find the book
that contained the gate passes, which required me to wait for over half an hour. Gate
passes list the date, time, interviewer’s name, and equipment approved to enter the
institution. They are authorized in advance by administrators and are required for visitors
to gain entry. Fortunately, there were only one or two times the front desk staff could not
locate the book of gate passes. Unfortunately, not being listed in the book when I should
have been was much more common.
There were many days when I arrived and learned the paperwork had not been
submitted for my gate pass. Moreover, there were other occasions when the wrong name
or incorrect interviewing times were listed. There was one day when I was listed on the
gate pass but my laptop was not. There were also several instances when the prison failed
to assign an escort to take me to my interviewing room and ensure respondents received
their passes. These errors were usually remedied with a call to an administrator, though
there were a few days when I was denied access and had to go home without ever
entering the prison. Getting through security in a timely manner, finding escorts, and
having inmates summoned were difficulties faced throughout the year I was in the field.
A new host of challenges was often presented once entrance to the prison was
gained. For instance, there were days when staff members had trouble finding a private
room for me to work in. These dilemmas were always resolved, though sometimes the
outcomes were less than ideal.
On one occasion I conducted interviews in a room that had flies buzzing around
and landing on the computer screen, the respondent, and me. There was another day
when I conducted interviews in a setting where country western music was playing
loudly. Other uncomfortable conditions included rooms with no air conditioning on
humid summer days with 90+ degree temperatures, poor lighting, uncomfortable
furniture, and large windows that created a fishbowl dynamic.
I often showed up and learned that respondents were in the hole for disciplinary
reasons. Other times inmates had been transferred to another institution, were
temporarily away for court, or had been released from prison early. There was an
instance where a respondent escaped before his interview could be conducted. I was
never told of his escape and waited for him to show up on two occasions. His status
came to my attention by chance when other members of the research team passed a wall
displaying his wanted flyer.
The prison environment posed a number of random distractions. There was once
a graduation ceremony occurring in the room next to where interviews were being
conducted. Interviewing also had to be postponed a few times due to housing unit
shakedowns that restricted inmate movement while staff searched for contraband. I had
to suspend interviewing on one occasion because the prison was testing its fire alarms.
Other unexpected challenges emerged from my equipment. There was an
intermittent problem with a USB port on my laptop that prevented use of the external
keyboard. This slowed down interviewing considerably. During the first two waves of
interviewing there were also programming problems that unexpectedly caused the
instrument to close down at times. Finally, there was a day when a frayed power cord
caused the computer to suddenly die while I was conducting an interview.
Equipment failures are not limited to prison research. However, in prison settings
they pose unique challenges. For instance, minor problems can result in canceled
interviews due to time constraints and the inability to contact technical support while in
prison. Moreover, computer-illiterate respondents may get confused or become frustrated
when their responses disappear from the computer screen, and some may equate
equipment failure with being unprofessional. Finally, inmates and staff alike may
become suspicious of prolonged fidgeting with equipment and similar makeshift efforts
to address technological difficulties.
Interview settings varied across institutions and from one day to the next. I
typically met with respondents in unoccupied staff offices, classrooms, parole-board
rooms, visiting rooms, and conference rooms. Some settings presented incidental
challenges to privacy that were dealt with as they emerged.
For instance, staff members would sometimes need brief access to the rooms I
worked in to grab a file or make a photocopy. Occasionally they knocked first, but
usually they did not. Depending on the situation, when a staff member unexpectedly
entered during an interview I whispered or talked softly, silently pointed to questions and
answers on the computer screen, suspended interviewing, and talked about mundane
topics until the person left.
There were also staff members who attempted to remain in the room during
interviews. Corrections officers with no knowledge of who I was or what I was doing
were often assigned to assist me. Some of these officers assumed they were supposed to
provide protection, and their attempts to be present during interviewing were well
intended.
However, other times staff members wanted to observe an interview for personal
reasons. Usually these individuals were curious or bored, though some appeared to have
a genuine interest in the project or learning about the research process. Regardless of the
motivation, I explained that interviews had to be done alone and never had any problems
when asking staff members to grant me privacy.
A typical day of interviewing in prison was unpredictable and often frustrating.
Getting stalled at the front desk, having to wait for tardy inmates, dealing with computer
problems, and contending with other unpredictable challenges and distractions cut into
the limited window of time that was available to conduct interviews. Unanticipated
incidents often muddled a day’s interviewing schedule and prolonged the project’s time
in the field. Cannell and Kahn (1968: 575) noted the need for researchers to be
spontaneous when conducting interviews. This is especially true when interviewing in
prisons (Martin 2000).
Impacting the Setting
Qualitative researchers have noted that studying inmates interferes with the daily
activities of prisons (Hart 1995; King 2000; Martin 2000; Newman 1958). Taking up
office space was a fundamental way that my presence impacted the prison environment.
Moreover, there was one time when I overheard staff members in an adjacent room
complaining to a corrections officer that my respondents had been waiting in their area
for over two hours. These exaggerated complaints seemed to be territorially motivated
and served as a reminder that my presence was not always welcomed.
My research may have unintentionally slighted some staff members.
Liebling (1999) noted that prison employees felt neglected when she interviewed
inmates. I encountered similar reactions. Several corrections officers complained that
researchers always talked to inmates and suggested I should have interviewed them if I
wanted to know about prisoners and prison. They made a valid point, and I told them I
would consider returning in the future to interview prison staff.
The project also created extra work for the administrators and front desk officers
who lined up escorts, issued gate passes, and secured approvals on a daily basis. Davies
(2000: 88) reflected that it sometimes seemed like the prison hoped she would “give up
and go away” when she did research in prisons. The problems I often had getting set up
each day gave me similar thoughts. There were also instances when irritable front desk
officers asked me how much longer I would be doing interviews.
As these examples show, some staff members were frank in expressing
resentment toward my presence. However, these reactions were not representative of
how most prison employees responded. Many staff members seemed indifferent to my
presence. Others took an active interest in the project and eagerly tried to help. For
instance, corrections officers offered to give me tours of their institutions so I could learn
more about prisons. Front desk officers were my best allies when paperwork was not
submitted and improvised plans for my entrance were needed. Finally, there were
employees who welcomed conversing with me because it offered a distraction from their
daily routines.
Inmates’ reactions also provided insight into my effects on the setting. Jacobs
(1977) received requests for legal advice from inmates who were aware of his legal
background. I had several prisoners ask me questions about college. Inmates with
educational aspirations saw me as a potential resource and were thankful for the chance
to interact with someone who was not another prisoner or staff member. Moreover, some
inmates perceived participating in the project as a chance to give voice to their
experiences. Several respondents felt important because someone from a well-known
university drove all the way to the institution just to talk with them. In general my
presence seemed to be viewed favorably. The main negative effect I had on inmates was
that sometimes they were abruptly woken up or taken away from an activity to be
interviewed.
Peripheral and Invisible
Accommodating researchers is not a primary concern of most prisons (Hart 1995).
Helping me was one of several responsibilities that staff members faced each day. The
frequency with which gate passes were not prepared, escorts were not lined up,
respondents had not received passes, and prisons had not been expecting me suggested
my presence was more of a nuisance than a priority.
Subtle reminders that I was not the featured attraction in the prison environment
checked any proclivity I had toward egocentrism. For instance, I once interviewed an
inmate in a setting that had an adjoining toilet. A staff member entered the room at one
point and walked by oblivious to the interview that was taking place. He loudly urinated
and then left. There were also times when employees forgot to escort me from the
interviewing room back to the prison’s entrance when my time was up.
There was one day when an escort had not been lined up to assist me. After
passing through security I was ignored by front desk officers and left waiting for about
twenty-five minutes. Some volunteers extended an invitation to share their escort, which
I accepted because I had fallen behind schedule and was not receiving assistance. Later
in the week I received an apology for the institution’s lack of preparation that day, and
in that conversation I realized the employees who regularly helped me had assumed I
never entered the institution or conducted interviews.
Prisons are large formal bureaucracies. Each day volunteers, lawyers, vendors,
teachers, and other visitors pass through their gates. In the flurry of activity I was
sometimes overlooked, and on one occasion I interviewed prisoners without my contacts
within the prison knowing about it. At times my presence clearly impacted the research
setting. However, emphasizing my effects on the prison environment would be self-indulgent because I was occasionally invisible and always peripheral.
Inmate-Staff Tension
Researchers who spend time in institutions will inevitably find themselves in the
middle of tense interactions. I often observed staff members who were polite with me
suddenly become gruff with inmates for reasons I could not discern. At times it seemed
some of these staff members harassed and inconvenienced inmates arbitrarily. Requiring
prisoners to wait for no apparent reason and performing unnecessary frisks and searches
were two forms of capriciousness I perceived.
Inmates exhibited similar shifts in demeanor. Two interviews I conducted are
telling of how pronounced tension can be and how quickly behavior can change. In the
first incident, the person assigned to assist me strongly recommended I not meet with one
of the inmates on my schedule. He was sincere in his concern and described this inmate
as a “pill.” I thanked the staff member for his consideration but affirmed I needed to
interview each person on my list. He reluctantly agreed to bring the inmate in.
A few minutes later I saw the staff member enter the building with a large, flush-faced
prisoner who was vocal and clearly being difficult. However, the respondent was
calm and respectful after entering the office and closing the door, though he adopted his
initial persona when going back to his housing unit after the interview. The inmate was
to be released in a few days. After he left the staff member said he would be back in
prison soon if I wanted to interview him again and lamented, “He’s not ready to leave
yet.”
The second example of inmate-staff tension is comparable. I interviewed an
inmate who was personable, polite, and respectful. The respondent was engaged in the
project and provided one of the better interviews I conducted. However, for reasons
unclear to me this same inmate scowled, made comments under his breath, and gave off a
perturbed vibe when staff escorted him to and from the interviewing area. During the
interview this otherwise jovial respondent put his head up and glared toward the window
each time staff members looked in to check on us. At one point I was told in a forceful
voice that “COs are all pricks!”
Inmates and staff clearly exhibited inconsistent demeanors. Prisoners were
cooperative and agreeable toward me and during interviews. However, some behaved
disrespectfully toward staff for no apparent reason. Moreover, some prison employees
appeared boorish when interacting with inmates, yet most of these individuals were
pleasant and considerate with me.
It is possible dynamics I did not see produced the inconsistent behaviors I
observed. Accordingly, what I perceived as arbitrary treatment by staff or unprovoked
attitude by inmates may have in fact been rooted in ongoing relational patterns.
Alternatively, the situational context of the prison environment may have given rise to
these behaviors, independent of the persons involved (Goffman 1964). Conclusions
about inmate and staff behavior would be dubious given the contradictory patterns
exhibited and limits on what I saw. These observations must therefore be interpreted
with discretion because behavior occurs within broader contexts and may be guided by
unobservable forces.
The Presentation of Self
Prison employees often separate themselves and prisoners into us/them
dichotomies that reduce inmates to managed objects (Goffman 1961). These divisions
between staff members and prisoners are reinforced by inmate codes of conduct that
forbid inmates from engaging in friendly relations with staff (Granack 2000; Sykes 1958;
Sykes and Messinger 1960). Within the dichotomized prison world, researchers
represent a third category that falls outside the inmate-staff social order (King 2000).
As the preceding examples demonstrate, occupying this position allowed me to
gain insights that might have otherwise been unavailable had either “us” believed I was
aligned with “them.” Neutrality and maintaining outsider status can therefore be
beneficial when conducting research in the prison environment.
Doing research in prison requires assistance and cooperation from prisoners and
employees (Newman 1958). I found that managing these dependencies entailed
balancing the interests of both groups and putting forth an objective front. Following King’s (2000) advice, I
presented myself as being committed to learning about prisoners rather than as an
advocate for inmates or employees. Aside from being a truthful representation of my
motives this response satisfied staff members who asked why I was interviewing
prisoners.
Newman (1958) suggested researchers stress to inmates that they are not affiliated
with the prison system in any way, and King (2000) recommended that researchers
engage in visible actions that substantiate their outsider status, such as being seen waiting
for escorts from employees when entering and leaving the setting. Both of these tactics
were incorporated into my presentations of self. I repeatedly reminded inmates that I was
a university researcher who did not work for the prison system, and I deliberately acted
differently from prison employees. For instance, I called inmates by their first names,
shook their hands, and used my first name, which are all actions staff members are not
allowed to engage in. These behaviors facilitated my effectiveness as a researcher,
though first and foremost they were motivated by the desire to treat respondents with
respect.
Previous researchers have cautioned that prison employees will try to find out
what inmates disclosed in interviews (King 2000; Martin 2000) and prisoners will ask
what staff members have said (Martin 2000). I never had an inmate inquire about what
prison employees or other prisoners told me. However, there were instances when staff
members asked about particular respondents.
Employees also posed general questions such as, “What have they been telling
you?” and, “What have you learned so far?” My standard reply to these inquiries was, “I
have not had a chance to analyze the data yet.” Commonsense responses such as, “I’ve
talked to several inmates who had trouble finding a good job” satisfied curiosity and
deflected additional questioning when I was pressed further.
I’m a Researcher from Ohio State University
The way I introduced myself in prison was crucial and was not taken for granted.
Prisoners and non-prisoners alike often incorrectly assume that criminologists are
affiliated with law enforcement agencies. Most prisoners would have therefore been
suspicious of my motives had I presented myself in the prison environment as a
“criminologist.”
Adopting the “sociologist” label posed its own problems. For instance, I
frequently encountered staff members who thought I was a social worker, including a
corrections officer who once asked as I was leaving if I had made a difference that day.
Jacobs (1977) encountered similar misperceptions in his field research and learned that
prisoners perceived sociologists as mental health practitioners.
Being connected with a well-known university increases the likelihood of being
taken seriously by administrators, line staff, and inmates (see Jacobs 1977; King 2000;
Martin 2000; Newman 1958). Emphasizing my affiliation with Ohio State University
became central to my presentation of self, and I usually introduced myself as a
“Researcher from Ohio State University.” Props that lent credibility to this front (see
Goffman 1959) included recruitment and consent forms printed on official university
stationery, a federal Confidentiality Certificate, a visitor’s badge, and a laptop computer.
Being affiliated with Ohio State University was advantageous. Many staff
members and inmates were devoted fans of the University’s football team, which was not
surprising considering that prisons are hyper-masculine environments (Sabo, Kupers, and
London 2001) and football is a hyper-masculine activity (Messner 2002). The team
played well while I was in the field, and several inmates said they participated in the
study because they loved the Buckeyes. Moreover, upon hearing where I was from a
staff member once proudly lifted up his shirt to show off a tattoo of Brutus Buckeye, the
University’s mascot.
Aside from football, some staff members were alumni, some inmates had hopes of
attending Ohio State in the future, and both staff and inmates knew people who went to
the University. Several respondents said they volunteered because they had a family
member or friend who attended Ohio State, and more than one said, “Anything for OSU.”
I found it interesting that prisoners who would never have an opportunity to set foot on
the campus seemed to appreciate and respect the University more than many of its own
students.
From Presentation of Self to Self Examination
Up to this point I have shared how I navigated the field, presented myself,
impacted the setting, resolved emergent challenges, and otherwise went about the day-to-day process of doing survey research in prison. My goal has been to offer a glimpse into
the situations and challenges I encountered. I now take a more reflective turn toward
examination of how doing survey research in prison affected me personally and the
implications of my reactions.
Toward Reflective Quantitative Research
Individuals who spend time in penal institutions typically have a range of strong
reactions to the prison environment. For instance, a journalist recently disclosed
experiencing utter sadness when observing incarcerated juveniles (see Hubner 2005:
257). Moreover, volunteers in an adult prison in Washington (see Gabriel 2005) and a
juvenile hall in Los Angeles County (see Salzman 2003) unexpectedly bonded with
inmates and chose to renew their initial assignments. The word “adrenaline” has been
used to describe the rush of being a new corrections officer (Conover 2000) and teacher
(Gordon 2000: xix) in prison, and inmates have expressed myriad reactions to being
locked up (see Hassine 1999; Martin and Sussman 2002; McCall 1994; Rideau and
Wikberg 1992; Santos 2003; Zehr 1996). Qualitative researchers have also reported
being moved by prison environments (see Davies 2000; Fisher-Giorlando 2003; Jacobs
1977; King 2000; Pryor 1996: 16).
Taken together these examples suggest that prisons profoundly affect those who
enter them. It is therefore peculiar that quantitative researchers have avoided writing
about their own experiences in correctional settings (Liebling 1999), especially when
considering that self-report surveys have been administered in prisons for several decades.
My review of the literature produced only one article that examined the effects of prison
environments on individuals who conducted quantitative research (see Liebling 1999).
Reflections on how doing research personally affected the researcher are routinely
found in qualitative publications (see Adler 1993; Davies 2000; Ferrell 1998; Fisher-Giorlando 2003; Gibson 1994; King 2000; Lyng 1998; Pryor 1996; Valentine 2007).
However, reflective sections are typically absent from quantitative works (Ryan and
Golden 2006). These omissions tacitly suggest that survey research and survey
researchers are objective and detached from emotion. This is unlikely, particularly for
those who study prisons (Liebling 1999). If quantitative researchers do in fact experience
emotions while doing research, why do they avoid sharing them? I propose that one
reason quantitative researchers do not write more reflectively is because they believe this
is what qualitative researchers do.
Researchers need to transcend qualitative/quantitative divisions when
conceptualizing their methods. Silverman (1998: 80) correctly pointed out that “…most
dichotomies or polarities in social science are highly dangerous. At best, they are
pedagogic devices for students to obtain a first grip on a difficult field: they help us to
learn the jargon. At worst, they are excuses for not thinking.” To the extent reflective
work is considered the exclusive domain of qualitative researchers, simplistic
quantitative/qualitative dichotomies will continue to be reinforced and invaluable insights
will go unshared.
Entering a correctional facility is a sensory experience (Liebling 1999). Although
quantitative researchers have not given voice to this feature of the research process,
recognizing experiential and contextual dynamics contributes to better science in two
ways. First, knowledge gained from attending to these aspects informs analyses of
quantitative data (see Jenkins 1995; Liebling 1999). Second, reflective pieces can help
other scholars anticipate obstacles they will likely encounter in their own research. In the
paragraphs below I present a sample of my reflections and draw implications for other
researchers.
Interviewing in Total Institutions
“The prisons we inherit are settings of pain” (Johnson 2002: 60) because
incarceration deprives inmates of privacy, agency, intimate relations, and feelings of
safety (Sykes 1958). Moreover, inmates’ daily lives are dictated by social controls that
ultimately foster their subservience and estrangement from broader society (Goffman
1961). The following excerpts from my research notes affirm that deprivation and pain
were acutely experienced by many of the prisoners I interviewed:
•  Respondents spoke of difficulties stemming from being surrounded by criminals,
being disrespected by staff, not having any privacy, boredom, being away from
family, losing partners and homes, and having loved ones die while in prison.
Irwin (1985) noted it is hard to maintain a decent appearance while in jail.
Several respondents I spoke with looked unkempt and disheveled, suggesting
similar difficulties exist in prison. I also saw countless sores, rashes, and other
skin conditions, and I interviewed a few inmates with marks resembling cutting
scars on their arms.
•  Goffman (1961) depicted total institutions as places where inmates are openly
mocked by staff and talked about like they are not present while they are
physically in the setting. I observed both of these dynamics often. I also
witnessed strip-searches of inmates on a handful of occasions. Strip-searching
prisoners in front of a university researcher reveals the salience and
shamelessness of the objectification of inmates in the prison environment.
•  Some inmates seemed to be punished disproportionately by imprisonment when
compared to others I interviewed. For instance, a respondent from another state
happened to get arrested in Ohio on a drug charge. His impoverished family lived
in his home state hundreds of miles away from where he was serving his time. I
was therefore his only visitor during his incarceration. The respondent
described being lonely in prison and asked me to come back again in the future.
In many cases affliction was apparent simply from looking at inmates. My own
observations of distress made interviewing challenging at times. Seeing the
objectification of inmates in prison and the additional problems posed by incarceration
made me feel powerless. Additional reactions I had included sorrow, chagrin, and anger:
•  Respondents frequently revealed painful backgrounds containing addictions,
overdoses, victimization, stigmatization, unemployment, relationship and family
problems, illiteracy, and poverty. I often wondered how and if they would
overcome the obstacles they faced. I concluded many would not. These
realizations made me sad.
•  Prisoners sometimes say offensive things during interviews (Davies 2000). I
spoke with inmates who made sexist comments and were self-proclaimed racists.
I also encountered prisoners and staff members who made homophobic jokes.
Some respondents committed acts I personally detest, such as domestic violence
and sex offenses. Moreover, staff members were also offensive at times,
including an employee who once bragged to all within earshot about shooting and
killing a neighbor’s cat. Seeing prideful expressions of these ideologies and
behaviors by individuals who had helped me was both disappointing and
awkward.
•  Fleisher (1998) became outraged at the criminal justice system when seeing how
it negatively affected the gang members he studied. I was angered by
observations of how incarceration isolated and disrupted lives, stigmatized
offenders, and often presented new challenges to people who already faced
insurmountable problems. I was also frustrated at times because some
respondents should not have been sent to prison in my opinion. For instance, I
interviewed individuals who were locked up for what I considered to be minor
drug offending. Given the potential harmful effects of incarceration (see Elsner
2006), the average annual cost of over $20,000 to incarcerate prisoners in the
institutions I visited, and the increasingly high recidivism rates of drug offenders
(Hughes and Wilson 2002), I often scoffed at the wisdom of using prison to
sanction addiction.
Doing survey research in prison clearly exposes one to bothersome conditions.
However, despite the negative ambience of correctional settings there were also
auspicious circumstances. Aside from lighthearted moments that spontaneously emerged
through social interactions, the positive situations I encountered typically pertained to
rehabilitative programs:
•  Some respondents sought to make changes in their lives and were pleased to have
access to parenting classes, substance abuse treatment, and GED programs. One
respondent completed his GED while incarcerated and then became a tutor for
other prisoners. He exuded pride and planned to enter college upon his release.
Emphases on high recidivism rates and other failures of the prison system
typically overshadow the success stories (Johnson 2002; Maruna 2001). A sizable
minority of the inmates I interviewed told me their lives were out of control and
coming to prison had been good for them. These revelations surprised me.
•  A few of the prisons had dog-training programs. Cell dogs have a pacifying effect
on prison environments and immediately attract attention in any room they enter.
One day an inmate stopped by the interviewing room to introduce me to his cell
dog. It was hot and humid throughout the institution, and the dog sullenly resisted
leaving the comfort of the air-conditioned office when it was time for him to go.
His inmate handler, a correctional officer, and I bonded for an empathic moment
that revealed a shared humanity typically obscured by blue and gray uniforms.
The word “prison” often conjures up images of overt oppression and monotony. My
observations do not refute these connotations. However, my year in the field showed me
that prison environments are more complex. For instance, I found that in many cases the
most pressing deprivations and pains were veiled and less visible to outsiders upon first
glance. Moreover, while conventional wisdom teaches that inmates resent being locked
up I found that a minority of the prisoners I interviewed spoke positively of their
incarcerations.
Methodological Implications
Four methodological implications can be drawn from my experiences. First,
those who do research in prison need to consistently evaluate how they present
themselves in the setting. I incorporated objective and neutral fronts into my presentation of
self in order to avoid being assigned a position within inmate and staff us/them
dichotomies. I made deliberate efforts to sustain and affirm that I was not of the
prison or the prisoners.
Moreover, toward this end I often engaged in emotion management (see
Hochschild 1983). For instance, there were times when I was frustrated by prison
policies, angered by the actions of corrections officers, annoyed by inmates, and
sympathetic to those who were incarcerated. I also met inmates and staff members with
whom I likely could have been friends had we met under different circumstances.
Regardless of how I felt I kept my opinions and emotions to myself to ensure that neither
prisoners nor staff had reason to associate me with “them.” I also chose to contain my
reactions when experiencing negative emotions and encountering language and behavior
I found offensive, though I would have reported egregious violations of regulations by
inmates or staff and incidents that were not protected by my protocol. Fortunately, I
never had to contend with these issues.
A second implication pertains to guarding against selective perceptions. When
spending consecutive days in the field it often seemed like the main people I spoke to
each week were the prisoners and staff members I encountered while interviewing.
Researchers may become susceptible to prison tunnel vision when their prison-related
interactions rival or exceed their interactions with free society in frequency, duration, or
intensity. Accordingly, researchers must critically examine the perceptions they take
away from prison and maintain broader perspectives.
For instance, I previously referenced my chagrin when inmates and staff
expressed offensive sentiments. However, it is important to remember that people who
do not live or work in prison often hold similar beliefs. Accordingly, asserting that
people in prison are sexist, racist, and homophobic without also acknowledging the
prevalence of these ideologies in mainstream society would be skewed. I also made
reference to a subset of respondents who spoke of making improvements in their lives
while incarcerated. However, this does not necessarily mean that these inmates wanted
to be in prison. The extent to which some inmates expressed being positively affected by
imprisonment likely reflects how uncomfortable their lives were prior to prison and the
discomforts some people experience in a stratified society rather than the comforts and
desirability of prison life (see Ross and Richards 2002 for a thorough review of the
discomforts of prison).
Researchers should also critically assess perceptions they take into prison. For
instance, I spoke of my surprise upon observing prison success stories, which indicates
that I mainly expected to find inmate resentment and evidence of failed prison policies.
These presumptions were likely formed through living in a society that is becoming
increasingly critical of its prisons, my exposure to media images that sensationalize pain,
and my readings of academic works geared toward identifying and fixing problems in the
justice system. Accordingly, researchers need to place their observations and interactions
into proper context to avoid unfair or incomplete generalizations and erroneous
conclusions.
The third implication pertains to research ethics. As outlined in The Belmont
Report, prisoners must be capable of making informed decisions that are free from
“undue inducements” when they are recruited as research subjects (Department of Health,
Education, and Welfare 1978; Kiefer 2004; Martin 2000; Overholser 1987). I referred to
a respondent from another state who had not had any visitors and who asked me to come back.
Of all the interviews I conducted his affected me most. Aside from feeling sympathetic, I
later wondered whether loneliness constitutes an undue inducement to participate in a
prisoner study. I am not sure, and I continue to reflect on this conundrum. I believe
others who do research with prisoners must also carefully weigh this concern. In the
interim I take solace in the fact that I treated the respondent with respect and temporarily
relieved his isolation.
A fourth implication involves power and objectification. My reactions and
observations reflect my privileged positions as researcher and temporary guest in the
prison environment. They may also hint at voyeurism. Jacobs (1977) questioned
whether sidestepping inmates’ pain to focus on his research goals was voyeuristic and
pondered whether prison research should be done. I believe it should.
Recent increases in the prison population have been unprecedented (Austin and
Irwin 2001; Elsner 2006) and recidivism rates have been rising (Hughes and Wilson
2002). Moreover, though prisons are fascinating places (King 2000), most people have
misconceptions of what they are like (King 2000; Martin 2000). For instance,
corrections officers and prisoners are often negatively stereotyped, yet they were mostly
accommodating and helpful toward me (also see Liebling 1999). Research on prisons
and prisoners should therefore be conducted to correct misconceptions and encourage
wider dialogue on prison-related topics.
However, researchers must constantly and critically evaluate their motivations for
studying prisoners. There are an infinite number of potential research topics one can
pursue. I chose to interview prisoners, and the experience was captivating. Charges of
voyeurism are therefore difficult to deny. I instead propose conceptualizing voyeurism as
a continuum featuring the ideal types of exploitative voyeurism at one end and
sympathetic voyeurism at the other. Researchers should be reflective and regularly
determine where they fit on this voyeurism continuum. Those with exploitative leanings
should consider pulling out of the field or revising their agendas.
Concerns about recent shifts in corrections and the fates of people in prison drove
my initial participation in this project. Through ongoing reflections on my involvement
and discussions with colleagues I consistently reaffirmed that my motivations were
mostly sympathetic rather than exploitative. I also determined the project had more
positive than negative implications for prisons and inmates, and I would have terminated
my involvement had I concluded otherwise. Ideally, my research will contribute to the
betterment of offenders’ lives and the formulation of policies that reduce crime and
recidivism. Regardless, I listened and treated inmates and staff with respect in an
environment where dignity can be hard to come by.
Concluding Thoughts
Survey researchers are painstakingly thorough when describing their variables
and analyses, yet the process of collecting survey data in prisons is typically shrouded in
mystery. Moreover, descriptions of research settings are also conspicuously absent from
prison studies. The conventional practice of only focusing on what is done with data
after they have been collected is problematic because it sends the message that research
settings and the data collection process are not worthy of serious consideration.
However, research settings and data collection are crucial for at least two reasons.
First, insights gained from embracing these dynamics can improve quantitative data
analyses (see Jenkins 1995; Liebling 1999). Second, written accounts detailing research
settings and the data collection process sensitize other scholars to challenges they will
potentially face in their own research. The lack of attention to all that happens before
data are analyzed is a fundamental limitation of prior research.
Accordingly, this chapter extends the literature by examining un-quantifiable
features of doing survey research in prisons. I have described day-to-day scenarios, and I
have shared reflections. I have also outlined methodological implications. Although my
interview data were ultimately quantified and analyzed, the data collection process was
not taken for granted and the lives behind the numbers were not ignored.
An unanticipated challenge of writing this chapter has been sharing potentially
unflattering observations about the prisons, inmates, and staff members who
accommodated our project and me. I am reminded of King’s (2000) suggestion that
researchers be committed to learning about prisoners rather than advocating on behalf of
inmates and employees (also see Schubert et al. 2005: 639). I have attempted to remain
objective and fair, and I hope my efforts to eliminate sensationalism in the presentation of
my observations, reactions, and opinions have been successful.
CHAPTER 7
THE SAMPLE
This chapter provides a description of the sample. The following sections present
information about respondents’ backgrounds and life circumstances at their times of
arrest. As a group, prisoners in the sample featured substantial involvement with
substance use, multiple indicators of social disadvantage, and diverse backgrounds and
criminal histories.
The sampling frame comprised level one and level two male offenders between
the ages of 18 and 32. Approximately 73% of Ohio’s prisoners were level one or level
two inmates while we were in the field (ODRC 2006a). Accordingly, the sample closely
resembled Ohio’s general prison population on many dimensions.
For instance, the two most common committing offenses for respondents were the
same as those found in the Ohio Department of Rehabilitation and Correction’s 2005 Intake Study (see
Table 7.1). Moreover, the next three most common committing offenses in the Intake
Study were also among the most recurrent in our sample. The ODRC annually conducts
the Intake Study to provide an overview of inmates entering the prison system. It is
therefore an ideal basis for comparison given that we restricted the sample to recently
admitted prisoners.
Committing Offense                        Percentage of All     Percentage of All
                                          Committing Offenses   Committing Offenses
                                          for Sample            for ODRC Intake Sample
                                                                (five most common offenses)
Drug Offenses (Total)                        31.8
  Possession                                 17.3                  18.4
  Trafficking                                11.8                  10.5
  Conveyance                                  1.8                   -
  Chemicals for Manufacture of Drugs           .9                   -
Weapons Charges (Total)                      15.4
  Concealed Carry                            10.9                   -
  Weapons under disability                    2.7                   -
  Possession of a Firearm                     1.8                   -
Technical Violations                          9.1                   -
Violent Offenses (Total)                     11.7
  Robbery                                     2.7                   -
  Domestic Violence                           2.7                   -
  Vehicular Homicide                          1.8                   -
  Aggravated Assault                          3.6                   -
  Simple Assault                               .9                   -
Property Offenses (Total)                    28.0
  Breaking and Entering                       4.5                   -
  Burglary                                    7.3                   5.4
  Receiving Stolen Property                   4.5                   4.7
  Larceny/Theft                               9.9                   7.2
  Motor Vehicle Theft                          .9                   -
  Vandalism                                    .9                   -

Table 7.1 Most Common Committing Offenses: Respondents (N = 110) and Ohio
Department of Rehabilitation and Correction Intake Study Sample (ODRC 2006b).
Drug and property offenders were the most commonly represented inmates in the
sample. Property offenders feature the highest recidivism rates relative to other ex-prisoners (Langan and Levin 2002). Moreover, the rates of re-arrest for property and
drug offenders went up substantially between 1983 and 1994, and the percentage of
inmates released from prison who were drug and property offenders also increased
(Hughes and Wilson 2002). Accordingly, the types of offenders most affected by recent
incarceration and recidivism trends were well represented in the sample.
Respondents featured a range of committing offenses. Table 7.1 presents most of
the offenses that brought the inmates we interviewed into prison. A few respondents
were locked up for driving under the influence, sexual offending, abduction, and child
endangerment. However, these offenses are not listed in the table because they were
committed infrequently.
Respondent Characteristics and Backgrounds
It was previously noted that our request to interview female prisoners was
declined (see Chapter 5). The sample was therefore restricted to male inmates. Around
92% of Ohio’s prisoners were men when we were in the field (ODRC 2006a).
Moreover, it was noted that we set out to interview respondents between the ages
of 18 and 32 to increase variation in the sample and facilitate future recidivism research
(see Chapter 5). The average age of respondents was approximately 26 years old, which
was slightly younger than the mean age of 32 found in the ODRC’s 2005 Intake Study
(ODRC 2006b).
Purposive sampling techniques were not employed to achieve racial parity among
respondents. However, the racial breakdown of the sample closely paralleled the
racial composition of Ohio’s general prison population (see Table 7.2). The ODRC did
not collect information about Latinos, so it is possible that the ODRC categorized
Latinos as either white or African American. The fact that the sample was
self-weighting suggests that recruitment and interviewing methods did not produce
systematic racial bias.
Race                  Percentage of Sample    Percentage of ODRC
                                              Inmate Population
White                        51.8                   51.24
African American             42.7                   47.84
Latino                        1.8                     -
Other                         3.6                    0.92

Table 7.2 Respondents’ Racial Background: Respondents (N = 110) and Ohio
Department of Rehabilitation and Correction prison population (ODRC 2006a).
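The comparison above is made descriptively. Purely as an illustration of how the self-weighting claim could be checked formally, and not as part of the dissertation’s analyses, a one-sample binomial test of the sample’s white share against the ODRC proportion might look like this (the count is back-calculated from the percentages in Table 7.2):

    from scipy.stats import binomtest

    n = 110
    observed_white = round(0.518 * n)  # roughly 57 white respondents (Table 7.2)

    # Compare the sample's white share with the ODRC population proportion.
    result = binomtest(observed_white, n, p=0.5124)

    # A large p-value indicates no detectable departure from the ODRC racial
    # composition, which is consistent with a self-weighting sample.
    print(result.pvalue)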
The family environments respondents grew up in break down roughly into three
categories (see Table 7.3). Both biological parents raised about one third of the prisoners
we spoke with, while single mothers reared another third. A variety of arrangements
mostly involving other family members made up the final category.
Who raised you during most of your childhood?    Percentage of Sample
Both Parents                                            32.7
Single Mother                                           33.6
Mother and Step Father                                  10.0
Father and Step Mother                                   1.8
Grandparents                                            10.9
Other Relatives                                          6.4
Other Arrangements                                       4.5

Table 7.3 Respondents’ Childhood Family Environment (N = 110).
Respondents who grew up in single-mother homes reported earlier ages at first
arrest than those in the other two categories. For instance, the mean age at first arrest for
those from single-mother households was 14 years old, while respondents raised by both
parents or in other circumstances were first arrested at mean ages of 17.42 and 15.8
years old, respectively. Respondents were also asked if the person(s) who raised them
had ever served time in jail or prison. Approximately 72% said no. Of those who said
yes, nearly 12% reported mothers who had served time, 14.5% reported fathers who had
served time, and 1.8% reported that both parents had served time.
Social disadvantage is readily apparent when examining respondents’ educational
backgrounds. People in prison typically have lower levels of education than individuals
in free society (Western, Schiraldi, and Ziedenberg 2003). Moreover, our respondents
reported lower levels of education than the inmates in the 2005 Intake Sample (see Table
7.4). It is possible our respondents’ relatively younger ages accounted for this slight
disparity in educational attainment.
Highest Level of Education Completed    Percentage of Sample    Percentage of ODRC
                                                                Intake Sample
9th or less                                    11.8                   7.0
10th-11th                                      38.2                  36.5
H.S. Graduate/GED                              38.2                  39.5
Some College/College Graduate                    -                   16.9
Some College                                   10.9                    -
College Graduate                                0.0                    -
Post Graduate                                   0.9                    -

Table 7.4 Respondents’ Level of Education (N = 110) and Ohio Department of
Rehabilitation and Correction Intake Study Sample (ODRC 2006b).
As shown in Table 7.4, one respondent had a post-graduate degree. He was also
the only prisoner in the sample who reported involvement in white-collar crime. His
offenses included money laundering, theft in office, and record tampering.
This case notwithstanding, the educational backgrounds and committing offenses found
in the sample underscore the fact that prisons primarily house street offenders and the
disadvantaged (Reiman 2004).
Many respondents who did not have a high school diploma or GED had acquired
vocational training. For instance, 41.8% of those with less than a high school degree had
received some form of job training. Included in this group were several respondents who
had participated in vocational programming while previously incarcerated.
This still leaves a significant portion of the sample with neither formal nor
vocational education. Moreover, 23.7% of respondents said they knew how to use a
computer either “not well” (16.4%) or “not well at all” (7.3%). The future employment
prospects for these respondents in a “credential society” (see Collins 1979) that is
becoming increasingly computer-oriented appear dismal, particularly when considering
additional challenges encountered by job seekers with felony records (see Pager 2003).
Estimated Annual Legal Income    Percentage of Sample
$0                                      36.4
$1-9,999                                17.2
$10,000-14,999                          10.0
$15,000-24,999                          17.3
$25,000-34,999                           6.4
$35,000-49,999                           8.2
$50,000 and above                        4.5

Table 7.5 Respondents’ Annual Legal Income (N = 110).
Table 7.5 reports the estimated annual legal income for offenders in the sample.
Respondents’ abject earnings likely stemmed from their disadvantaged structural
positions and deficiencies in cultural capital. Just over 36% of respondents reported no
legal annual income, while nearly half of the others hovered below or slightly above the
poverty threshold of $10,294 for a family of one (U.S. Census Bureau 2006). However,
rather than comprising families of one, many respondents were tied to others and trying to
support dependents (see Tables 7.6, 7.7, and 7.8).
Marital Status    Percentage of Sample    Percentage of ODRC
                                          Intake Sample
Married                  18.2                   32.4
Not Married              81.8                   67.0

Table 7.6 Respondents’ Marital Status at Time of Arrest. Respondents (N = 110) and
Ohio Department of Rehabilitation and Correction Intake Study Sample (ODRC 2006b).
Parental Status       Percentage of Sample
Has Kids                     66.4
Does Not Have Kids           33.6

Table 7.7 Respondents’ Parental Status at Time of Arrest (N = 110).
Number of Dependents at Time of Arrest    Percentage of Sample
0                                                32.7
1                                                20.0
2                                                15.5
3                                                13.6
4 or more                                        18.1

Table 7.8 Respondents’ Number of Dependents at Time of Arrest (N = 110).
Approximately two thirds of the inmates in the sample were fathers. However,
only 23.3% of these fathers lived with their kids in the month leading up to arrest.
Moreover, half of the married respondents did not live with their wives when they were
arrested. Nonetheless, a majority of respondents were supporting at least one other
person at their time of arrest, suggesting that several family members were “left behind”
(see Travis et al. 2006). Comments made while answering questions about dependents
indicated that romantic partners, children, parents, and siblings were the individuals
respondents most frequently supported.
Self-Reported Criminal Histories of Respondents
The sample featured offenders with diverse criminal histories. Respondents’ ages
at first arrest peaked in the mid-teenage years and then steadily declined (see Table 7.9),
which is consistent with the bell-shaped age-crime distribution routinely found in
criminological research (see Hirschi and Gottfredson 1983; Moffitt 1997). Most
respondents reported multiple prior arrests (see Table 7.10).
Age at First Arrest    Percentage of Sample
12 or under                   24.5
13-16                         32.8
17-19                         23.6
20-23                         13.5
24 or older                    3.6
Never been arrested            1.8

Table 7.9 Respondents’ Age at First Arrest (N = 110).
Number of Lifetime Arrests    Percentage of Sample
1                                     5.5
2-3                                  17.3
4-6                                  19.1
7-10                                 13.6
11-15                                10.0
16-25                                13.6
More than 25                         19.1
Never                                 1.8

Table 7.10 Respondents’ Number of Lifetime Arrests (N = 110).
A few prisoners said they had never been arrested. When probed further, these
respondents explained that they did not believe they had been arrested because they turned
themselves in. Taken together, the findings presented in Tables 7.9 and 7.10 suggest
wide variation and a blend of novice and experienced offenders within the sample.
One of the survey questions focused on the number of terms respondents had ever
served in jail. This item was designed to measure actual jail sentences rather than
temporary detentions, though it is possible some respondents failed to make this
distinction. Regardless, Table 7.11 indicates that most of the inmates in the sample had
seen the inside of a jail on multiple occasions before eventually coming to prison.
Number of Jail Terms Served    Percentage of Sample
0                                     15.5
1                                     16.4
2-3                                   29.1
4-6                                   22.7
7-10                                   8.2
11+                                    8.1

Table 7.11 Respondents’ Number of Jail Terms Served (N = 110).
Previous stints in prison were less common (see Table 7.12). For instance,
approximately two thirds of respondents were serving their first prison terms when they
were interviewed. Accordingly, the respondents in the sample provide a stark contrast to
the more serious offenders examined in prior life-events calendar studies (see Horney,
Osgood, and Marshall 1995; Roberts et al. 2005).
Number of Prison Terms Served    Percentage of Sample
1                                       66.3
2                                       18.2
3                                        8.2
4-6                                      7.3

Table 7.12 Respondents’ Number of Prison Terms Served (N = 110).
The Calendar Period: Examining Life Circumstances
This section provides insights into respondents’ life circumstances during the
calendar period. Table 7.13 shows the percentage of the sample that had experienced
various life events at least once during the eighteen months leading up to their arrest and
incarceration. Findings demonstrate that substantial portions of the sample faced
situations conducive to instability and crime.
Life Events Experienced During 18-Month        Percentage of Sample
Period Leading Up to Arrest                      Yes        No
Incarcerated                                     17.3      82.7
On probation or parole                           45.5      54.5
Arrested                                         53.6      46.4
Committed a violent crime                        16.4      83.6
Committed a property crime                       25.5      74.5
Dealt drugs                                      64.5      35.5
Was the victim of a property crime               29.1      70.9
Was the victim of a violent crime                20.0      80.0
Changed Residences                               32.7      67.3

Table 7.13 Respondents’ Life Events During the Calendar Period (N = 110).
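Prevalence figures such as those in Table 7.13 come from collapsing month-by-month calendar entries into a single indicator per respondent. The sketch below assumes each life-events calendar domain is stored as eighteen 0/1 monthly flags; the structure and names are illustrative rather than the project’s actual data format.

    # Each domain holds 18 monthly 0/1 flags (month 1 = earliest month of the
    # reference period, month 18 = the month of arrest). Illustrative data.
    calendar = {
        "dealt_drugs":  [0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1],
        "on_probation": [1] * 18,
    }

    def ever(months):
        # Occurred at least once during the reference period (Table 7.13).
        return any(months)

    def entire_period(months):
        # Occurred in every month of the reference period.
        return all(months)

    def months_active(months):
        # Number of months the event occurred (basis for Table 7.14).
        return sum(months)

    print(ever(calendar["dealt_drugs"]))            # True
    print(entire_period(calendar["on_probation"]))  # True
    print(months_active(calendar["dealt_drugs"]))   # 8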
The figures presented in Table 7.13 are interesting for at least two reasons. First,
a substantial portion of the sample reported being the victim of a crime at least once
during the calendar period. Victimizations resulting from involvement in illegal activities
were frequently described, including having drugs or drug money stolen and getting
jumped by rivals. Accordingly, data provided by these offenders suggest a linkage
between crime victimization and having a criminal lifestyle (see Zhang, Welte, and
Wieczorek 2001).
Second, 53.6% of the sample reported additional arrests besides the one that
ultimately led to prison, while 45.5% had been on either probation or parole during the
calendar period, including 10% of the sample who had been on probation or parole for the
entire eighteen months. These data show that many respondents were no strangers to the
criminal justice system when they were arrested and ultimately sent to prison. Though
many were serving their first prison terms, most were not first-time offenders.
Moreover, several respondents reported active involvement in offending during
the calendar period. The majority of those who committed property or violent crimes did
so in only one of the eighteen months. For instance, only 2.7% of the sample reported
property offending and just 1.8% reported violent crimes throughout the entire calendar
period. These habitual property and violent offenders were burglars and robbers,
respectively. Consistent with patterns found by other researchers (see Miethe, McCorkle,
and Listwan 2006: 143), only a small subset of respondents specialized in committing
burglary or robbery.
Involvement in drug dealing was much more common than property or violent
crime. Nearly two thirds of respondents said they had dealt drugs during the calendar
period (see Table 7.14). This is interesting given that only 11.8% of the sample was
imprisoned for drug distribution. When asked a hypothetical question about the
likelihood of being apprehended for selling drugs, nearly half (44.6%) of the sample
indicated that getting caught would be either unlikely (28.2%) or very unlikely (16.4%). If
the information provided in this section is any indication, it appears they may have been
correct.
Number of Months Involved in        Percentage
Drug Dealing During 18 Month        of Sample
Period Leading Up to Arrest
Never dealt                            35.5
Dealt 1-5 months                       12.6
Dealt 6-10 months                      11.7
Dealt 11-15 months                      8.1
Dealt 17-18 months                     31.8

Table 7.14 Respondents' Drug Dealing During the Calendar Period (N = 110).
Forms of involvement in drug dealing broke down into three clear categories.
Just over a third of the sample did not engage in dealing, while another third did so
sporadically. The last group dealt drugs throughout the calendar period. Those who
dealt drugs intermittently often reported doing so as a response to financial hardship, as
did respondents who dealt steadily during the calendar period. These accounts were not
implausible given the life circumstances many offenders faced during the month leading
up to their arrests.
Month Eighteen: A Snapshot of Life Before Coming to Prison
Many respondents were socially disadvantaged during month eighteen. Fifty-six
percent of the inmates studied in the ODRC’s 2005 Intake Study were unemployed prior
to coming into prison (ODRC 2006b). Approximately 40% of our respondents were not
working when they were arrested (see Table 7.15). Compared to the inmates examined
by the ODRC, our respondents were more likely to be employed. However, estimated
annual incomes for those in the sample (previously presented in Table 7.5) suggest that
many of the respondents who did have jobs were members of the working poor.
Life Events Experienced                Percentage of Sample
During Month Leading Up to Arrest        Yes        No
Unemployed                               39.8       60.2
Lived in public housing                   7.3       92.7
Governmental income (including
  food stamps, social security,
  and welfare)                           11.8       88.2
Was a gang member                         3.6       96.4
Gang in neighborhood                     20.0       80.0
On probation/parole                      38.2       61.8
Incarcerated                              4.5       95.5
In treatment                              4.5       95.5

Table 7.15 Month Eighteen (N = 110).
Nearly half of those who were employed during the month they were arrested
worked in jobs that could be halted by winter weather, including roofing
(20%), construction (21.5%), and agriculture/landscaping (6.2%). Accordingly, these
respondents frequently experienced the types of short-term changes in life circumstances
that life-events calendar surveys were designed to capture.
The employment and income statistics presented in this chapter, along with
respondents’ passing comments during interviews, suggest that inmates often struggled
financially. Several respondents compensated by turning to crime. Table 7.16 reveals
that just over two-thirds of the sample received money from illegal sources during the
month they were arrested. Moreover, most of those who reported illegal income (84%)
earned it from drug dealing. Anecdotal comments made during interviews described both
lower and upper level involvement with drug distribution.
Estimated Illegal               Percentage
Income in Month 18 ($)          of Sample
None                               31.8
1-1,000                             8.2
1,001-4,999                        23.6
5,000-9,999                        10.9
10,000-14,999                      13.6
15,000-24,999                       3.6
25,000+                             5.6

Table 7.16 Estimated Illegal Income in Month 18 (N = 110).
It has been shown that respondents often contended with unemployment, little or
no legal income, poor education, and having to support dependents. Substance abuse
posed additional challenges. The majority of the inmates in the sample reported using
alcohol and other substances (see Table 7.17). Not surprisingly, respondents frequently
mentioned that their crimes were committed to finance their use of these substances.
                     Frequency of Use in Month 18 (Percentage of Sample)
Substances                       1-2        At Least        Every Day or
Used                  Never      Times      Once a Week     Almost Every Day
Alcohol                13.6      13.6          37.3              35.5
Marijuana              20.9       9.1          15.5              54.5
Heroin                 86.4       5.5           2.7               5.5
Powder Cocaine         69.1      12.7          12.7               5.5
Crack                  86.4       3.6           4.5               5.5
Speed                  87.3       4.5           4.5               3.6
Prescription           54.5       9.1          12.7              23.6

Table 7.17 Substances Used in Month 18 (N = 110).
Two observations related to respondents' substance use are instructive. First,
nearly a quarter of the sample reported using prescription drugs every day or almost
every day. Oxycontin, Xanax, Percocet, and pain pills were among the prescription drugs
most commonly used by these respondents.
Approximately 65% of those who reported using prescription drugs did so at a
doctor's suggestion. The remainder used them illegally for other reasons. Moreover,
several respondents who reported using prescription drugs at a doctor's suggestion
indicated that an initial prescription resulted in subsequent addiction.
Second, just over a third of the sample reported drinking alcohol every day or
almost every day in month eighteen. It is therefore not surprising that 19% of the
respondents said they spent time hanging out in bars daily. Respondents were also asked
about the average number of drinks they typically consumed when they drank (see Table
7.18). Nearly half estimated an average of at least seven drinks when they used alcohol,
including many who said they drank either a twelve-pack or a case of beer a day. Several
respondents described themselves as alcoholics. These data suggest their self-assessments were correct.
Average Number of              Percentage
Drinks Typically Consumed      of Sample
0                                 13.6
1-3                               20.0
4-6                               19.1
7-12                              20.0
13+                               27.3

Table 7.18 Average Number of Drinks Consumed (N = 110).
Many respondents said they engaged in crime to support their families. Others
offended to support drug habits. Unstable lives, substance abuse, and criminal activity
seemed to manifest themselves in a foreground, or present-tense, orientation.
For instance, a subset of 84 respondents was asked if they thought they would
ever end up coming to prison. Fifty-six (66.6%) said they never thought they would be
incarcerated, while only 28 (33.3%) believed they would. The observation that
financially motivated street offenders often focused on immediate circumstances rather
than long term consequences is consistent with findings from other research with similar
populations (see Jacobs and Wright 1999; Shover and Honaker 1992; Wright and Decker
1994).
Concluding Points
One of the strengths of this study is the sample. Respondents’ demographic
backgrounds generally matched the characteristics of Ohio’s prison population. The
sample was also representative of the offenders that have been disproportionately
affected by recent incarceration and recidivism trends.
Respondents were differentially embedded in lives of crime and substance use.
For instance, there were distinguishable groupings of inmates along dimensions such as
age at first arrest, number of prior arrests and incarcerations, types of crime committed,
and frequencies of offending and drug use. Respondents also exhibited variability in life circumstances.
However, there were four common themes indicative of social disadvantage or
instability. First, most respondents were poor. Second, most respondents were
attempting to support others. Third, many respondents were poorly educated. And
fourth, most respondents used drugs.
The inmates we spoke with frequently experienced short term changes in their
circumstances. Many adopted a foreground orientation as they responded to day-to-day
challenges such as supporting dependents or feeding drug addictions. The life-events
calendar method is particularly well suited for collecting data from individuals who lead
unstable lives (Engel, Keifer, and Zahm 2001; Zahm et al. 2001). Accordingly, these
respondents comprise an ideal sample for a study that assesses the quality of life-events
calendar data.
This chapter has drawn primarily from prisoners’ self-reports to present
information about respondents’ backgrounds and life circumstances leading up to arrest.
Some readers may therefore question the reliability and validity of these data.
Accordingly, the accuracy of respondents’ self-reported information is examined in the
next chapter.
CHAPTER 8
RESULTS: RELIABILITY AND VALIDITY
This chapter examines test-retest reliability and criterion validity for several self-reported measures. Test-retest reliability was assessed for all of the life-events calendar
questions that were administered in both test and retest interviews, and criterion validity
was evaluated for the self-report items that had corresponding measures in respondents’
official prison records. The hypotheses presented in chapter 4 were also examined.
Pearson and Spearman correlations and Cohen’s kappa coefficient of agreement were
used to test hypotheses and assess test-retest reliability and criterion validity. This
chapter therefore begins with a brief introduction to these techniques.
Next, three sections of findings are provided. First, reliability results for the
whole sample are presented for self-reported data on life events, substance use, contact
with the justice system, and criminal activity during the 18-month calendar period.
Second, validity findings for the whole sample are provided for self-reported data on
arrests during the 18-month calendar period and self-reported criminal history measures.
Third, results for Caucasians and African-Americans are examined to determine whether
reliability and validity differed by race. A brief discussion that summarizes main
findings concludes this chapter.
Measures of Agreement: Pearson’s r, Spearman’s rho, and Cohen’s k
Pearson product moment (r) and Spearman rank (rho) correlations are among the
most commonly used statistical options for examining association between two
quantitative variables (DeCoster and Claypool 2004; Siegel and Castellan 1988;
Tabachnick and Fidell 2007). Pearson's r assesses correlation between interval- and
ratio-scaled variables (DeCoster and Claypool 2004; Raymondo 217), while Spearman's
rho measures correlation between variables with ordinal scales (Siegel and Castellan
1988: 235). Pearson's r produces a more robust indicator of correlation than Spearman's
rho, making it the statistic of choice for many researchers even when their data are rank-ordered and technically better suited for Spearman's rho (DeCoster and Claypool 2004).
Moreover, Tabachnick and Fidell (2007: 7) note “in practice, we often treat variables as if
they are continuous when the underlying scale is thought to be continuous but the
measured scale actually is ordinal.” Given the conventional use of Pearson’s r to assess
correlation between ordinal measures, correlations between rank-ordered variables in this
dissertation were assessed using both r and rho, while correlations between continuous
variables were assessed using r.
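To make the distinction concrete, the following minimal sketch computes both coefficients for a single test-retest item. It is an illustration only: the response data are hypothetical rather than the study's, and it assumes Python with the SciPy library available.

    # Minimal sketch: Pearson's r and Spearman's rho for one test-retest item.
    # The responses below are hypothetical ordinal ratings, not study data.
    from scipy.stats import pearsonr, spearmanr

    test_responses = [0, 2, 1, 3, 3, 0, 1, 2, 3, 1]     # first interview
    retest_responses = [0, 2, 2, 3, 2, 0, 1, 2, 3, 0]   # second interview, same respondents

    r, r_p = pearsonr(test_responses, retest_responses)
    rho, rho_p = spearmanr(test_responses, retest_responses)
    print(f"r = {r:.3f} (p = {r_p:.3f}); rho = {rho:.3f} (p = {rho_p:.3f})")

Reporting both coefficients, as is done for the rank-ordered items in this chapter, lets a reader see how much the choice of statistic matters for any given measure.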
The bulk of the statistical analyses conducted in this dissertation examined
agreement between categorical variables. Unfortunately, Pearson’s r and Spearman’s rho
are inappropriate for this purpose (Sim and Wright 2005). Kappa (k) was proposed by
Cohen (1960) as a way to assess agreement between categorical measures. Accordingly,
kappa coefficients were employed to examine agreement between categorical variables in
this dissertation. The use of kappa for reliability and validity tests of categorical
measures is consistent with the designs of others who have conducted similar analyses on
related topics (see Knight et al. 1998; Roberts et al. 2005; Webster et al. 2006; Yacoubian
2001; Yacoubian 2003).
Kappa captures agreement between ratings made by two observers (Everitt 1998:
176; Sim and Wright 2005) while controlling for the possibility that the two observers
will produce the same ratings strictly by chance (Kraemer 1983; Siegel and Castellan
1988: 285). Everitt and Hay (1992) offer the following summary of the computation of
kappa:
If the observed proportion of agreement is Po and the chance agreement Pc then
the statistic kappa is defined as

    K = (Po – Pc) / (1 – Pc)

When there is complete agreement between the two observers, Po will take the
value 1, so that the maximum possible excess over chance agreement is 1 – Pc;
the observed excess over chance agreement is Po – Pc and kappa is simply the
ratio of the two differences (49).
Kappa results are typically presented with a 2 x 2 table that categorizes paired ratings
(Fleiss 1981: 213; see Figure 8.1). Kappa values can range from –1 to 1, with 0
representing chance agreement, negative values representing less agreement than would
be expected by chance, and positive values denoting stronger agreement than would be
expected by chance (Sim and Wright 2005). Landis and Koch’s (1977) criteria for
interpreting kappa coefficients are widely regarded as the benchmarks for assessing
kappa (see Figure 8.2). When applying Landis and Koch’s standards to the example
presented in Figure 8.1, one would conclude there is substantial agreement between the
two data sources.
                          Data Source #1
                          (no)     (yes)    Total
Data Source #2   (no)      21        6        27
                 (yes)     11       72        83
                 Total     32       78       110

Kappa = .61

Figure 8.1: Sample Presentation of Kappa Results.
Several tables organized similarly to the example provided in Figure 8.1 are
presented in this chapter. The paired ratings values contained in the cells of these tables
represent months from the eighteen-month calendar period. Each respondent potentially
contributed eighteen different observations to the totals for each table, or one observation
per month. Accordingly, the total number of observations in each table exceeds the
number of respondents in the sample. When applying this structure to the example in
Figure 8.1, one would conclude there were twenty-one months in which both data sources
indicated “no” for a respondent on the item in question, seventy-two months in which
both data sources indicated “yes,” eleven months when Data Source #2 indicated “yes”
but Data Source #1 indicated “no,” and six months when Data Source #2 indicated “no”
but Data Source #1 indicated “yes.”
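As an arithmetic check, the following minimal sketch (the helper function is my own illustration, not code from this project) reproduces the kappa value in Figure 8.1 from its four cell counts:

    # Cohen's kappa for a 2 x 2 table of paired ratings. Rows are Data Source #2
    # (no, yes) and columns are Data Source #1 (no, yes), as in Figure 8.1.
    def cohen_kappa(table):
        n = sum(sum(row) for row in table)
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        p_o = sum(table[i][i] for i in range(len(table))) / n            # observed agreement
        p_c = sum(r * c for r, c in zip(row_totals, col_totals)) / n**2  # chance agreement
        return (p_o - p_c) / (1 - p_c)

    figure_8_1 = [[21, 6],    # both said "no" in 21 months; #2 "no" but #1 "yes" in 6
                  [11, 72]]   # #2 "yes" but #1 "no" in 11 months; both "yes" in 72
    print(round(cohen_kappa(figure_8_1), 2))  # prints 0.61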
Kappa Value        Strength of Agreement
Below 0.0          Poor
.00-.20            Slight
.21-.40            Fair
.41-.60            Moderate
.61-.80            Substantial
.81-1.00           Almost Perfect

Figure 8.2: Benchmark Kappa Coefficient Interpretations (Landis and Koch 1977).
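For convenience, these benchmarks can be expressed as a small lookup helper. The sketch below simply restates Figure 8.2 in code; it is my own illustration rather than part of the study's analysis:

    # Landis and Koch's (1977) benchmarks from Figure 8.2 as a lookup helper.
    def agreement_label(kappa):
        if kappa < 0.00:
            return "Poor"
        if kappa <= 0.20:
            return "Slight"
        if kappa <= 0.40:
            return "Fair"
        if kappa <= 0.60:
            return "Moderate"
        if kappa <= 0.80:
            return "Substantial"
        return "Almost Perfect"

    print(agreement_label(0.61))  # "Substantial", matching the Figure 8.1 example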
Reliability
Test-retest reliability findings are now presented. Separate tests using the whole
sample, Caucasians, and African-Americans were conducted for most analyses. The
results for each of these groupings are noted in each table. However, this section’s focus
is on results for the whole sample. Targeted attention to racial comparisons is provided
in the last section of this chapter.
Life Events
A purported strength of incorporating life-events calendars into self-report
research is that they help establish temporal ordering (see Belli, Shay, and Stafford 2001;
Caspi et al. 1996; Freedman et al. 1988). Ideally, mapping out the various life
circumstances in respondents’ lives will establish context and reference points to aid
recall of other items. The reliability of life-events measures is therefore an important
consideration. Toward this end, the reliability of respondents’ self-reported life events
over the eighteen-month calendar period is examined in this section.
Respondents were asked if they had changed jobs or had moved during
each of the months of the calendar period. Kappa analyses found that the strength of
agreement between respondents’ test and retest interview responses for these questions
was slight (see Tables 8.1 and 8.2). The reliability of respondents’ recollections of the
months in which they moved or changed jobs was only slightly better than would be
expected by chance. As indicated in chapter 7, collectively respondents reported many
short-term changes in their life circumstances. Accordingly, one possible reason for
these findings is that respondents had difficulty remembering the exact month in which
finite events such as moves or job changes occurred. These findings support other
research that suggests respondents have difficulty remembering dates (see Henry et al.
1994).
Test-retest agreement was moderate when respondents were asked about their
school involvement over the calendar period (see Table 8.3). Moreover, test-retest
correlations for first and second interview responses about hours worked and job
commitment were significant and moderate for each of the months of the calendar period
(see Table 8.4). Accordingly, respondents did better with continuous and ordinal
measures than they did with dates.
Based on previous research (see Anglin, Hser, and Chou 1993), it was
hypothesized in chapter 4 that self-reported legal income would be more reliable than
self-reported illegal income. On the contrary, test-retest correlations were high for illegal
income across the calendar period but moderate for legal income (see Table 8.4).
Moreover, kappa results found substantial agreement in reports of illegal income in test
and retest interviews (see Table 8.5). One possible explanation for these findings is that
opportunities to earn illegal income were steady and familiar for many respondents, while
legal employment was more unstable and transitory.
Whole Sample.
                                       Retest Data
Job Changes                  No Job Changes    Job Changes
Test Data                    Reported          Reported       Total
No Job Changes Reported          1916              32          1948
Job Changes Reported               32               0            32
Total                            1948              32          1980

Kappa = -.02

Caucasians.
                                       Retest Data
Job Changes                  No Job Changes    Job Changes
Test Data                    Reported          Reported       Total
No Job Changes Reported           990              18          1008
Job Changes Reported               18               0            18
Total                            1008              18          1026

Kappa = -.02

African-Americans.
                                       Retest Data
Job Changes                  No Job Changes    Job Changes
Test Data                    Reported          Reported       Total
No Job Changes Reported           819              13           832
Job Changes Reported               14               0            14
Total                             833              13           846

Kappa = -.02

Table 8.1. Reports of any job changes for each month during the reference period.
Whole Sample.
                                    Retest Data
Residential Moves             No Moves      Moves
Test Data                     Reported      Reported     Total
No Moves Reported                1910           22        1932
Moves Reported                     45            3          48
Total                            1955           25        1980

Kappa = .07

Caucasians.
                                    Retest Data
Residential Moves             No Moves      Moves
Test Data                     Reported      Reported     Total
No Moves Reported                 989           11        1000
Moves Reported                     25            1          26
Total                            1014           12        1026

Kappa = .04

African-Americans.
                                    Retest Data
Residential Moves             No Moves      Moves
Test Data                     Reported      Reported     Total
No Moves Reported                 813           11         824
Moves Reported                     20            2          22
Total                             833           13         846

Kappa = .10

Table 8.2. Reports of any residential moves for each month during the reference period.
Whole Sample.
                                  Retest Data
School                       No School     School
Test Data                    Reported      Reported     Total
No School Reported              1678           84        1762
School Reported                   44           63         107
Total                           1722          147        1869

Kappa = .46

Caucasians.
                                  Retest Data
School                       No School     School
Test Data                    Reported      Reported     Total
No School Reported               918           12         930
School Reported                    9           29          38
Total                            927           41         968

Kappa = .72

African-Americans.
                                  Retest Data
School                       No School     School
Test Data                    Reported      Reported     Total
No School Reported               666           70         736
School Reported                   29           34          63
Total                            695          104         799

Kappa = .34

Table 8.3. Reports of any school involvement for each month during the reference period.
          Illegal Income      Legal Income       Hours Worked    Job Commitment
Month      r       rho         r       rho           r             r       rho
1         .720**  .759**      .488**  .430**       .529**        .346**  .342**
2         .723**  .761**      .523**  .462**       .531**        .484**  .481**
3         .692**  .720**      .521**  .471**       .571**        .403**  .406**
4         .700**  .732**      .527**  .485**       .569**        .413**  .420**
5         .708**  .749**      .505**  .470**       .530**        .448**  .443**
6         .704**  .741**      .537**  .514**       .635**        .437**  .431**
7         .688**  .719**      .598**  .575**       .618**        .513**  .531**
8         .626**  .645**      .533**  .509**       .545**        .498**  .511**
9         .656**  .676**      .515**  .483**       .557**        .432**  .447**
10        .633**  .654**      .518**  .476**       .567**        .507**  .518**
11        .690**  .728**      .472**  .429**       .538**        .472**  .481**
12        .699**  .744**      .500**  .451**       .583**        .480**  .488**
13        .684**  .726**      .410**  .335**       .428**        .455**  .460**
14        .658**  .690**      .486**  .432**       .538**        .500**  .492**
15        .657**  .686**      .382**  .376**       .475**        .496**  .487**
16        .641**  .669**      .483**  .430**       .464**        .505**  .519**
17        .535**  .626**      .504**  .458**       .501**        .507**  .535**
18        .633**  .659**      .502**  .486**       .570**        .566**  .574**

** Sig. at .01 (2 tailed); N = 110

Table 8.4. Test-Retest Correlations of Life Events Across the Eighteen-Month Calendar Period.
Overall, the reliability findings for life events are mixed. Respondents did much
better with attitudinal and continuous measures than they did with dates. For instance,
the reliability of respondents’ self-reports of hours worked, job commitment, and legal
and illegal income across the calendar period was moderate to high. The reliability of
respondents’ reports of illegal income is of particular note. Prior researchers found that
offenders’ self-reported legal income is more reliable than self-reported illegal income
(Anglin, Hser, and Chou 1993). The opposite was the case for the incarcerated offenders
in our sample.
Whole Sample.
                                          Retest Data
Illegal Income                   No Illegal Income   Illegal Income
Test Data                        Reported            Reported         Total
No Illegal Income Reported            645                 96            741
Illegal Income Reported               248                935           1183
Total                                 893               1031           1924

Kappa = .64

Caucasians.
                                          Retest Data
Illegal Income                   No Illegal Income   Illegal Income
Test Data                        Reported            Reported         Total
No Illegal Income Reported            398                 74            472
Illegal Income Reported               163                371            534
Total                                 561                445           1006

Kappa = .53

African-Americans.
                                          Retest Data
Illegal Income                   No Illegal Income   Illegal Income
Test Data                        Reported            Reported         Total
No Illegal Income Reported            212                 21            233
Illegal Income Reported                85                492            577
Total                                 297                513            810

Kappa = .71

Table 8.5. Reports of any illegal income for each month during the reference period.
Substance Use
Descriptive statistics (see Chapter 7) showed widespread alcohol and drug use by
the men in the sample. Previous research has found that the reliability of self-reported
substance use varies by frequency of use and type of drug used. For instance,
respondents in other studies have shown high reliability when reporting marijuana use
(Fendrich and Vaughn 1994; Golub et al. 2002), while reports of cocaine (Fendrich and
Vaughn 1994; Golub et al. 2002) and heroin (Golub et al. 2002; but see Day et al. 2004)
use have been found to be less reliable.
Consistent with prior research, this study’s test-retest correlations show moderate
to high reliability for respondents’ self-reports of marijuana use and less reliable reports
of cocaine use (see Table 8.6). However, contrary to Golub and colleagues’ (2002)
findings, self-reported heroin use showed moderate to high test-retest correlations that
paralleled reliability correlations for marijuana use.
        Alcohol        Marijuana      Crack Cocaine  Powder Cocaine Heroin         Speed          Prescription Drugs
Month   r      rho     r      rho     r      rho     r      rho     r      rho     r      rho     r      rho
1       .491*  .515*   .708*  .714*   .399*  .382*   .574*  .615*   .655*  .740*   .709*  .769*   .448*  .435*
2       .464*  .498*   .715*  .721*   .407*  .404*   .574*  .615*   .655*  .740*   .709*  .769*   .429*  .416*
3       .415*  .444*   .696*  .701*   .464*  .409*   .567*  .600*   .655*  .740*   .759*  .826*   .448*  .435*
4       .441*  .460*   .690*  .694*   .407*  .404*   .575*  .619*   .655*  .740*   .706*  .763*   .469*  .447*
5       .464*  .476*   .686*  .689*   .398*  .377*   .581*  .631*   .655*  .740*   .809*  .813*   .457*  .435*
6       .465*  .480*   .677*  .671*   .418*  .380*   .557*  .595*   .655*  .740*   .809*  .813*   .457*  .435*
7       .468*  .476*   .690*  .685*   .399*  .323*   .532*  .556*   .655*  .740*   .778*  .777*   .469*  .451*
8       .509*  .501*   .687*  .682*   .468*  .350*   .537*  .567*   .655*  .740*   .778*  .777*   .479*  .463*
9       .479*  .485*   .732*  .727*   .588*  .453*   .652*  .650*   .798*  .844*   .778*  .777*   .479*  .463*
10      .441*  .449*   .722*  .722*   .479*  .386*   .632*  .630*   .763*  .804*   .778*  .777*   .479*  .463*
11      .412*  .428*   .700*  .700*   .448*  .385*   .587*  .601*   .798*  .844*   .770*  .750*   .462*  .453*
12      .436*  .453*   .688*  .701*   .371*  .334*   .621*  .602*   .722*  .784*   .778*  .777*   .467*  .450*
13      .412*  .411*   .707*  .715*   .478*  .388*   .640*  .594*   .899*  .869*   .770*  .750*   .458*  .438*
14      .372*  .389*   .681*  .689*   .478*  .388*   .589*  .556*   .909*  .906*   .691*  .728*   .476*  .468*
15      .336*  .358*   .685*  .692*   .478*  .388*   .615*  .561*   .837*  .848*   .745*  .790*   .475*  .457*
16      .340*  .343*   .677*  .644*   .428*  .319*   .582*  .563*   .850*  .870*   .777*  .738*   .465*  .445*
17      .395*  .410*   .711*  .716*   .452*  .388*   .587*  .589*   .931*  .922*   .712*  .733*   .470*  .455*
18      .389*  .409*   .701*  .704*   .600*  .536*   .593*  .598*   .870*  .853*   .644*  .618*   .476*  .464*

* Sig. at .01 (2 tailed); N = 110

Table 8.6. Test-Retest Correlations of Frequency of Substance Use Across the Eighteen-Month Calendar Period.
Other researchers have found that items related to activities that happen more
infrequently show lower reliability than those that occur more often (Anglin, Hser, and
Chou 1993; Day et al. 2004; Engel, Keifer, and Zahm 2001) and that offenders have
trouble remembering sporadic drug use (Weis 1986: 11). In light of these findings some
might suggest that self-reports for marijuana use are more reliable than those for cocaine
use because respondents used marijuana more frequently than other drugs (see chapter 7).
However, this supposition is not supported by this study’s findings. Despite the fact that
heroin and speed were among the least used substances, self-reports of their use show
moderate to high test-retest reliability correlations across the calendar period (see Table
8.6). Moreover, test-retest reliability correlations were lower for alcohol and prescription
drug use even though alcohol and prescription drugs were some of the most frequently
used substances by inmates in the sample.
Respondents were asked if they had used alcohol, marijuana, crack cocaine,
powder cocaine, heroin, speed, and prescription drugs during each of the months of the
calendar period. Kappa coefficients measuring the strength of agreement between
respondents’ test and retest interview responses to these questions are presented in Tables
8.7-8.13. The level of test-retest agreement in self-reporting across the calendar period
was at least fair for every one of these drugs, which is a striking finding given the length
of the calendar period, the potential detrimental effects that substance use may have on
cognitive functioning, and that respondents frequently were users of more than one drug.
Consistent with the Pearson correlations previously presented in Table 8.6, less
frequently used substances including heroin and speed were reported with a higher level
of agreement than substances such as alcohol and prescription drugs that were used more
frequently. An exception to this pattern is crack cocaine, which was used less frequently
and also reported less reliably than most of the other substances.
In chapter 4 it was hypothesized that self-reports of marijuana use would be more
reliable than self-reports of other types of drug use. This study found that prison inmates’
self-reports of past marijuana use are moderate to high in reliability. Compared to crack
cocaine, alcohol, and prescription drug use, self-reports of marijuana use are much more
reliable. Moreover, they are slightly more reliable than self-reports of powder cocaine
use. However, self-reports of marijuana use were less reliable than self-reports of heroin
and speed use. Accordingly, findings lend partial support to the hypothesis that
marijuana use is reported more reliably than the use of other drugs.
Whole Sample.
                                  Retest Data
Alcohol                    No Alcohol     Alcohol Use
Test Data                  Use Reported   Reported       Total
No Alcohol Use Reported        129            182          311
Alcohol Use Reported           152           1517         1669
Total                          281           1699         1980

Kappa = .34

Caucasians.
                                  Retest Data
Alcohol                    No Alcohol     Alcohol Use
Test Data                  Use Reported   Reported       Total
No Alcohol Use Reported         72             81          153
Alcohol Use Reported            54            819          873
Total                          126            900         1026

Kappa = .44

African-Americans.
                                  Retest Data
Alcohol                    No Alcohol     Alcohol Use
Test Data                  Use Reported   Reported       Total
No Alcohol Use Reported         44             88          132
Alcohol Use Reported            96            618          714
Total                          140            706          846

Kappa = .19

Table 8.7. Reports of any alcohol use for each month during the reference period.
Whole Sample.
                                     Retest Data
Marijuana                     No Marijuana    Marijuana Use
Test Data                     Use Reported    Reported        Total
No Marijuana Use Reported         362             103           465
Marijuana Use Reported            207            1308          1515
Total                             569            1411          1980

Kappa = .60

Caucasians.
                                     Retest Data
Marijuana                     No Marijuana    Marijuana Use
Test Data                     Use Reported    Reported        Total
No Marijuana Use Reported         130              58           188
Marijuana Use Reported            147             691           838
Total                             277             749          1026

Kappa = .44

African-Americans.
                                     Retest Data
Marijuana                     No Marijuana    Marijuana Use
Test Data                     Use Reported    Reported        Total
No Marijuana Use Reported         232              45           277
Marijuana Use Reported             31             538           569
Total                             263             583           846

Kappa = .79

Table 8.8. Reports of any marijuana use for each month during the reference period.
Whole Sample.
                                  Retest Data
Crack Cocaine              No Crack Use    Crack Use
Test Data                  Reported        Reported     Total
No Crack Use Reported          1705            109       1814
Crack Use Reported               91             75        166
Total                          1796            184       1980

Kappa = .37

Caucasians.
                                  Retest Data
Crack Cocaine              No Crack Use    Crack Use
Test Data                  Reported        Reported     Total
No Crack Use Reported           849             39        888
Crack Use Reported               74             64        138
Total                           923            103       1026

Kappa = .47

African-Americans.
                                  Retest Data
Crack Cocaine              No Crack Use    Crack Use
Test Data                  Reported        Reported     Total
No Crack Use Reported           766             70        836
Crack Use Reported                0             10         10
Total                           766             80        846

Kappa = .21

Table 8.9. Reports of any crack cocaine use for each month during the reference period.
Whole Sample.
                                          Retest Data
Powder Cocaine                   No Powder Cocaine   Powder Cocaine
Test Data                        Use Reported        Use Reported     Total
No Powder Cocaine Use Reported        1228                240          1468
Powder Cocaine Use Reported            110                402           512
Total                                 1338                642          1980

Kappa = .57

Caucasians.
                                          Retest Data
Powder Cocaine                   No Powder Cocaine   Powder Cocaine
Test Data                        Use Reported        Use Reported     Total
No Powder Cocaine Use Reported         477                133           610
Powder Cocaine Use Reported            107                309           416
Total                                  584                442          1026

Kappa = .52

African-Americans.
                                          Retest Data
Powder Cocaine                   No Powder Cocaine   Powder Cocaine
Test Data                        Use Reported        Use Reported     Total
No Powder Cocaine Use Reported         700                107           807
Powder Cocaine Use Reported              0                 39            39
Total                                  700                146           846

Kappa = .38

Table 8.10. Reports of any powder cocaine use for each month during the reference period.
Whole Sample.
                                 Retest Data
Heroin                    No Heroin Use   Heroin Use
Test Data                 Reported        Reported     Total
No Heroin Use Reported        1753             11       1764
Heroin Use Reported             60            156        216
Total                         1813            167       1980

Kappa = .80

Caucasians.
                                 Retest Data
Heroin                    No Heroin Use   Heroin Use
Test Data                 Reported        Reported     Total
No Heroin Use Reported         817             11        828
Heroin Use Reported             60            138        198
Total                          877            149       1026

Kappa = .76

African-Americans.
                                 Retest Data
Heroin                    No Heroin Use   Heroin Use
Test Data                 Reported        Reported     Total
No Heroin Use Reported         846              0        846
Heroin Use Reported              0              0          0
Total                          846              0        846

Kappa = too few rating categories

Table 8.11. Reports of any heroin use for each month during the reference period.
Whole Sample.
                                Retest Data
Speed                    No Speed Use   Speed Use
Test Data                Reported       Reported     Total
No Speed Use Reported        1721            24       1745
Speed Use Reported             72           163        235
Total                        1793           187       1980

Kappa = .75

Caucasians.
                                Retest Data
Speed                    No Speed Use   Speed Use
Test Data                Reported       Reported     Total
No Speed Use Reported         821            21        842
Speed Use Reported             54           130        184
Total                         875           151       1026

Kappa = .73

African-Americans.
                                Retest Data
Speed                    No Speed Use   Speed Use
Test Data                Reported       Reported     Total
No Speed Use Reported         828             3        831
Speed Use Reported              0            15         15
Total                         828            18        846

Kappa = .91

Table 8.12. Reports of any speed use for each month during the reference period.
Whole Sample.
                                            Retest Data
Prescription Drugs                 No Prescription Drug   Prescription Drug
Test Data                          Use Reported           Use Reported       Total
No Prescription Drug Use Reported        940                    223           1163
Prescription Drug Use Reported           321                    496            817
Total                                   1261                    719           1980

Kappa = .42

Caucasians.
                                            Retest Data
Prescription Drugs                 No Prescription Drug   Prescription Drug
Test Data                          Use Reported           Use Reported       Total
No Prescription Drug Use Reported        328                    103            431
Prescription Drug Use Reported           184                    411            595
Total                                    512                    514           1026

Kappa = .44

African-Americans.
                                            Retest Data
Prescription Drugs                 No Prescription Drug   Prescription Drug
Test Data                          Use Reported           Use Reported       Total
No Prescription Drug Use Reported        582                    104            686
Prescription Drug Use Reported            96                     64            160
Total                                    678                    168            846

Kappa = .24

Table 8.13. Reports of any prescription drug use for each month during the reference period.
Justice System Involvement
A primary goal of this dissertation is to assess how reliable life-events calendar
data are. The survey instruments contained calendar questions for a number of items
related to correctional supervision and police involvement. Accordingly, this section
examines findings related to respondents’ self-reported interactions with the justice
system across the eighteen-month calendar period. Kappa coefficients measuring test-retest agreement of respondents' self-reported incarcerations, treatment program
involvement, probation/parole supervision, and arrests during each of the months of the
calendar period are presented in Tables 8.14-8.17.
Kappa coefficients indicate moderately strong agreement for respondents' self-reported incarcerations, treatment program involvement, and probation/parole
supervision. As previously noted in chapter 7, nearly half of the respondents said they
had been on probation or parole during at least one of the months of the calendar period,
and approximately 17% reported incarcerations at some point during this time. These
stints were often relatively brief. For instance, several respondents reported
incarcerations of a month or less in jail and probation sentences of less than six months.
Overall, these findings suggest that respondents did a decent job of remembering which
months of the calendar period they had been under some kind of justice system
supervision.
Whole Sample.
                                       Retest Data
Incarcerations                 No Incarcerations   Incarcerations
Test Data                      Reported            Reported        Total
No Incarcerations Reported         1569                 103         1672
Incarcerations Reported             135                 173          308
Total                              1704                 276         1980

Kappa = .52

Caucasians.
                                       Retest Data
Incarcerations                 No Incarcerations   Incarcerations
Test Data                      Reported            Reported        Total
No Incarcerations Reported          780                  62          842
Incarcerations Reported              63                 121          184
Total                               843                 183         1026

Kappa = .59

African-Americans.
                                       Retest Data
Incarcerations                 No Incarcerations   Incarcerations
Test Data                      Reported            Reported        Total
No Incarcerations Reported          694                  40          734
Incarcerations Reported              71                  41          112
Total                               765                  81          846

Kappa = .35

Table 8.14. Reports of any incarcerations for each month during the reference period.
Whole Sample.
                                    Retest Data
Treatment                    No Treatment   Treatment
Test Data                    Reported       Reported     Total
No Treatment Reported            1792            40       1832
Treatment Reported                 84            64        148
Total                            1876           104       1980

Kappa = .48

Caucasians.
                                    Retest Data
Treatment                    No Treatment   Treatment
Test Data                    Reported       Reported     Total
No Treatment Reported             927            26        953
Treatment Reported                 36            37         73
Total                             963            63       1026

Kappa = .52

African-Americans.
                                    Retest Data
Treatment                    No Treatment   Treatment
Test Data                    Reported       Reported     Total
No Treatment Reported             784            14        798
Treatment Reported                 22            26         48
Total                             806            40        846

Kappa = .57

Table 8.15. Reports of any treatment program involvement for each month during the reference period.
Whole Sample.
                                      Retest Data
Probation/Parole               No Supervision   Supervision
Test Data                      Reported         Reported      Total
No Supervision Reported            1294              172       1466
Supervision Reported                175              339        514
Total                              1469              511       1980

Kappa = .54

Caucasians.
                                      Retest Data
Probation/Parole               No Supervision   Supervision
Test Data                      Reported         Reported      Total
No Supervision Reported             648               77        725
Supervision Reported                121              180        301
Total                               769              257       1026

Kappa = .51

African-Americans.
                                      Retest Data
Probation/Parole               No Supervision   Supervision
Test Data                      Reported         Reported      Total
No Supervision Reported             545               94        639
Supervision Reported                 54              153        207
Total                               599              247        846

Kappa = .56

Table 8.16. Reports of any probation/parole supervision for each month during the reference period.
Whole Sample.
                                  Retest Data
Arrests                    No Arrests   Arrests
Test Data                  Reported     Reported     Total
No Arrests Reported            1826          58       1884
Arrests Reported                 74          22         96
Total                          1900          80       1980

Kappa = .22

Caucasians.
                                  Retest Data
Arrests                    No Arrests   Arrests
Test Data                  Reported     Reported     Total
No Arrests Reported             942          31        973
Arrests Reported                 39          14         53
Total                           981          45       1026

Kappa = .25

African-Americans.
                                  Retest Data
Arrests                    No Arrests   Arrests
Test Data                  Reported     Reported     Total
No Arrests Reported             779          26        805
Arrests Reported                 33           8         41
Total                           812          34        846

Kappa = .18

Table 8.17. Reports of any arrests for each month during the reference period.
Respondents’ self-reported arrests in test and retest interviews showed less
agreement when compared to their reports of other formal involvement with the justice
system. The kappa coefficient indicates only slight agreement beyond what might be
expected by chance. Upon examination of the results for the whole sample in Table 8.17
it is interesting to note that respondents reported 96 total arrests in first interviews but
only 80 total arrests in retest interviews.
While interviewing, I observed several instances in which respondents mixed up
arrests with traffic stops and being temporarily detained by police. Interviewers
sometimes preemptively clarified what was meant by arrest when administering the
survey. Other times respondents sought clarification. One potential explanation for the
disparity in reported arrests between test and retest interviews could therefore be that
respondents lacked a solid and consistent understanding of the differences between being
arrested, being cited, and being temporarily detained.
Criminal Activity
Individuals who engage in certain types of offenses may provide poorer self-report data when compared to other types of offenders. For instance, drug dealers have
been found to be less accurate respondents (Weis 1986: 28), while burglars seem to be
among the most accurate (Chaiken and Chaiken 1982). Two hypotheses geared toward
comparing the reliability of self-reports from different types of offenders were presented
in chapter 4. First, it was hypothesized that drug dealers would provide less reliable self-reports relative to other offenders. Second, it was hypothesized that, relative to other
offenders, those who perpetrated property crimes would provide more reliable self-reports.
As indicated in Tables 8.18-8.20, this study did not find support for either of these
hypotheses. On the contrary, drug dealers featured the strongest test-retest agreement
when asked about their offending during each of the months of the calendar period,
followed closely by violent offenders. Moreover, whereas the strength of test-retest
agreement for property offenders was moderate, the strength of test-retest agreement for
both drug dealers and violent offenders was substantial.
Whole Sample.
                                         Retest Data
Violent Offenses                 No Violent Offenses   Violent Offenses
Test Data                        Reported              Reported          Total
No Violent Offenses Reported          1897                   12           1909
Violent Offenses Reported               32                   39             71
Total                                 1929                   51           1980

Kappa = .63

Caucasians.
                                         Retest Data
Violent Offenses                 No Violent Offenses   Violent Offenses
Test Data                        Reported              Reported          Total
No Violent Offenses Reported           989                    5            994
Violent Offenses Reported               12                   20             32
Total                                 1001                   25           1026

Kappa = .69

African-Americans.
                                         Retest Data
Violent Offenses                 No Violent Offenses   Violent Offenses
Test Data                        Reported              Reported          Total
No Violent Offenses Reported           800                    7            807
Violent Offenses Reported               20                   19             39
Total                                  820                   26            846

Kappa = .57

Table 8.18. Reports of any violent offenses for each month during the reference period.
Whole Sample.
                                          Retest Data
Property Offenses                 No Property Offenses   Property Offenses
Test Data                         Reported               Reported           Total
No Property Offenses Reported          1796                    53            1849
Property Offenses Reported               72                    59             131
Total                                  1868                   112            1980

Kappa = .45

Caucasians.
                                          Retest Data
Property Offenses                 No Property Offenses   Property Offenses
Test Data                         Reported               Reported           Total
No Property Offenses Reported           886                    26             912
Property Offenses Reported               64                    50             114
Total                                   950                    76            1026

Kappa = .48

African-Americans.
                                          Retest Data
Property Offenses                 No Property Offenses   Property Offenses
Test Data                         Reported               Reported           Total
No Property Offenses Reported           805                    25             830
Property Offenses Reported                7                     9              16
Total                                   812                    34             846

Kappa = .34

Table 8.19. Reports of any property offenses for each month during the reference period.
Whole Sample.
                                       Retest Data
Drug Dealing                    No Drug Dealing   Drug Dealing
Test Data                       Reported          Reported       Total
No Drug Dealing Reported             917               180        1097
Drug Dealing Reported                172               711         883
Total                               1089               891        1980

Kappa = .64

Caucasians.
                                       Retest Data
Drug Dealing                    No Drug Dealing   Drug Dealing
Test Data                       Reported          Reported       Total
No Drug Dealing Reported             584               122         706
Drug Dealing Reported                 88               232         320
Total                                672               354        1026

Kappa = .54

African-Americans.
                                       Retest Data
Drug Dealing                    No Drug Dealing   Drug Dealing
Test Data                       Reported          Reported       Total
No Drug Dealing Reported             296                58         354
Drug Dealing Reported                 73               419         492
Total                                369               477         846

Kappa = .68

Table 8.20. Reports of any drug dealing for each month during the reference period.
Others have suggested that self-reports will be more reliable for activities that
happen more rather than less frequently (see Anglin, Hser, and Chou 1993; Day et al.
2004; Engel, Keifer, and Zahm 2001). As previously noted, this study’s reliability results
for self-reported substance abuse did not support this relationship. However, partial
support for a relationship between reliability and frequency of events may come from this
study’s comparisons of different types of offenders. For instance, when examining
respondents’ offending it was clear that drug dealing occurred much more frequently than
property offending. Nonetheless, the case of violent offending challenges the notion of a
relationship between reliability and frequency because violent offenders featured
substantial test-retest agreement, yet their offending was among the most infrequent.
These results indicate that the relationship between reliability and the frequency with
which self-reported events occur is complicated and not always direct.
Summary: Reliability
This section has examined the reliability of respondents’ self-reported information
for a number of items related to life events, substance use, justice system involvement,
and criminal activity during the eighteen-month calendar period. Respondents' self-reported residential moves, job changes, and arrests featured low test-retest agreement.
Pearson correlations and kappa coefficients suggest higher reliability for the other
indicators examined. For instance, the reliability of inmates’ self-reports of legal and
illegal income, use of alcohol and six other substances, incarcerations, treatment program
involvement, probation/parole supervision, violent offending, property offending, and
drug dealing was typically moderate to high. Taken together, the findings in this section
suggest that the incarcerated men in the sample provided reliable self-reports for most
indicators, which is impressive given that many of their lives were unstable (see Chapter 7).
Validity
Criterion validity findings from comparisons of self-report and official Ohio
Department of Rehabilitation and Correction inmate records are presented in this section.
Similar to the previous section on reliability, separate tests examining the whole sample,
Caucasians, and African-Americans were conducted for most analyses. Results for the
whole sample will be focused on here, while racial comparisons will be presented in the
following section.
Arrests During the Calendar Period
It has already been noted that test-retest reliability of self-reported arrests over the
eighteen-month calendar period was low. Criterion validity tests comparing interview
responses to official data were conducted for respondents whose official Ohio
Department of Rehabilitation and Correction files included a pre-sentence investigation
(PSI) report. Probation officers prepare PSIs when an offender enters the justice system
(Bartollas 2002). PSIs typically contain information about the offender’s alleged crime,
demographic information, offending background, and needs. They are designed to aid
judges during the adjudication process, and in many instances they go on to become a
part of an offender’s file upon incarceration. Nearly half (N=54) of the prisoners in the
sample had PSIs included in their official ODRC records.
Table 8.21 presents kappa coefficients of agreement for first interview responses
and official data. Other researchers found that retest data were more valid than test data
(Anglin, Hser, and Chou 1993). Kappa coefficients for retest interviews and official data
are therefore presented in Table 8.22. In both comparisons the strength of agreement
between self-reported data and official records is slight. Accordingly, these findings
suggest that respondents’ self-reports of arrests during the calendar period feature low
validity.
An interesting finding from prior research is that respondents sometimes report
arrests that do not show up in their official records (Marquis 1981; Maxfield, Weiler, and
Widom 2000). This phenomenon is known as “positive bias” (Marquis 1981). It was
hypothesized in chapter 4 that positive bias would be present in prisoners’ self-reports.
The results for the whole sample in Tables 8.21 and 8.22 clearly indicate the
presence of positive bias. For instance, respondents reported 44 arrests in their first
interviews and 46 arrests in retest interviews that did not show up in official records.
By comparison, official records contained only 21 arrests that went unreported in first
interviews and 24 that went unreported in retest interviews, so over-reporting appears to
have been more common than underreporting. As
noted in the discussion of reliability, one reason positive bias exists may be that
respondents confuse arrests with being cited or temporarily detained. Another possibility
is that respondents’ official records did not contain arrest information for all jurisdictions
or were otherwise incomplete.
Whole Sample (with PSIs).
                               ODRC Data
Arrest                   No Arrest    Arrest
Self-Report (Test)       Reported     Reported     Total
No Arrest Reported           956          21         977
Arrest Reported               44           5          49
Total                       1000          26        1026

Kappa = .10

Caucasians (with PSIs).
                               ODRC Data
Arrest                   No Arrest    Arrest
Self-Report (Test)       Reported     Reported     Total
No Arrest Reported           487          12         499
Arrest Reported               22           1          23
Total                        509          13         522

Kappa = .03

African-Americans (with PSIs).
                               ODRC Data
Arrest                   No Arrest    Arrest
Self-Report (Test)       Reported     Reported     Total
No Arrest Reported           434           9         443
Arrest Reported               21           4          25
Total                        455          13         468

Kappa = .18

Table 8.21. Reports of any arrests for each month during the reference period.
Whole Sample (with PSIs).
                                ODRC Data
Arrest                    No Arrest    Arrest
Self-Report (Retest)      Reported     Reported     Total
No Arrest Reported            954          24         978
Arrest Reported                46           2          48
Total                        1000          26        1026

Kappa = .02

Caucasians (with PSIs).
                                ODRC Data
Arrest                    No Arrest    Arrest
Self-Report (Retest)      Reported     Reported     Total
No Arrest Reported            487          13         500
Arrest Reported                22           0          22
Total                         509          13         522

Kappa = -.03

African-Americans (with PSIs).
                                ODRC Data
Arrest                    No Arrest    Arrest
Self-Report (Retest)      Reported     Reported     Total
No Arrest Reported            431          11         442
Arrest Reported                24           2          26
Total                         455          13         468

Kappa = .18

Table 8.22. Retest reports of any arrests for each month during the reference period.
Criminal History
Validity was also assessed using correlations between self-reported criminal
history information and data contained in official ODRC records. Correlations for
respondents’ total lifetime arrests are presented in Table 8.23, and Table 8.24 contains
correlations for respondents’ total convictions. Correlations for respondents’ age at first
arrest are provided in Table 8.25. Whereas the other criminal history comparisons are
conducted with the whole sample, correlations for age at first arrest were limited to
respondents with PSIs in their ODRC files because this information is not otherwise
available. Table 8.26 presents correlations for respondents’ number of prior prison terms
served.
Total Arrests                 r         N
Whole Sample               .318**      110
Caucasians                 .333*        57
African-Americans          .293*        47
* Sig. at .05 (2 tailed); ** Sig. at .01 (2 tailed)

Table 8.23. Validity Correlations of Self-Report and ODRC Data for Total Lifetime Arrests.
Total Convictions             r         N
Whole Sample               .378**      110
Caucasians                 .321*        57
African-Americans          .462**       47
* Sig. at .05 (2 tailed); ** Sig. at .01 (2 tailed)

Table 8.24. Validity Correlations of Self-Report and ODRC Data for Total Lifetime Convictions.
Age at 1st Arrest             r         N
Whole Sample               .608**       54
Caucasians                 .550**       27
African-Americans          .660**       25
* Sig. at .05 (2 tailed); ** Sig. at .01 (2 tailed)

Table 8.25. Validity Correlations of Self-Report and ODRC Data for Age at First Arrest.
# of Prior Prison Terms       r         N
Whole Sample               .884**      110
Caucasians                 .869**       57
African-Americans          .930**       47
* Sig. at .05 (2 tailed); ** Sig. at .01 (2 tailed)

Table 8.26. Validity Correlations of Self-Report and ODRC Data for Number of Prior Prison Terms.
As indicated in Tables 8.23-8.26, the criminal history correlations are all
significant. However, correlations for total lifetime arrests and total lifetime convictions
indicate moderate validity of respondents' self-reports. Once again, these figures were
probably affected by confusion over what constitutes an arrest or a conviction and by the
likelihood that official records are incomplete. Correlations of self-reported age at first
arrest and ODRC data indicate moderate to high validity. Moreover, strong validity is
found when examining offenders' self-reported number of prior prison terms served.
Taken together, these findings suggest moderate to high validity for offenders' self-reports of criminal history information.
Summary: Validity
The prisoners examined in this research provided interesting responses in terms of
validity. Comparisons of their self-reports of arrests during each of the months of the
calendar period with ODRC data resulted in levels of agreement that were only slightly
higher than would be expected by chance. These results are consistent with findings from
prior research (Marquis 1981). Respondents’ self-reports of the timing of previous
arrests showed weak validity. Positive bias also exists in respondents’ self-reports. Two
explanations for the presence of positive bias are that respondents were confused over
what constitutes an arrest and that official records may be incomplete.
Self-reported information for items examining criminal history featured moderate
to high validity. The criterion validity correlations for age at first arrest and number of
prior prison terms served were particularly impressive. These findings suggest that
respondents’ self-reports of their criminal histories feature good validity.
Race, Reliability, and Validity
A number of studies have found that African-Americans underreport involvement
in offending when compared to white respondents (Hindelang, Hirschi, and Weis 1981;
Fendrich and Vaughn 1994; Mensch and Kandel 1988). However, other research
suggests that reporting behavior of African-Americans does not differ from that of other
racial and ethnic groups (Jolliffe et al. 2003; Webb, Katz, and Decker 2006). Given that
findings on the relationship between race and self-reporting behavior are inconclusive
(Thornberry and Krohn 2000: 58) and often contentious (Weis 1986), this section further
examines race, reliability, and validity.
In chapter 4 it was hypothesized that self-reports of offending provided by
African-American prisoners would be less reliable and less valid than those provided by
Caucasian prisoners. Tables provided in the previous sections of this chapter have
included results for African-American and Caucasian respondents. A summary of these
results is presented in Table 8.27.
                                                       Strength of Agreement (Kappa)
                           Measure                     Caucasian            African-American
Life Events Measures       School                      Substantial (.72)    Fair (.34)
(Reliability)              Residential Moves           Slight (.04)         Slight (.10)
                           Job Changes                 Poor (-.02)          Poor (-.02)

Substance Abuse            Alcohol Use                 Moderate (.44)       Slight (.19)
Measures                   Marijuana Use               Moderate (.44)       Substantial (.79)
(Reliability)              Crack Cocaine Use           Moderate (.47)       Fair (.21)
                           Powder Cocaine Use          Moderate (.52)       Fair (.38)
                           Heroin Use                  Substantial (.76)    Too few to rate
                           Speed Use                   Substantial (.73)    Almost Perfect (.91)
                           Prescription Drug Use       Moderate (.44)       Fair (.24)

Justice System             Incarceration               Moderate (.59)       Fair (.35)
Involvement Measures       Treatment Program           Moderate (.52)       Moderate (.57)
(Reliability)              Probation/Parole            Moderate (.51)       Moderate (.56)

Criminal Activity          Violent Crime               Substantial (.69)    Moderate (.57)
Measures (Reliability)     Property Crime              Moderate (.48)       Fair (.34)
                           Drug Dealing                Moderate (.54)       Substantial (.68)
                           Illegal Income              Moderate (.53)       Substantial (.71)

Arrest Measures            Arrest (test interview)     Slight (.03)         Slight (.18)
(Validity)                 Arrest (retest interview)   Poor (-.03)          Slight (.18)

Table 8.27. Summary of Reliability and Validity Kappa Results for African-American and Caucasian Respondents.
Generally speaking, kappa coefficients show similar test-retest agreement across
race for life events measures. The exception to this pattern was self-reported school
involvement. Caucasian respondents’ self-reports of school involvement showed
substantial agreement, while the agreement in African-Americans’ self-reports was fair.
Overall, there were no clear racial differences across all life events measures.
Substance abuse measures showed that Caucasians were slightly more reliable
than African-Americans when reporting alcohol, crack cocaine, powder cocaine, and
prescription drug use. However, African-Americans' responses for marijuana and speed
use were more reliable than those of Caucasians. Racial comparisons of self-reported
heroin use were precluded by a lack of African-American respondents who had used
heroin. Given these mixed findings it is difficult to conclude that one racial group
provided more reliable self-reports of drug use than the other.
Reliability findings for self-reports of justice system involvement during the
months of the calendar period also show mixed results. The strength of agreement
between test and retest responses to questions about treatment program participation and
probation/parole supervision was moderate for both Caucasians and African-Americans.
Self-reported incarcerations during the months of the calendar period featured the only
notable racial difference for the justice system involvement measures. Caucasians
showed moderate agreement in their test-retest responses, while agreement for African-Americans' responses was fair. Once again, it is difficult to conclude that one racial
group’s self-reports of involvement with the justice system were overall more reliable
than the others’.
Comparisons of African-Americans’ and Caucasians’ self-reported criminal
activity follow a slightly different pattern. On one hand, the strength of test-retest
agreement for violent crime was substantial for Caucasians but only moderate for
African-Americans. Moreover, Caucasians also showed stronger test-retest agreement
than African-Americans when self-reporting property crime.
On the other hand, African-Americans featured substantial test-retest agreement
for drug dealing and illegal income, while Caucasians’ test-retest agreement for these
items was only moderate. Generally speaking, it is impossible to conclude that one racial
group's self-reported offending is more or less reliable than the other's. Definitive
conclusions about racial differences can only be reached when examining specific forms
of crime.
Criterion validity analyses found that each racial group’s self-reports of arrests
during the months of the calendar period suffered from poor validity. Tables 8.23-8.26
presented validity correlations for total lifetime arrests, total lifetime convictions, age at
first arrest, and number of prior prison terms served. Upon examination of these findings
it is apparent that Caucasians and African-Americans feature different correlations. To
determine whether these differences were significant I calculated Z values for each
measure.
The following formula is used for examining whether one group's Pearson results
are significantly different from another's (Pallant 2005: 134):

    Zobs = (z1 – z2) / √( 1/(N1 – 3) + 1/(N2 – 3) )

where z1 and z2 are the Fisher r-to-z transformations of the two groups' correlations and
N1 and N2 are the group sample sizes. Working through this formula produces the Z value
for testing significance. Accordingly, Z values for testing whether there are significant
racial differences in the validity of self-reports of criminal history measures are presented
in Table 8.28.
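As a computational illustration, the sketch below implements the formula in Python; the function name and the example correlations are mine for demonstration and are not taken from Table 8.28.

    import math

    def z_observed(r1, n1, r2, n2):
        """Fisher r-to-z test for the difference between two independent correlations."""
        z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transformation of each r
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of the difference
        return (z1 - z2) / se

    # Hypothetical example: r = .65 (N = 57) versus r = .30 (N = 47).
    z = z_observed(0.65, 57, 0.30, 47)
    print(round(z, 2), abs(z) > 1.96)  # 2.29 True, i.e., a significant difference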
Criminal History Measure        Z Value
Total Lifetime Arrests           2.1185
Total Lifetime Convictions       -.8179
Age at First Arrest              -.5926
# of Prior Prison Terms         -1.6160

Table 8.28. Z Values for Criminal History Measures.
To interpret significance Z values need to be compared to –1.96 and +1.96
(Pallant 2005). Z values that are smaller than –1.96 or larger than +1.96 indicate that a
significant difference exists between the two groups being analyzed. Accordingly,
findings from this study suggest there are no significant differences in the validity of
Caucasians’ and African-Americans’ self-reported total lifetime convictions, age at first
arrest, or number of prior prison terms. However, there is a significant difference in
self-reports of total lifetime arrests. The results presented in Table 8.28 suggest that
Caucasians' self-reports of total lifetime arrests are significantly more valid than those
provided by African-Americans.
Racial differences in interactions with the police may account for some of this disparity. Previous research has found that police may harass African-American males and detain them arbitrarily (see Anderson 1990). To the extent this
happened to African-Americans in the sample, it is possible that contacts with the police
that did not result in arrest muddied respondents’ recollections of their interactions with
police, including total lifetime arrests.
Respondents were asked a set of questions about their perceptions of the police.
When asked if they felt the police had treated them fairly or unfairly, 57%
of African-Americans in the sample believed the police had treated them either unfairly
(38%) or very unfairly (19%). The numbers for Caucasians were much lower, with 39%
believing the police treated them either unfairly (35%) or very unfairly (4%).
Moreover, respondents were asked if they felt the police had ever stopped them
due to their racial or ethnic background. Seventy-nine percent of African-American
respondents believed they had been racially profiled, which is noticeably high when
compared to the 16% of Caucasian respondents who said “yes.” To the extent respondents’
perceptions are accurate, it is possible that African-Americans’ self-reports of lifetime
arrests were significantly less valid than Caucasians’ self-reports because of increased
contacts with the police that at times may have been cognitively misclassified as arrests.
Conclusion
Descriptive statistics presented in chapter 7 showed the diversity of the sample on
the dimensions of life events, substance use, justice system involvement, criminal
activity, and criminal history. This chapter has shown that overall the incarcerated men
in this dynamic sample provided self-reports with moderate to high reliability and
validity for most of the indicators measured. Moreover, findings indicate more similarity
than difference when comparing the responses of Caucasians and African-Americans.
Accordingly, the collective findings from the analyses examined in this chapter suggest
that the life-events calendar method can be used to collect quality retrospective data from
incarcerated offenders.
CHAPTER 9
DISCUSSION AND CONCLUSION
The previous chapters have examined the reliability and validity of life-events
calendar data collected from incarcerated offenders. They have also chronicled the
evolution and implementation of an original data collection project. Contributions,
implications, and limitations of this research are outlined in this final chapter.
Findings and Contributions
This dissertation makes three important contributions. First, it demystifies much
of what happens before survey data are eventually analyzed. A pointedly descriptive
account of this project’s inception, development, and time in the field was provided, with
special sections and attention paid to instrument construction, training, scheduling,
navigating tense situations, presentation of self, emotion, and other features of the
research process that are typically taken for granted. Together these sections reveal a
broader methodological forest that criminologists may fail to see when they focus their
scopes on the statistical trees.
Second, this dissertation examines data that were collected from a sample that
was intentionally designed to be more representative of the current prison population than
those used in previous life-events calendar research. Over the past two decades it has
increasingly become the case that a majority of inmates are being imprisoned for what
many believe are less serious offenses (Austin and Irwin 2001; Elsner 2006; Mauer
1999). However, previous assessments of life-events calendar data in criminology have
typically examined samples of more serious offenders (see Horney et al. 1995; Roberts et
al. 2005) and others (see Yacoubian 2003) that may not generalize to broader offending
and prisoner populations. Accordingly, this study employed data from level one and
level two male inmates between the ages of 18 and 32 who had been in prison for less
than one year. Collectively, prisoners in the sample featured substantial involvement
with substance use, multiple indicators of social disadvantage, and diverse backgrounds
and criminal histories, and as a group they were representative of those most affected by
recent incarceration trends.
Third, this dissertation advances what is known about the life-events calendar
method in criminological research. Prior studies on the reliability and validity of life-events calendar data are limited (Caspi et al. 1996) and have mostly been done in
disciplines other than criminology. Accordingly, this study’s attention to the reliability
and validity of life-events calendar data collected from prisoners is timely and relevant.
Self-reports of residential moves, job changes, and arrests during the eighteen-month calendar period had low test-retest reliability, and self-reports of arrests had low
criterion validity. Moderate to high test-retest reliability was found for self-reported use
of alcohol and six other drugs, legal and illegal income, drug dealing, violent offending,
property offending, and three different forms of involvement with the justice system.
Moreover, moderate validity was found for self-reports of total lifetime arrests and
convictions, and self-reports of age at first arrest and number of prior prison terms
featured strong validity.
Low reliability and validity for self-reported arrests during the study period may
have stemmed from respondent confusion about what constituted an arrest and the
likelihood that official records were incomplete bases for criterion comparisons.
Moreover, low reliability for self-reports of residential moves and job changes may have
been the result of respondents’ inability to accurately remember dates (Henry et al. 1994).
These findings aside, for most of the indicators examined, incarcerated offenders' self-reports featured moderate to high test-retest reliability and criterion validity.
Implications
Several implications follow from the contributions outlined above. Practical
advice for resolving myriad challenges and considerations inherent to collecting original
data is presented in chapters 5 and 6. Most basically, this dissertation suggests
researchers need to engage in more reflective quantitative inquiry. Attention to mundane
topics such as instrument construction and mode of administration and to subjective
dynamics such as presentation of self, impact on the setting, and one’s emotional
reactions to research may improve quantitative data analyses (see Jenkins 1995; Liebling
1999). Reflective research also sensitizes other scholars to challenges they will
potentially face in their own work.
The contributions outlined above also imply that researchers need to use samples
that better reflect the populations most affected by contemporary corrections practices.
Current incarceration rates in the United States are unprecedented (see Austin and Irwin
2001; Elsner 2006; Mauer 1999) and have led to increases in the number of ex-prisoners
struggling to reenter society (Petersilia 2003). In addition to studying serious offenders
criminologists need to focus more on lower level offenders. These populations have
recently shown the most discouraging and pronounced increases in recidivism relative to
other offenders (Hughes and Wilson 2002; Langan and Levin 2002). They now also
make up the majority of those who are being sent to prison in the first place (Austin and
Irwin 2001; Elsner 2006; Mauer 1999).
The main implication of this study is that future researchers can be confident in the life-events calendar method's ability to collect self-report data with moderate to high reliability and validity from incarcerated offenders. The life-events
calendar method helps reduce the ambiguities in respondents’ stories by addressing
patterned memory problems and establishing temporal ordering. These strengths likely
account for why items such as illegal income, drug dealing, and heroin use showed better
reliability and validity in this study than in previous research that relied on traditional
self-report surveys.
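Because much of this work is done by the calendar's grid structure itself, it may help to see that structure as data. The following is a minimal sketch, assuming a month-by-domain grid with domain names taken from the instrument in Appendix C; the storage format and the mark() helper are illustrative assumptions, not the project's actual coding scheme:

    # Minimal sketch of one respondent's life-events calendar as a
    # month-by-domain grid. Domain names follow Appendix C; the storage
    # format and mark() helper are illustrative assumptions.
    from datetime import date

    # 18 monthly columns: August 2004 through January 2006
    MONTHS = [date(2004 + (7 + i) // 12, (7 + i) % 12 + 1, 1) for i in range(18)]

    DOMAINS = ["probation/parole", "arrested", "marijuana", "crack",
               "powder cocaine", "heroin", "speed/meth", "acid", "inhalants",
               "other drug", "alcohol", "employed", "treatment program",
               "incarcerated"]

    # Each domain maps to eighteen yes/no month cells
    calendar = {domain: [False] * len(MONTHS) for domain in DOMAINS}

    def mark(domain, start, end):
        # Flag a domain for every calendar month from start to end, inclusive
        for i, month in enumerate(MONTHS):
            if start <= month <= end:
                calendar[domain][i] = True

    # e.g., a respondent recalls working from October 2004 through March 2005
    mark("employed", date(2004, 10, 1), date(2005, 3, 1))
    print(sum(calendar["employed"]))  # 6 months flagged

Anchoring each report to a concrete month cell in this way is what allows the interviewer to cross-check the temporal ordering of events across domains.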
Limitations and Future Research
There are four limitations of this study that future research should improve upon.
First, this project relied on test-retest assessments of reliability. The test-retest method is
the conventional strategy for assessing reliability in criminology (Huizinga and Elliott
1986; Thornberry and Krohn 2000). However, a fundamental shortcoming of this
approach is that test and retest responses can never be entirely independent because the same respondents are interviewed twice (Sim and Wright 2005). The potential
for testing effects is inherent when conducting this type of analysis (DeCoster and
Claypool 2004: 45; Litwin 1995). It is therefore difficult to rule out the possibility that
consistent responses were due in part to familiarity with the interview instrument. Steps
were taken to reduce the likelihood that testing effects would influence these data,
including the use of a complicated survey instrument and a one- to four-week interval
between contacts. Future researchers should devote further attention to developing
strategies for minimizing the potential influence of testing effects in test-retest reliability
research.
Second, this study relied on official Ohio Department of Rehabilitation and
Correction records to assess validity. Criminologists examining validity have
traditionally taken this approach (Thornberry and Krohn 2000; see Hindelang, Hirschi,
and Weis 1981). However, there are notable shortcomings to using official data as
validity criteria. One potential problem is that the data and variables in each source may
not be congruent. Another challenge is that official data sources may not be accurate or
complete. Accordingly, future researchers should continue to work toward the lofty ideal
of developing an infallible criterion for validity tests because one currently does not exist
(Thornberry and Krohn 2000: 52).
Third, the sample's greatest strength is also the source of its weaknesses.
The respondents interviewed for this research were representative of the majority of those
who are now being put behind bars and those who have been disproportionately affected
by current incarceration and recidivism trends. However, whether they generalize to
broader offending populations that are able to avoid detection by the justice system, or
prison upon detection, is unclear. Moreover, these findings may not generalize to female
inmates, prisoners with high security classifications, or juvenile delinquents. Finally,
though respondents came from rural, urban, and suburban parts of Ohio, it is possible that
Ohio’s prisoners feature qualitative differences from those who are incarcerated in other
states or different regions of the country. Future assessments of the life-events calendar
method should therefore draw from samples of female offenders, youthful offenders, and
inmates from jurisdictions outside of Ohio.
Fourth, potential shortcomings may extend from recruitment and participation
rates. We were prohibited from compensating respondents for their effort and time. We
also lost respondents due to the inefficiency of the prisons we interviewed in (see
Chapters 5 and 6). Anecdotal evidence strongly suggested that prison employees’ (in)
actions deterred at least some of those who were counted as refusals from participating in
the study. Ultimately, we had no way of knowing whether the inmates we counted as
refusals were in fact actively refusing, nor do we know if they were qualitatively different
from those who participated.
The limitations outlined in this section posed challenges that were at times
difficult to resolve. Other researchers have likely dealt with similar issues, yet attention
to these features of the research process is conspicuously absent from the literature. This
dissertation has thoroughly described how our research was carried out in order to give
voice to the taken-for-granted complexities of conducting original research in prison. We
did the best we could when faced with less than ideal circumstances, which hopefully is
clear when reading this detailed story of our project.
Conclusion
"Some people have 'good memories' and can remember many things in great detail: others have 'poor memories'" (Sudman et al. 1996: 171). How well do survey
respondents remember previous experiences? What can be done to assess and improve
the reliability and validity of their self-reports? As social scientists we often learn about
people through interviewing (Cannell and Kahn 1968; Fontana and Frey 2003), making
the answers to these questions crucial.
This dissertation has answered these questions. Do prison inmates provide
researchers with reliable and valid information? For the most part my findings suggest
they do. Moreover, the life-events calendar method encouraged accuracy in self-reporting by facilitating respondent-researcher interaction, providing respondents with
visual and verbal cues, tapping into the cognitive structures in which respondents’
memories were stored, and utilizing reference points to establish temporal ordering.
The methodological implications of these findings are clear. The life-events
calendar method is a useful tool for collecting reliable and valid information from
incarcerated offenders. Future researchers will therefore benefit from utilizing the life-events calendar method to study the experiences of those who are incarcerated and
subsequently released from prison.
Members of mainstream society often assume that prisoners and others with
negative social statuses embody undesirable “auxiliary” traits (Hughes 1945). The
likelihood that prisoners will be honest and sincere in social research is therefore often
questioned (Anglin, Hser, and Chou 1993; Maruna 2001; Rhodes 1999: 60; see Sorensen
1950). Consistent with other scholars (Chaiken and Chaiken 1982; Horney and Marshall
1992; Lewis and Mhlanga 2001; Liebling 1999; Marquis 1981), my field experiences and
findings suggest that most prisoners were decent respondents who provided reliable and
valid information. Considering that “criminal behavior is human behavior” (Sutherland
and Cressey 1970: 73), it is likely that the inmates examined in this research were
motivated by the same dynamics that would lead conventional others to provide good
data.
Accordingly, the humanistic implications of these findings are also clear. The
contemporary United States is incarcerating and therefore officially discrediting more of
its citizens than it ever has in the past. Political rhetoric that dehumanizes street
offenders and minority youth supports these practices (Glassner 2000). However, my
findings and field observations suggest that the human differences between “us” and
“them” are less pronounced than popular stereotypes and imagery would imply. This is a
critical insight given that public support for more severe punishments is often rooted in
perceived in-group/out-group differences that result from increased heterogeneity (Tyler
and Boeckmann 1997). To the extent more people can come to recognize commonalities
between us and them, there is potential for our criminal justice practices to become more
empathetic.
APPENDIX A
INITIAL RECRUITMENT LETTER
Department of Sociology
300 Bricker Hall
190 N Oval Mall
Columbus, Ohio 43210
I am a professor at Ohio State University, and I would like you to participate in my research
study. My goal is to better understand the lives of prison inmates, especially during the 18
months right before coming to prison. If you are willing to discuss your past experiences, and if
you can be straightforward with us, we are very interested in talking with you. Your answers will
be kept strictly confidential. The interview will only last about one hour. We will not be asking
you about the details of your current offense.
You will have the right to ask questions, and you can change your mind if you decide you don’t
want to participate. By law we can’t interview you unless you sign a consent form stating that
you are voluntarily participating. It is not a legal document, and once again you are free to
change your mind at any time after you sign the form.
The topics we will ask you about are of great interest to sociologists. If you are interested, write
your name, inmate #, and housing unit lock on the bottom of this form and return it to Melody
Haskin. We will then schedule you for an interview. I hope you will participate in our study,
and sincerely thank you for your time.
Paul E. Bellair
Associate Professor
Name ______________________________________________
Inmate # ____________________________________________
Housing unit lock _____________________________________
APPENDIX B
REVISED RECRUITMENT LETTER
Department of Sociology
300 Bricker Hall
190 N Oval Mall
Columbus, Ohio 43210
Dear Madison Inmate:
I am a professor at Ohio State University, and I would like you to participate in my research study.
My goal is to better understand the backgrounds of prison inmates, especially during the 18 months
right before coming to prison. We will not be asking you about the details of your current offense.
If you are willing to discuss your past experiences, and if you can be straightforward with us, we are
very interested in talking with you. We are not affiliated with the prison system, and your answers
will be kept strictly confidential. Members of our research team will be the only ones who have
access to the information you provide.
Your participation in this study is voluntary. By law, we can’t interview you unless you sign a
consent form stating that you are voluntarily participating. If you are interested in participating,
and/or if you would like to learn more, please check the appropriate box below. We will then
schedule you for an interview. We will go over the consent form with you at this time, and you will
have an opportunity to ask questions. You will also be able to change your mind if you decide that
you no longer want to participate.
The interview will last about one hour, and the topics we will ask you about are of great interest to
sociologists. I hope you will participate in our study, and sincerely thank you for your time.
Paul E. Bellair
Associate Professor
*Please fill out the following information
Name ______________________________________________
Inmate # ____________________________________________
Housing unit lock _____________________________________
Yes, I am interested in participating and/or learning
more about the study. Please sign me up for an interview.
No, I am not interested in participating in the study.
APPENDIX C
LIFE EVENTS CALENDAR
BIRTHDAY: ____________    RECORD #: ____________    DATE: ____________

[Calendar grid: row headings at the left; one column per month of the 18-month reference period]

Row headings:
INTERVIEWER COMMENTS
PROBATION/PAROLE
ARRESTED
SUBSTANCE USE: MARIJUANA, CRACK, POWDER COCAINE, HEROIN, SPEED/METH, ACID, INHALANTS, OTHER, ALCOHOL
EMPLOYED
WHO LIVE WITH
ADDRESS
WHERE LIVE
TREATMENT PROGRAM
INCARCERATED

Column headings (MONTH / YEAR):
 1 Aug 04    2 Sep 04    3 Oct 04    4 Nov 04    5 Dec 04    6 Jan 05
 7 Feb 05    8 Mar 05    9 Apr 05   10 May 05   11 Jun 05   12 Jul 05
13 Aug 05   14 Sep 05   15 Oct 05   16 Nov 05   17 Dec 05   18 Jan 06
APPENDIX D
ODRC CONSENT FORM
Ohio Department of Rehabilitation and Correction
Consent/Refusal To Meet With Researcher
I, ______________________________ hereby grant permission to the Ohio Department
Print Staff or Inmate Name/Number
of Rehabilitation and Correction to allow me to meet with ________________________
Researcher Name
to decide if I would like to participate in a research study. I understand that my consent
to meet with the researcher does not require me to consent to participate in the research
study.
I understand that a description of the study will be provided by the researcher when we
meet. If I want to participate in the research, I will sign the participant consent form that
the researcher will provide for me. If I do not want to participate in the research, I will
not sign the consent form provided by the researcher. Even if I sign the research consent
form, I understand that I may end my participation in the research at any time.
The above consent to meet with the researcher is given by me freely and voluntarily
without any promises, threats, or duress.
I, _______________________ agree to meet with __________________________.
Staff or Inmate Signature
Researcher
on this date ______________________.
Or
I, _______________________ decline to meet with _________________________.
Staff or Inmate Signature
Researcher
APPENDIX E
OSU CONSENT FORM
Department of Sociology
341 Bricker Hall
190 N Oval Mall
Columbus, OH 43210
IRB #
ADULT CONSENT FORM
Title of the Research Study
A test of the reliability and validity of the “life history calendar” interview method
Invitation to Participate
You are invited to participate in this research study. The following information is
provided in order to help you to make an informed decision whether or not to participate. If you
have any questions please do not hesitate to ask.
Basis For Subject Selection
You are eligible to participate because you are an inmate between the ages of eighteen
and thirty-two at a correctional institution of the Ohio Department of Rehabilitation and
Correction.
Purpose of the Study
The purpose of this project is to collect information from you using a “life history
calendar” interview method. It will help us understand whether changes in your circumstances in
the eighteen months before you came to prison created problems for you.
Explanation of Procedures
This study consists of a one-hour interview and a thirty-minute interview. One
interviewer will ask you questions and enter your responses into a laptop computer, and the other
interviewer will keep track of some of your answers on a piece of paper we call a “life history
calendar.” We want you to look at this to help you remember the timing of life events in the past.
You will be asked many questions about your personal background, including your family
relations, criminal history, and substance abuse. For some of the questions we ask we will want
to know if you have had any changes in the eighteen months prior to your incarceration. In
addition, if you choose to participate in the study, we want to check your record for five years
after you get out to find out how you made out over the long term.
After we interview you today, we would like to interview you again two to three weeks
from now. Our purpose is to see whether you remember things that you’ve done the same way
when you are interviewed the second time.
Initials______
Potential Risks and Discomforts
It is possible that some of the questions we ask may embarrass you or make you feel
uncomfortable. A staff member from psychological services can talk to you if this becomes a big
problem. You do not have to answer any question that makes you feel uncomfortable. The only
other risk would be accidental disclosure of confidential information, but we will make sure that
does not happen. We explain the procedures we will use to protect against that risk below.
Potential Benefits to Subject
There are no direct benefits to the subjects. However, because of your involvement you
may help criminologists to better understand the problems that inmates have. We really do
appreciate your taking the time to help us.
Assurance of Confidentiality
The researchers on this project are not associated with the Ohio Department of
Rehabilitation and Correction, and no one in the Department will have access to any of the
information we ask you about. Your confidentiality will be protected by procedures that ensure
that information you give cannot be linked to you. The information entered into the laptop
computer will have a personal code that you will be assigned just for this project. The only way
that code can be linked to you is through a form that has your project code number listed next to
your name and inmate number. That code sheet, which will be kept in a locked file cabinet at the
Ohio State University in my office, is only needed so that information from your prison records
can be matched to your interview, and it will be destroyed as soon as we discontinue our
research. There will then be no way of associating your name with any of the information
obtained in this study. The results of this study may be published in scientific journals or
presented at scientific meetings but never in a way that allows any individual to be identified.
We have a Confidentiality Certificate (CC) from the US government that adds special protection
for the research information about you. It says we do not have to identify you or disclose your
information even under a court order or subpoena. Still, we will report if you tell us that you are
going to hurt yourself or someone else in the future, or if you tell us about child abuse. But
remember, we are not going to ask you about things you are planning in the future or about child
abuse during our interview. The Department of Health and Human Services may see your
information if they audit us, but only for audit or evaluation purposes. They can’t report
anything that would harm the confidentiality of our research subjects. Finally, the Certificate
does not mean that the government either approves or disapproves of our project.
Rights of Research Subjects
Your rights as a research subject have been explained to you. Your voluntary
participation in our study and your signature below do not constitute a waiver of your
constitutional rights nor do they release the investigator from liability for negligence. You do
not lose any of your constitutional rights by agreeing to participate in our study and we are
obligated to protect your confidentiality and to make sure that you are treated fairly.
Initials______
Voluntary Participation and Withdrawal
Participation is voluntary. Your decision whether or not to participate will not affect
your present or future treatment by the Department of Corrections, the Ohio Parole Board, or the
Ohio State University. If you decide to participate, you are free to withdraw from this study at
any time.
Documentation of Informed Consent
YOU ARE VOLUNTARILY MAKING A DECISION WHETHER OR NOT TO
PARTICIPATE IN THIS RESEARCH STUDY. YOUR SIGNATURE CERTIFIES THAT
THE CONTENT AND MEANING OF THE INFORMATION ON THIS CONSENT
FORM HAVE BEEN FULLY EXPLAINED TO YOU AND THAT YOU HAVE
DECIDED TO PARTICIPATE HAVING READ AND UNDERSTOOD THE
INFORMATION PRESENTED. YOUR SIGNATURE ALSO CERTIFIES THAT YOU
HAVE HAD ALL YOUR QUESTIONS ANSWERED TO YOUR SATISFACTION. IF
YOU THINK OF ANY QUESTIONS DURING THIS STUDY PLEASE CONTACT THE
RESEARCHERS. YOU WILL BE GIVEN A COPY OF THIS CONSENT FORM TO
KEEP. BY SIGNING BELOW YOU ARE NOT WAIVING ANY OF YOUR LEGAL
RIGHTS.
____________________________
SIGNATURE OF SUBJECT
___________________________
DATE
IN MY JUDGEMENT THE SUBJECT IS VOLUNTARILY AND KNOWINGLY GIVING
INFORMED CONSENT AND POSSESSES THE LEGAL CAPACITY TO GIVE
INFORMED CONSENT TO PARTICIPATE IN THIS RESEARCH STUDY.
____________________________
SIGNATURE OF RESEARCHER
___________________________
DATE
RESEARCHER CONTACT: Professor Bellair, The Ohio State University, 341 Bricker
Hall, 190 N. Oval Mall, Columbus, OH 43210
APPENDIX F
FIELD NOTES FORM
Prison Interview Field-Note Sheets
Given that we have multiple interviewers and research teams, this form has been
devised to help standardize our field notes. One of these forms should be filled out
to summarize each day of interviewing. You are encouraged to record additional
notes on items not listed on this sheet when relevant. Finally, please take
thorough notes – when in doubt, write it down!
Today’s Date: _______________
Interviewers: _______________
_______________
Institution:
_______________
Room in which interview was conducted:______________
Were there any features of the setting that affected
the interview (e.g., noise, inmate/staff activity)?
No
Yes (please describe)
Number of Interviews:
Scheduled ____
Attempted ____
Completed ____
[Over]
Summary of Interview Session
Prior To Interviewing
□ Did you arrive on time?
□ Did anyone try to talk to you in the parking lot or lobby?
□ In terms of prison staff, who came to meet you when you arrived?
   o Was this person on time?
□ Did you have any problems or delays with security?
□ Did it seem like the prison was expecting you?
During Interview
□ If your interview(s) did not begin on time, please describe what happened:
□ Did the administration of the consent forms go smoothly?
□ Please describe any coding issues that came up
   o Interview #1
   o Interview #2
Please write down the names & positions of any staff members who were especially helpful:
Interview #1
Duration of Interview: _______
Respondent Information
Race: _______
Does Respondent Have Kids? _______
Demeanor:
□ Cooperativeness
□ Comfort in setting
□ Interest in participating
Ability:
□ Intelligence
□ Recall ability
□ Understanding of questions/language
Data & The Instrument
□ Did you have any computer problems?
□ Please note any questions you skipped
□ Please note any questions the inmate chose to skip
□ Please describe any coding decisions that you made:
[Over]
[Interview #1 – Continued]
Additional Comments & Observations:
Interviewer Division of Labor
□ Which of you administered the consent form?
□ Who did the computer portion?
□ Which interviewer asked the qualitative questions toward the end?
Interview #2
Duration of Interview: _______
Respondent Information
Race: _______
Does Respondent Have Kids? _______
Demeanor:
□ Cooperativeness
□ Comfort in setting
□ Interest in participating
Ability:
□ Intelligence
□ Recall ability
□ Understanding of questions/language
Data & The Instrument
□ Did you have any computer problems?
□ Please note any questions you skipped
□ Please note any questions the inmate chose to skip
□ Please describe any coding decisions that you made:
[Over]
[Interview #2 – Continued]
Additional Comments & Observations:
Interviewer Division of Labor
□ Which of you administered the consent form?
□ Who did the computer portion?
□ Which interviewer asked the qualitative questions toward the end?
BIBLIOGRAPHY
Adler, Patricia A. 1993. Wheeling and Dealing: An Ethnography of an Upper-Level
Drug Dealing and Smuggling Community. New York, NY: Columbia University
Press.
Akers, Ronald L., Krohn, Marvin D., Lanza-Kaduce, Lonn, and Marcia Radosevich.
1979. "Social Learning and Deviant Behavior: A Specific Test of a General
Theory." American Sociological Review 44(4): 636-655.
Anderson, Elijah. 1990. Streetwise. Chicago, IL: University of Chicago Press.
Anglin, M. Douglas, Yih-Ing Hser and Chih-Ping Chou. 1993. “Reliability and Validity
of Retrospective Behavioral Self-Report by Narcotics Addicts.” Evaluation
Review 17(1): 91-108.
Athens, Lonnie H. 1992. The Creation of Dangerous Violent Criminals. Urbana, IL:
University of Illinois Press.
Austin, James and John Irwin. 2001. It’s About Time: America’s Imprisonment Binge,
Third Edition. Belmont, CA: Wadsworth.
Axinn, William G., Lisa D. Pearce and Dirgha Ghimire. 1999. “Innovations in Life
History Calendar Applications.” Social Science Research 28: 243-264.
Bartollas, Clemens. 2002. Invitation to Corrections. Boston, MA: Allyn and Bacon.
Beck, Allen J. and Darrell K. Gilliard. 1995. Prisoners in 1994. NCJ-151654.
Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics.
Becker, Howard S. 1966. “Introduction.” In Clifford R. Shaw, The Jack-Roller: A
Delinquent Boy’s Own Story (pp. v-xviii). Chicago, IL: University of Chicago
Press.
Belli, Robert F. 2000. Computerized Event History Calendar Methods: Facilitating
Autobiographical Recall. Presentation, Proceedings of the Survey Research
Methods Section: American Statistical Association.
Belli, Robert F. 1998. “The Structure of Autobiographical Memory and the Event
History Calendar: Potential Improvements in the Quality of Retrospective
Reports in Surveys.” Memory 6(4): 383-406.
Belli, Robert F., Shay, William L., and Frank P. Stafford. 2001. “Event History
Calendars and Question List Surveys: A Direct Comparison of Interviewing
Methods.” Public Opinion Quarterly 65: 45-74.
Blokland, Arjan A.J. and Paul Nieuwbeerta. 2005. “The Effects of Life Circumstances
on Longitudinal Trajectories of Offending.” Criminology 43(4): 1203-1240.
Blumer, Herbert. 1969. Symbolic Interactionism: Perspective and Method. Berkeley,
CA: University of California Press.
Blumstein, Alfred, Cohen, Jacqueline, Roth, Jeffrey A., and Christy A. Visher. 1986.
“Methodological Issues in Criminal Career Research.” In Alfred Blumstein et al.
(eds.) Criminal Careers and “Career Criminals,” Volume I (pp. 96-108).
Washington D.C.: National Academy Press.
Bourgois, Philippe. 2003. In Search of Respect: Selling Crack in El Barrio, 2nd Edition.
New York, NY: Cambridge University Press.
Bradburn, Norman M., Rips, Lance J. and Steven K. Shevell. 1987. “Answering
Autobiographical Questions: The Impact of Memory and Inference on Surveys.”
Science 236: 157-161.
Cannell, Charles F. and Robert L. Kahn. 1968. “Interviewing.” In Lindzey Gardner and
Elliot Aronson (eds.). The Handbook of Social Psychology, 2nd Edition (pp. 526-595). Reading, MA: Addison-Wesley Publishing Company.
Carmines, Edward G. and Richard A Zeller. 1979. “Reliability and Validity
Assessment.” Sage University Paper Series on Quantitative Applications in the
Social Sciences, 07-017. Newbury Park, CA: Sage.
Caspi, Avshalom, Moffitt, Terrie E., Thornton, A., Freedman, D., Amell, J.W., Harrington,
H., et al. 1996. “The Life History Calendar: A Research and Clinical
Assessment Method for Collecting Retrospective Event-History Data.”
International Journal of Methods in Psychiatric Research 6: 101-114.
Castellano, Ursula. 2007. “Becoming a Nonexpert and Other Strategies for Managing
Fieldwork Dilemmas in the Criminal Justice System.” Journal of Contemporary
Ethnography 36(6): 704-730.
Chaiken, Jan M. and Marcia R. Chaiken. 1982. Varieties of Criminal Behavior. Santa
Monica, CA: Rand Corporation.
Chesney-Lind, Meda and Lisa Pasko. 2004. The Female Offender: Girls, Women, and
Crime, 2nd Edition. Thousand Oaks, CA: Sage.
CIIC. 2006a. Evaluation and Inspection Report: London Correctional Institution.
Correctional Institution Inspection Committee. Retrieved June 14, 2007 from the
World Wide Web: http://www.ciic.state.oh.us/reports/londonci.pdf.
CIIC. 2006b. Inspection of the Madison Correctional Institution. Correctional
Institution Inspection Committee. Retrieved June 14, 2007 from the World Wide
Web: http://www.ciic.state.oh.us/reports/mcir.pdf.
Clemmer, Donald. 1940. The Prison Community. Boston, MA: Christopher Publishing
House.
Cohen, Jacob. 1960. “A Coefficient of Agreement for Nominal Scales.” Educational
and Psychological Measurement 20(1): 37-46.
Collins, Randall. 1979. Credential Society: A Historical Sociology of Education and
Stratification. New York, NY: Academic Press.
Connell, Robert W. 1992. “A Very Straight Gay: Masculinity, Homosexual Experience,
and the Dynamics of Gender.” American Sociological Review 57(6): 735-751.
Conover, Ted. 2000. New Jack: Guarding Sing Sing. New York, NY: Vintage Books.
Copes, Heith and Andy Hochstetler. 2006. “Why I’ll Talk: Offenders’ Motives for
Participating in Qualitative Research.” In Paul Cromwell (ed.) In Their Own
Words: Criminals on Crime, Fourth Edition (pp. 19-28). Los Angeles, CA:
Roxbury Publishing Company.
Davidson, R. Theodore. 1974. Chicano Prisoners: The Key to San Quentin. Prospect
Heights, IL: Waveland Press.
Davies, Pamela. 2000. “Doing Interviews with Female Offenders.” In Victor Jupp,
Pamela Davies, and Peter Francis (eds.) Doing Criminological Research (pp. 82-96). Thousand Oaks, CA: Sage.
Day, Carolyn, Collins, Linette, Degenhardt, Louisa, Thetford, Clare, and Lisa Maher.
2004. “Reliability of Heroin Users’ Reports of Drug Use Behaviour Using a 24
Month Timeline Follow-Back Technique to Assess the Impact of the Australian
Heroin Shortage.” Addiction Research and Theory 12(5): 433-443.
DeCoster, Jamie and Heather Claypool. 2004. Data Analysis in SPSS. Retrieved July
12, 2008 from the World Wide Web: http://www.stat-help.com/notes.html
DeFrancesco, John J., Armstrong, Sandra S., and Patrick J. Russolillo. 1996. “On the
Reliability of the Youth Self-Report.” Psychological Reports 79: 322.
Department of Health, Education, and Welfare. 1978. The Belmont Report: Ethical
Principles and Guidelines for the Protection of Human Subjects of Research.
DHEW-(OS)78-0012. Washington, D.C.: Department of Health, Education, and
Welfare.
Dex, Shirley. 1991. “Life and Work History Analyses.” In Shirley Dex (ed.). Life and
Work History Analyses: Qualitative and Quantitative Developments (pp. 1-19).
New York, NY: Routledge.
Douglas, Dorothy J. 1972b. “Managing Fronts in Observing Deviance.” In Jack D.
Douglas (ed.) Research on Deviance (pp. 93-115). New York, NY: Random
House.
Douglas, Jack D. 1972a. “Observing Deviance.” In Jack D. Douglas (ed.) Research on
Deviance (pp. 3-34). New York, NY: Random House.
Elliott, Delbert S., Dunford, Franklyn W., and David Huizinga. 1987. “The
Identification and Prediction of Career Offenders Utilizing Self-Reported and
Official Data.” In John D. Burchard and Sara N. Burchard (eds). Prevention of
Delinquent Behavior (pp. 90-121). Thousand Oaks, CA: Sage.
Elsner, Alan. 2006. Gates of Injustice: The Crisis in America’s Prisons. Upper Saddle
River, NJ: Prentice Hall.
Engel, Lawrence S., Keifer, Matthew C., Thompson, Mary Lou, and Shelia H. Zahm.
2001. "Test-Retest Reliability of an Icon/Calendar-Based Questionnaire Used to
Assess Occupational History." American Journal of Industrial Medicine 40:
512-522.
Engel, Lawrence S., Keifer, Matthew C., and Shelia H. Zahm. 2001. “Comparison of a
Traditional Questionnaire with an Icon/Calendar-Based Questionnaire to Assess
Occupational History.” American Journal of Industrial Medicine 40: 502-511.
Ensel, Walter M., Peek, Kristen M, Lin, Nan, and Gina Lai. 1996. “Stress in the Life
Course: A Life History Approach.” Journal of Aging and Health 8(3): 389-416.
Everitt, Brian S. 1998. The Cambridge Dictionary of Statistics. New York, NY:
Cambridge University Press.
Everitt, Brian and Dale Hay. 1992. Talking About Statistics: A Psychologist’s Guide to
Design and Analysis. New York, NY: Halsted Press.
Farrington, David P. 2003. “Developmental and Life-Course Criminology: Key
Theoretical and Empirical Issues – The 2002 Sutherland Award Address.”
Criminology 41(2): 221-255.
Farrington, David P., Loeber, Rolf, Stouthamer-Loeber, Magda, Van Kammen, Welmoet
B., and Laura Schmidt. 1996. “Self-Reported Delinquency and a Combined
Delinquency Seriousness Scale Based on Boys, Mothers, and Teachers:
Concurrent and Predictive Validity for African-Americans and Caucasians.”
Criminology 34(4): 493-517.
Fendrich, Michael and Connie M. Vaughn. 1994. “Diminished Lifetime Substance Use
Over Time: An Inquiry into Differential Underreporting.” Public Opinion
Quarterly 58: 96-123.
Ferrell, Jeff. 1998. “Criminological Verstehen: Inside the Immediacy of Crime.” In
Jeff Ferrell and Mark S. Hamm (eds.) Ethnography at the Edge: Crime,
Deviance, and Field Research (pp. 20-42). Boston, MA: Northeastern University
Press.
Fisher-Giorlando, Marianne. 2003. “Why I Study Prisons: My Twenty-Year Personal
and Professional Odyssey and an Understanding of Southern Prisons.” In Jeffrey
Ian Ross and Stephen C. Richards (eds.) Convict Criminology (pp. 59-76).
Belmont, CA: Wadsworth.
Fleisher, Mark S. 1998. “Ethnographers, Pimps, and the Company Store.” In Jeff Ferrell
and Mark S. Hamm (eds.) Ethnography at the Edge: Crime, Deviance, and Field
Research (pp. 44-64). Boston, MA: Northeastern University Press.
Fleiss, Joseph L. 1981. Statistical Methods for Rates and Proportions, 2nd Edition.
New York, NY: John Wiley & Sons.
Fontana, Andrea and James H. Frey. 2003. “The Interview: From Structured Questions
to Negotiated Text” In Norman K. Denzin and Yvonna S. Lincoln (eds.).
Collecting and Interpreting Qualitative Materials (pp. 645-672). Thousand Oaks,
CA: Sage.
Fox, James Alan and Jack Levin. 2005. Extreme Killing: Understanding Serial and
Mass Murder. Thousand Oaks, CA: Sage.
Freedman, Deborah, Thornton, Arland, Camburn, Donald, Alwin, Duane, and Linda
Young-DeMarco. 1988. “The Life History Calendar: A Technique for
Collecting Retrospective Data.” Sociological Methodology 18: 37-68.
Gabriel, Craig. 2005. Prison Conversations: Prisoners at the Washington State
Reformatory Discuss Life, Freedom, Crime and Punishment. Arroyo Grande,
CA: Teribooks.
Gardner, Howard. 1983. Frames of Mind: The Theory of Multiple Intelligences. New
York, NY: Basic Books.
Gibson, James William. 1994. Warrior Dreams: Paramilitary Culture in Post-Vietnam
America. New York, NY: Hill and Wang.
Glassner, Barry. 2000. The Culture of Fear: Why Americans Are Afraid of the Wrong
Things. New York, NY: Basic Books.
Glueck, Sheldon and Eleanor Glueck. 1950. Unraveling Juvenile Delinquency. New
York, NY: Commonwealth Fund.
Goffman, Erving. 1964. “The Neglected Situation.” American Anthropologist 66:
133-36.
Goffman, Erving. 1961. Asylums: Essays on the Social Situation of Mental Patients and
Other Inmates. New York, NY: Doubleday.
Goffman, Erving. 1959. The Presentation of Self in Everyday Life. New York, NY:
Doubleday.
Golub, Andrew, Johnson, Bruce D., Taylor, Angela, and Hilary James Liberty. 2002.
“The Validity of Arrestees’ Self-Reports: Variations Across Questions and
Persons.” Justice Quarterly 19(3): 477-502.
Gordon, Robert Ellis. 2000. “Introduction.” In Robert Ellis Gordon (ed.) The Funhouse
Mirror: Reflections on Prison (xi-xxii). Pullman, WA: Washington State
University Press.
Gottfredson, Don. 1971. “Assessment of Methods.” In Leon Radzinowicz and Marvin
E. Wolfgang (eds.) The Criminal in Confinement (pp. 343-375). New York, NY:
Basic Books.
Granack, TJ. 2000. “Welcome to the Steel Hotel: Survival Tips for Beginners.” In
Robert Ellis Gordon (ed.) The Funhouse Mirror: Reflections on Prison (6-10).
Pullman, WA: Washington State University Press.
Hagan, John and Bill McCarthy. 1998. Mean Streets: Youth Crime and Homelessness.
New York, NY: Cambridge University Press
Harris, Deborah A. and Domenico “Mimmo” Parisi. 2007. “Adapting Life History
Calendars for Qualitative Research on Welfare Transitions.” Field Methods
19(1): 40-58.
Harrison, Paige M. and Allen J. Beck. 2006. Prisoners in 2005. NCJ-215092.
Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics.
Hart, Cynthia Baroody. 1995. “A Primer in Prison Research.” Journal of Contemporary
Criminal Justice 11: 165-75.
Hassine, Victor. 1999. Life Without Parole: Living in Prison Today. Los Angeles, CA:
Roxbury Publishing Company.
Henry, Bill, Moffitt, Terrie E., Caspi, Avshalom, Langley, John, and Phil A. Silva. 1994.
“On the “Remembrance of Things Past”: A Longitudinal Evaluation of the
Retrospective Method.” Psychological Assessment 6(2): 92-101.
Hindelang, Michael J., Travis Hirschi and Joseph G. Weis. 1981. Measuring
Delinquency. Beverly Hills, CA: Sage.
Hirschi, Travis. 1969. Causes of Delinquency. Berkeley, CA: University of California
Press.
Hirschi, Travis and Michael Gottfredson. 1983. “Age and the Explanation of Crime.”
American Journal of Sociology 89(3): 552-584.
Hochschild, Arlie Russell. 1983. The Managed Heart: Commercialization of Human
Feeling. Berkeley, CA: University of California Press.
Horney, Julie. 2001. “Criminal Events and Criminal Careers: An Integrative Approach
to the Study of Violence.” In Robert F. Meier, Leslie W. Kennedy, and Vincent
F. Sacco (eds.). The Process and Structure of Crime (pp. 141-167). Brunswick,
NJ: Transaction.
Horney, Julie, D. Wayne Osgood and Ineke Haen Marshall. 1995. “Criminal Careers in
the Short-Term: Intra-Individual Variability in Crime and Its Relation to Local
Life Circumstances.” American Sociological Review 60: 655-673.
Horney, Julie and Ineke Haen Marshall. 1992. “An Experimental Comparison of Two
Self-Report Methods For Measuring Lambda.” Journal of Research in Crime and
Delinquency 29(1): 102-121.
Horowitz, Ruth. 1986. “Remaining an Outsider: Membership as a Threat to Research
Rapport.” Urban Life 14(4): 409-430.
Hser, Yih-Ing, Maglione, Margaret and Kathleen Boyle. 1999. “Validity of Self-Report
of Drug Use Among STD Patients, ER Patients, and Arrestees.” American
Journal of Drug and Alcohol Abuse 25(1): 81-91.
Hubner, John. 2005. Last Chance in Texas: The Redemption of Criminal Youth. New
York, NY: Random House.
Hughes, Everett Cherrington. 1945. “Dilemmas and Contradictions of Status.”
American Journal of Sociology 50(5): 353-359.
Hughes, Timothy and Doris James Wilson. 2002. Reentry Trends in the United States:
Inmates Returning to the Community After Serving Time in Prison. Washington,
D.C.: U.S. Department of Justice, Bureau of Justice Statistics.
Huizinga, David and Delbert S. Elliott. 1986. “Reassessing the Reliability and Validity
of Self-Report Delinquency Measures.” Journal of Quantitative Criminology
2(4): 293-327.
Irwin, John. 1985. The Jail: Managing the Underclass in American Society. Berkeley,
CA: University of California Press.
Irwin, John, Schiraldi, Vincent, and Jason Ziedenberg. 1999. America’s One Million
Non-Violent Prisoners. Washington, D.C.: Justice Policy Institute.
Jacobs, Bruce A. and Richard Wright. 1999. “Stick-Up, Street Culture, and Offender
Motivation.” Criminology 37(1): 149-173.
Jacobs, James B. 1977. Stateville: The Penitentiary in Mass Society. Chicago, IL:
University of Chicago Press.
Jacobs, James B. 1974. “Participant Observation in Prison.” Urban Life and Culture
3(2): 221-240.
Jenkins, Richard. 1995. “Social Skills, Social Research Skills, Sociological Skills:
Teaching Reflexivity?” Teaching Sociology 23: 16-27.
Johnson, Alan. 2004. “Inmate-Staff Relationships: Efforts Under Way to
End Illegal Sex, Prisons Chief Says.” Columbus Dispatch (March 18th).
Johnson, Bruce D., Taylor, Angela, Golub, Andrew, and John Eterno. 2002. How
Accurate are Arrestees in Reporting their Criminal Justice Histories?:
Concordance and Accuracy of Self-Reports Compared to Official Records. NCJ-196657. Washington, D.C.: U.S. Department of Justice, National Institute of
Justice.
Johnson, Robert. 2002. Hard Time: Understanding and Reforming the Prison.
Belmont, CA: Wadsworth.
Jolliffe, Darrick, Farrington, David P., Hawkins, J. David, Catalano, Richard F., Hill,
Karl G., and Rick Kosterman. 2003. “Predictive, Concurrent, Prospective and
Retrospective Validity of Self-Reported Delinquency.” Criminal Behaviour and
Mental Health 13: 179-197.
Junger-Tas, Josine and Ineke Haen Marshall. 1999. “The Self-Report Methodology in
Crime Research.” Crime and Justice 25: 291-367.
Kiefer, Stephan. 2004. “Research with Prisoners as Subjects.” In the Collaborative IRB
Training Initiative (CITI) Basic Course in the Protection of Human Research
Subjects for Social / Behavioral Research. Coral Gables, FL: University of
Miami.
King, Roy D. 2000. “Doing Research in Prisons.” In Roy D. King and Emma Wincup
(eds.) Doing Research on Crime and Justice (pp. 285-312). New York, NY:
Oxford University Press.
Kinnear, Paul R. and Colin D. Gray. 2006. SPSS 14 Made Simple. New York, NY:
Psychology Press.
Knight, Kevin, Hiller, Matthew L., Simpson, D. Dwayne, and Kirk M. Broome. 1998.
“The Validity of Self-Reported Cocaine Use in a Criminal Justice Treatment
Sample.” American Journal of Drug and Alcohol Abuse 24(4): 647-660.
Kraemer, H.C. 1983. “Kappa Coefficient.” In Samuel Katz and Norman L. Johnson
(eds.) Encyclopedia of Statistical Sciences, Volume 4 (pp. 352-354). New York,
NY: John Wiley & Sons.
Kruttschnitt, Candace and Kristin Carbone-Lopez. 2006. “Moving Beyond the
Stereotypes: Women’s Subjective Accounts of their Violent Crime.”
Criminology 44(2): 321-352.
Landis, J. Richard and Gary G. Koch. 1977. “The Measurement of Observer Agreement
for Categorical Data.” Biometrics 33(1): 159-174.
Langan, Patrick A. and David J. Levin. 2002. Recidivism of Prisoners Released in
1994. NCJ-193427. Washington, D.C.: U.S. Department of Justice, Bureau of
Justice Statistics.
Laub, John H. and Robert J. Sampson. 2003. Shared Beginnings, Divergent Lives:
Delinquent Boys to Age 70. Cambridge, MA: Harvard University Press.
La Vigne, Nancy G., Thomson, Gillian L., Visher, Christy, Kachnowski, Vera, and
Jeremy Travis. 2003. A Portrait of Prisoner Reentry in Ohio. Washington,
D.C.: Urban Institute, Justice Policy Center.
Lewis, Darren and Bonny Mhlanga. 2001. “A Life of Crime: The Hidden Truth About
Criminal Activity.” International Journal of Market Research 43(2): 217-240.
Liebling, Alison. 1999. “Doing Research in Prison: Breaking the Silence?” Theoretical
Criminology 3: 147-73.
Lin, Nan, Ensel, Walter M. and Wan-foon Gina Lai. 2006. “Construction and Use of the
Life History Calendar: Reliability and Validity of Recall Data.” In Ian H. Gotlib
and Blair Wheaton (eds.). Stress and Adversity Over the Life Course (pp. 249-272). New York, NY: Cambridge University Press.
Litwin, Mark S. 1995. How to Measure Survey Reliability and Validity. Thousand
Oaks, CA: Sage.
Lyng, Stephen. 1998. “Dangerous Methods: Risk Taking and the Research Process.”
In Jeff Ferrell and Mark S. Hamm (eds.) Ethnography at the Edge: Crime,
Deviance, and Field Research (pp. 221-251). Boston, MA: Northeastern
University Press.
MacKenzie, Doris Layton and Spencer De Li. 2002. “The Impact of Formal and
Informal Social Controls on the Criminal Activities of Probationers.” Journal of
Research in Crime and Delinquency 39(3): 243-276.
Marquart, James W. 1986. “Doing Research in Prison: The Strengths and Weaknesses
of Full Participation as a Guard.” Justice Quarterly 3(1): 15-32.
Marquis, Kent H. 1981. Quality of Prisoner Self-Reports: Arrest and Conviction
Response Errors. Santa Monica, CA: Rand Corporation.
Martin, Carol. 2000. “Doing Research in a Prison Setting.” In Victor Jupp,
Pamela Davies, and Peter Francis (eds.) Doing Criminological Research (pp. 215-233). Thousand Oaks, CA: Sage.
Martin, Donnie M. and Peter Y. Sussman. 2002. “Report from an Overcrowded Maze.”
In Leanne Fiftal Alarid and Paul F. Cromwell (eds.) Correctional Perspectives:
Views from Academics, Practitioners, and Prisoners. Los Angeles, CA: Roxbury
Publishing Company.
Maruna, Shadd. 2001. Making Good: How Ex-Convicts Reform and Rebuild their
Lives. Washington, D.C.: American Psychological Association.
Mauer, Marc. 1999. Race to Incarcerate. New York, NY: The New Press.
Maxfield, Michael G., Barbara Luntz Weiler and Cathy Spatz Widom. 2000.
“Comparing Self-Reports and Official Records of Arrests.” Journal of
Quantitative Criminology 16(1): 87-110.
McCall, Nathan. 1994. Makes Me Wanna Holler: A Young Black Man in America.
New York, NY: Vintage Books.
McCarthy, John. 2004. “Department Targets Sexual Abuse of Inmates.”
Associated Press (March 8th).
McPherson, J. Miller, Pamela A. Popielarz and Sonia Drobnic. 1992. “Social Networks
and Organizational Dynamics.” American Sociological Review 57(2): 153-170.
Mensch, Barbara S. and Denise B. Kandel. 1988. "Underreporting of Substances Use in
a National Longitudinal Youth Cohort: Individual and Interviewer Effects.”
Public Opinion Quarterly 52: 100-124.
Messerschmidt, James W. 2000. Nine Lives: Adolescent Masculinities, the Body, and
Violence. New York, NY: Perseus Books.
Messner, Michael A. 2002. Taking the Field: Women, Men, and Sports. Minneapolis,
MN: University of Minnesota Press.
Miethe, Terance D., McCorkle, Richard C., and Shelley J. Listwan. 2006. Crime
Profiles: The Anatomy of Dangerous Persons, Places, and Situations, 3rd Edition.
Los Angeles, CA: Roxbury Publishing Company.
Miller, Jody. 2001. One of the Guys: Girls, Gangs, and Gender. New York, NY:
Oxford University Press.
Moffitt, Terrie E. 1997. “Adolescence-Limited and Life-Course-Persistent Offending:
A Complementary Pair of Developmental Theories.” In Terence P. Thornberry
(ed.). Developmental Theories of Crime and Delinquency (pp. 11-54). New
Brunswick, NJ: Transaction.
Newman, Donald J. 1958. “Research Interviewing in Prison.” The Journal of Criminal
Law, Criminology, and Police Science 49: 127-32.
Northrup, David A. 1997. The Problem of the Self-Report in Survey Research. North
York, Ontario, Canada: Institute for Social Research.
Nye, F. Ivan and James F. Short. 1957. “Scaling Delinquent Behavior.” American
Sociological Review 22(3): 326-331.
O’Brien, Robert M. 2000. “Crime Facts: Victim and Offender Data.” In Joseph F.
Sheley (ed.), Criminology (pp. 59-83). Belmont, CA: Wadsworth.
O’Brien, Robert M. 1985. Crime and Victimization Data. Beverly Hills, CA: Sage
Publications.
ODRC. 2006a. April 2006 Facts. Ohio Department of Rehabilitation and Correction.
Retrieved April 10, 2008 from the World Wide Web:
http://www.drc.state.oh.us/web/Reports/FactSheet/April%202006.pdf
ODRC. 2006b. 2005 Intake Study. Bureau of Research, Office of Policy and
Offender Reentry. Ohio Department of Rehabilitation and Correction. Retrieved
April 10, 2008 from the World Wide Web:
http://www.drc.state.oh.us/web/Reports/intake/Intake%202005.pdf
Overholser, James C. 1987. “Ethical Issues in Prison Research: A Risk/Benefit
Analysis.” Behavioral Sciences & the Law 5: 187-202.
Pager, Devah. 2003. "The Mark of a Criminal Record." American Journal of Sociology
108(5): 937-975.
Pallant, Julie. 2005. SPSS Survival Manual, 2nd Edition. New York, NY: Open
University Press.
Peters, Candace. 2004. “Corrections.” In State of Crime and Justice in Ohio (pp. 161-201). Columbus, OH: Ohio Office of Criminal Justice Services.
Petersilia, Joan. 2003. When Prisoners Come Home: Parole and Prisoner Reentry.
New York, NY: Oxford University Press.
Pino, Nathan W. 2005. “Serial Offending and the Criminal Events Perspective.”
Homicide Studies 9(2): 109-148.
Popielarz, Pamela A. and J. Miller McPherson. 1995. “On the Edge or In Between:
Niche Position, Niche Overlap, and the Duration of Voluntary Association
Memberships.” American Journal of Sociology 101(3): 698-720.
Pryor, Douglas W. 1996. Unspeakable Acts: Why Men Sexually Abuse Children. New
York, NY: New York University Press.
Raymondo, James C. 1999. Statistical Analysis in the Behavioral Sciences. Boston, MA: McGraw-Hill.
Rhodes, Richard. 1999. Why They Kill: The Discoveries of a Maverick Criminologist.
New York, NY: Vintage Books.
Rideau, Wilbert and Ron Wikberg. 1992. “The Sexual Jungle.” In Wilbert Rideau and
Ron Wikberg (eds.) Life Sentences: Rage and Survival Behind Bars (pp. 73-107).
New York, NY: Times Books.
Reiman, Jeffrey. 2004. The Rich Get Richer and the Poor Get Prison, 7th Edition.
Boston, MA: Allyn and Bacon.
Roberts, Jennifer, Mulvey, Edward P., Horney, Julie, Lewis, John, and Michael L. Arter.
2005. “A Test of Two Methods of Recall for Violent Events.”
Journal of Quantitative Criminology 21(2): 175-193.
Ross, Jeffrey Ian and Stephen C. Richards. 2002. Behind Bars: Surviving Prison.
Indianapolis, IN: Alpha Books.
Ryan, Louise and Anne Golden. 2006. “‘Tick the Box Please’: A Reflexive Approach to
Doing Quantitative Social Research.” Sociology 40(6): 1191-1200.
Sabo, Don, Kupers, Terry A., and Willie London. 2001. “Gender and the Politics of
Punishment.” In Don Sabo, Terry A. Kupers, and Willie London (eds.). Prison
Masculinities (pp. 3-18). Philadelphia, PA: Temple University Press.
Salzman, Mark. 2003. True Notebooks. New York, NY: Knopf.
Santos, Michael G. 2004. About Prison. Belmont, CA: Wadsworth.
Santos, Michael G. 2003. Profiles from Prison: Adjusting to Life Behind Bars.
Westport, CT: Praeger.
Sampson, Robert J. and John H. Laub. 1993. Crime in the Making: Pathways and
Turning Points Through Life. Cambridge, MA: Harvard University Press.
Sampson, Robert J. and John H. Laub. 1992. “Crime and Deviance in the Life Course.”
Annual Review of Sociology 18: 63-84.
Schubert, Carol A., Mulvey, Edward P., Lidz, Charles W., Gardner, William P., and
Jennifer L. Skeem. 2005. “Weekly Community Interviews with High-Risk
Participants.” Journal of Interpersonal Violence 20(5): 632-646.
Shaw, Clifford R. 1951 (c1930). The Jack-Roller: A Delinquent Boy’s Own Story.
Philadelphia, PA: Albert Saifer.
Shaw, Clifford R. 1931. The Natural History of a Delinquent Career. Chicago, IL: The
University of Chicago Press.
Shaw, Clifford R., McKay, Henry D., and James F. McDonald. 1938. Brothers in
Crime. Chicago, IL: The University of Chicago Press.
Short, James F. and F. Ivan Nye. 1957. “Reported Behavior as a Criterion of Deviant
Behavior.” Social Problems 5(3): 207-213.
Shover, Neal and David Honaker. 1992. “The Socially Bounded Decision Making of
Persistent Property Offenders.” Howard Journal of Criminal Justice 31(4): 276-293.
Siegel, Sidney and N. John Castellan, Jr. 1988. Nonparametric Statistics for the
Behavioral Sciences, 2nd Edition. Boston, MA: McGraw-Hill.
Silverman, David. 1998. “Qualitative/Quantitative.” In Chris Jenks (ed.) Core
Sociological Dichotomies (pp. 78-98). Thousand Oaks, CA: Sage.
Sim, Julius and Chris C. Wright. 2005. “The Kappa Statistic in Reliability Studies: Use,
Interpretation, and Sample Size Requirements.” Physical Therapy 85(3): 257-268.
Singleton, Royce and Bruce C. Straits. 1999. Approaches to Social Research, 3rd
Edition. New York, NY: Oxford University Press.
Sobell, Linda C., Sobell, Mark B., Leo, Gloria I., and Anthony Cancilla. 1988.
“Reliability of a Timeline Method: Accessing Normal Drinkers’ Reports of
Recent Drinking and a Comparative Evaluation Across Several Populations.”
British Journal of Addiction 83(4): 393-402.
Sorensen, Robert C. 1950. “Interviewing Prison Inmates.” Journal of Criminal Law and
Criminology 41(2): 180-182.
Steinberg, Jonny. 2004. The Number: One Man’s Search for Identity in the Cape
Underworld and Prison Gangs. Johannesburg, South Africa: Jonathan Ball
Publishers.
Stop Prisoner Rape. 2003. The Sexual Abuse of Female Inmates in Ohio. Los Angeles,
CA: Stop Prisoner Rape Report.
Sudman, Seymour and Norman M. Bradburn. 1974. Response Effects in Surveys.
Chicago, IL: Aldine Publishing Company.
Sudman, Seymour, Bradburn, Norman M., and Norbert Schwarz. 1996. Thinking About
Answers: The Application of Cognitive Processes to Survey Methodology. San
Francisco, CA: Jossey-Bass Publishers.
Sullivan, Mercer L. 1998. “Integrating Qualitative and Quantitative Methods in the
Study of Developmental Psychopathology in Context.” Development and
Psychopathology 10: 377-393.
Sutherland, Edwin H. 1937. The Professional Thief, by a Professional Thief. Chicago,
IL: The University of Chicago Press.
Sutherland, Edwin H. and Donald R. Cressey. 1970. Criminology, 8th Edition.
Philadelphia, PA: J. B. Lippincott Company.
Sykes, Gresham M. 1958. The Society of Captives: A Study of a Maximum Security
Prison. Princeton, NJ: Princeton University Press.
Sykes, Gresham M. and Sheldon L. Messinger. 1960. “The Inmate Social System.”
Theoretical Studies in the Social Organization of the Prison 15: 5-19. New York,
NY: Social Science Research Council Pamphlet.
Tabachnick, Barbara G. and Linda S. Fidell. 2007. Using Multivariate Statistics, 5th
Edition. Boston, MA: Pearson.
Terry, Charles M. 2003. The Fellas: Overcoming Prison and Addiction. Belmont, CA:
Wadsworth/Thomson Learning.
Thornberry, Terence P. and Marvin Krohn. 2000. “The Self-Report Method for
Measuring Delinquency and Crime.” Measurement and Analysis of Crime and
Justice, v. 4. Washington, D.C.: National Institute of Justice.
Toch, Hans. 1992. Living in Prison: The Ecology of Survival. Washington, D.C.:
American Psychological Association.
Travis, Jeremy, McBride, Elizabeth Cincotta, and Amy L. Solomon. 2006. Families Left
Behind: The Hidden Costs of Incarceration and Reentry. Washington, D.C.:
The Urban Institute.
Trulson, Chad R., Marquart, James W., and Janet L. Mullings. 2004. “Breaking In:
Gaining Entry to Prisons and Other Hard-To-Access Criminal Justice
Organizations.” Journal of Criminal Justice Education 15(2): 451-478.
Tyler, Tom R. and Robert J. Boeckmann. 1997. “Three Strikes and You Are Out, but
Why? The Psychology of Public Support for Punishing Rule Breakers.” Law &
Society Review 31(2): 237-265.
Uggen, Christopher. 2000a. “Class, Gender, and Arrest: An Intergenerational Analysis
of Workplace Power and Control.” Criminology 38(3): 835-862.
Uggen, Christopher. 2000b. “Work as a Turning Point in the Life Course of Criminals:
A Duration Model of Age, Employment, and Recidivism.” American
Sociological Review 65(4): 529-546.
U.S. Census Bureau. 2006. Poverty Thresholds 2006. Housing and Household
Economic Statistics Division. Retrieved April 6, 2008 from the World Wide
Web: http://www.census.gov/hhes/www/poverty/threshld/thresh06.html.
Valentine, Christine. 2007. “Methodological Reflections: Attending and Tending to the
Role of the Researcher in the Construction of Bereavement Narratives.”
Qualitative Social Work 6(2): 159-176.
Visher, Christy A. and Jeremy Travis. 2003. “Transitions from Prison to Community:
Understanding Individual Pathways.” Annual Review of Sociology 29: 89-113.
Webb, Vincent J., Katz, Charles M. and Scott H. Decker. 2006. “Assessing the Validity
of Self-Reports by Gang Members: Results From the Arrestee Drug Abuse
Monitoring Programs.” Crime & Delinquency 52(2): 232-252.
Webster, Stephen D., Mann, Ruth E., Carter, Adam J., Long, Julia, Milner, Rebecca J.,
O’Brien, Matt D. et al. 2006. “Inter-Rater Reliability of Dynamic Risk
Assessment with Sexual Offenders.” Psychology, Crime & Law 12(4): 439-452.
Weis, Joseph G. 1986. “Issues in the Measurement of Criminal Careers.” In Alfred
Blumstein et al. (eds.), Criminal Careers and “Career Criminals,” Volume II (pp.
1-51). Washington, D.C.: National Academy Press.
Weisheit, Ralph A. 1998. “Marijuana Subcultures: Studying Crime in Rural America.”
In Jeff Ferrell and Mark S. Hamm (eds.) Ethnography at the Edge: Crime,
Deviance, and Field Research (pp. 178-203). Boston, MA: Northeastern
University Press.
Western, Bruce, Schiraldi, Vincent, and Jason Ziedenberg. 2003. Education &
Incarceration. Washington, D.C.: Justice Policy Institute.
Wheaton, Blair and Ian H. Gotlib. 1997. “Trajectories and Turning Points Over the Life
Course: Concepts and Themes.” In Ian H. Gotlib and Blair Wheaton (eds.),
Stress and Adversity Over the Life Course: Trajectories and Turning Points (pp.
1-25). New York, NY: Cambridge University Press.
Whitbeck, Les B., Hoyt, Dan R. and Kevin A. Yoder. 1999. “A Risk Amplification
Model of Victimization and Depressive Symptoms Among Runaway and
Homeless Adolescents.” American Journal of Community Psychology 27(2):
273-296.
Williams, Frank P. and Marilyn D. McShane. 2004. Criminological Theory, 4th
Edition. Upper Saddle River, NJ: Pearson Prentice Hall.
Williams, Marian R., King, William R., and Jefferson E. Holcomb. 2001. Criminal
Justice in Ohio. Boston, MA: Allyn and Bacon.
Wittebrood, Karin and Paul Nieuwbeerta. 2000. “Criminal Victimization During One’s
Life Course: The Effects of Previous Victimization and Patterns of Routine
Activities.” Journal of Research in Crime and Delinquency 37(1): 91-122.
Wolf, Leslie E., Zandecki, Jola, and Bernard Lo. 2004. “The Certificate of
Confidentiality Application: A View from the NIH Institutes.” IRB: Ethics &
Human Research 26(1): 14-18.
Wright, Richard, Decker, Scott H., Redfern, Allison K., and Dietrich L. Smith. 1992. “A
Snowball’s Chance in Hell: Doing Fieldwork with Active Residential Burglars.”
Journal of Research in Crime and Delinquency 29(2): 148-161.
Wright, Richard T. and Scott H. Decker. 1994. Burglars on the Job: Streetlife and
Residential Break-Ins. Boston, MA: Northeastern University Press.
Yacoubian, George S. 2003. “Assessing the Efficacy of the Calendar Method with
Oklahoma City Arrestees.” Journal of Crime & Justice 26(1): 117-131.
Yacoubian, George S. 2001. “Exploring the Temporal Validity of Self-Reported
Marijuana Use Among Juvenile Arrestees.” Journal of Alcohol and Drug
Education 46(3): 34-42.
Yoder, Kevin A., Whitbeck, Les B. and Dan R. Hoyt. 2003. “Gang Involvement
and Membership Among Homeless and Runaway Youth.” Youth & Society
34(4): 441-467.
Yoshihama, Mieko, Gillespie, Brenda, Hammock, Amy C., Belli, Robert F., and Richard
M. Tolman. 2005. “Does the Life History Calendar Method Facilitate the Recall
of Intimate Partner Violence? Comparison of Two Methods of Data Collection.”
Social Work Research 29(3): 151-163.
Yoshihama, Mieko, Clum, Kimberly, Crampton, Alexandra, and Brenda Gillespie. 2002.
“Measuring the Lifetime Experience of Domestic Violence: Application of the
Life History Calendar Method.” Violence and Victims 17(3): 297-317.
Zahn, Shelia Hoar, Colt, Joanne S., Engel, Lawrence S., Keifer, Matthew C., Alvarado,
Andrew J., Burau, Keith et al. 2001. “Development of a Life Events/Icon
Calendar Questionnaire to Ascertain Occupational Histories and Other
Characteristics of Migrant Farmworkers.” American Journal of Industrial
Medicine 40: 490-501.
Zehr, Howard. 1996. Doing Life: Reflections of Men and Women Serving Life
Sentences. Intercourse, PA: Good Books.
Zhang, Lening, Welte, John W., and William F. Wieczorek. 2001. “Deviant Lifestyle
and Crime Victimization.” Journal of Criminal Justice 29: 133-143.