Invited Article

Avoidable Ignorance and the Role of Cochrane and Campbell Reviews

Eileen Gambrill
University of California at Berkeley, Berkeley, CA, USA

Research on Social Work Practice, 1-17
© The Author(s) 2014
DOI: 10.1177/1049731514533731

Corresponding Author: Eileen Gambrill, University of California at Berkeley, 120 Haviland Hall, Berkeley, CA 94720, USA. Email: [email protected]
Abstract
The Campbell and Cochrane Collaborations were created to reveal the evidentiary status of claims, focusing especially on the effectiveness of specific interventions. Such reviews are constrained by the population of studies available and by biases that may influence this availability, such as preferred framings of problems. This highlights the importance of attending to how problems are
framed and the validity of measures used in such reviews, as well as the importance of reviews focusing on questions concerning
problem framing and the accuracy of measures. Neglecting such questions, both within reviews of effectiveness and in separate
reviews concerning related claims, results in lost opportunities to decrease avoidable ignorance. Domains of avoidable ignorance
are suggested using examples of Cochrane/Campbell reviews. Without attention to problem framing, systematic reviews may
contribute to maintaining avoidable ignorance.
Keywords
evidence-based practice, problem-framing, ethics, avoidable ignorance, Campbell reviews, Cochrane reviews, decisions
The Cochrane and Campbell Collaborations were created to help involved parties make well-informed decisions about the effects of interventions. They arose in response to gaps between what clinicians and policy makers drew on and what would be desirable to draw on, in terms of both evidentiary and ethical concerns (e.g., informed consent).
Using systematic reviews of studies of interventions (programs,
practices, and policies), C2 helps policymakers, practitioners,
researchers, and the public identify what works. Systematic
reviews synthesize available high quality evidence on interventions. After a thorough search of the literature to screen available
studies for quality, reviewers identify the least equivocal evidence available on an intervention, describe what the evidence
says about the intervention’s effectiveness, and explore how that
effectiveness is influenced by variations in process, implementation, intervention components, participants, and other factors.
(Boruch, 2005, p. 1)
There is an interest in revealing uncertainties about the effects
of interventions. The importance of thinking carefully about
claims of knowledge and ignorance, and the means used to
forward them, is highlighted by harming in the name of helping (Chalmers, 2003). Rather than using resources to identify,
describe, expose, and advocate for use of knowledge to reveal
and minimize avoidable suffering, some use scarce resources
to do the opposite: hide, distort, and increase avoidable ignorance and its consequences (Gambrill, 2012a). Newspapers
daily highlight the prevalence of avoidable ignorance and its
negative consequences including continued use of procedures
that do more harm than good and failure to use interventions
that help people. Variations in interventions used may be
related to ignorance; the study of variations in interventions
used for the same problem was key in the press for greater
scrutiny of interventions and related effects (Wennberg,
2002).
The Cochrane and Campbell Collaborations have played a
vital role in advocating for the exposure of ignorance, especially about the effects of interventions. Are there ways in
which related reviews maintain or increase avoidable ignorance, for example, by ignoring questions, such as how problems are framed, both within reviews of effectiveness and in
separate reviews devoted to such questions? The purview of
C1 (Cochrane) and C2 (Campbell) reviews is limited by the availability of studies, published and unpublished. The concern with "garbage in–garbage out," long noted, typically refers to methodological limitations; concerns about "propaganda in–propaganda out," in terms of problem framing and related measures, have been neglected.
First, I review kinds of ignorance and their sources. Then, I discuss how this applies to Cochrane and Campbell reviews using
two examples of recent reviews. Finally, I suggest additional
opportunities for systematic reviews to reveal and to decrease
avoidable ignorance.
What Is Ignorance?
What is ignorance? Is some avoidable? Are there valid measures of ignorance, avoidable or not? What are key sources,
both avoidable and not? What accounts for the making and
unmaking of certain kinds of ignorance (e.g., whistle-blowers)?
Who knows what, when, and who does not, with what consequences? Is it always a good idea to reveal ignorance? What
is the relationship between group ignorance and individual
ignorance? Who are the purveyors of ignorance and of what
kinds and by what methods? Would a focus on ignorance benefit clients more than a focus on ‘‘what we know?’’ Ignorance
can be defined as a lack of knowledge that may be useful in
making decisions and solving problems. Proctor and Schiebinger (2008) argue that attention to ignorance is as important as
attention to ‘‘what we know.’’ The term ‘‘agnotology’’ refers
to the ‘‘making and unmaking of ignorance’’ (Proctor & Schiebinger, 2008). Given the ambiguity of the word ‘‘know,’’ the
phrase ‘‘I don’t know’’ can be equally ambiguous.
There are many kinds of ignorance. Avoidable ignorance refers to ignorance that we can avoid, in the present or the future. Much ignorance is unavoidable, reflecting the inevitable uncertainty involved in making decisions; it may be due, for example, to technological limitations in investigating related phenomena. Much, however, is avoidable and may result in harm (e.g., Gøtzsche, 2013). Ravetz (1993) argues that the sin of present-day science is a reluctance to acknowledge our ignorance of our ignorance. In Cancer Wars: How Politics Shapes What We Know and Don't Know About Cancer, Proctor (1995) describes strategies used by the tobacco industry to create doubt concerning the association between smoking and lung cancer. Promoting ignorance has been key to fraud and quackery throughout the ages: our hopes for relief from misery or a happier love life are met with promises of satisfaction.
Berry (2005) uses the phrase ‘‘for-profit and for-power
ignorance’’ to refer to deliberate obscuring or withholding of
knowledge, as occurs in advertising and propaganda. Smithson
(1989, 1993, 2010) argues that ignorance and uncertainty play
key roles in cognition, social relations, organizations, culture,
and politics, and that they are not always negative. He argues that these roles are not just the opposite of those played by knowledge: "People have vested interests in ignorance and uncertainty . . . People have reasons for not knowing and not wanting to know. People get things done with ignorance and uncertainty, and they trade them for other things . . . Knowledge is power, but so is ignorance" (Smithson, 2010, p. 84). Like others in the area of agnotology (e.g., McGoey, 2012), he argues that
ignorance is socially constructed; people are motivated to create and maintain ignorance, often systematically. Exposing
avoidable ignorance is the very purpose of the Campbell and
Cochrane Collaborations, especially concerning the effectiveness of interventions. Reviewers accomplish this by separating
the wheat from the chaff. The amount of avoidable ignorance is
suggested by the very extent of the chaff reflected in flowcharts
illustrating the number of studies deemed not sound enough to
contribute to answering a specific question. For example, in a
review of the quality and content of 2,000 controlled trials
regarding schizophrenia over 50 years, only 16 were of high
quality (Thornley & Adams, 1998). Ioannidis (2005) argues
that most published research findings are false. Many studies
cannot be replicated (e.g., Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). Djulbegovic and Hozo (2007)
argue that our unwillingness to tolerate incorrect results
depends on ‘‘how much we care about being wrong’’ (p. 215).
Ignorance can be a benefit, especially if recognized. For
example, it is a spur to solving problems. It can encourage caution in rushing to judgment in moral dilemmas. Popper
(1992, 1994) views all of life as problem solving. He suggests that all knowledge starts from problems; ‘‘this means
that knowledge starts from the tension between knowledge
and ignorance. No problems without knowledge—no problems without ignorance’’ (p. 64). ‘‘Our ignorance is boundless and sobering’’ (p. 64). Kerwin and Witte (1983)
identified six types of ignorance. In each case, we can ask
‘‘Do they apply to an entire society (e.g., as in taboos)?’’
Known unknowns refer to all the things we know we do not
know. Unknown unknowns refer to all the things we do not
know that we do not know. Abbott (2010) highlights the
prevalence of ignorance of collateral literatures on the part
of professionals. In a recent publication by the American
Academy of Social Work and Social Welfare (2013), it is
claimed that ‘‘Social work has matured from a set of family
and community practices to an evidence-based profession,
relying on systemic data . . . ’’ (p. 1). There is no evidence
for this claim and considerable counterevidence (e.g.,
Pignotti & Thyer, 2009).
Unknown knowns refer to all the things we do not know we
know. Unknown knowns require that there are "known knowns." Known knowns refer to facts—assertions about the world shown to be accurate, such as that water freezes at a certain temperature. Ignoring known knowns, such as sources of bias in research studies (e.g., Jadad & Enkin, 2007), is a major source of avoidable ignorance. Hundreds of books and thousands of articles describing research methods are available.
Required research courses are a staple of graduate professional
degree programs, yet we see thousands of reports each year
with fatal flaws. Lack of awareness (or ignoring) of known
knowns may result in avoidable errors such as selection of ineffective or harmful remedies. We may have acquired valuable
knowledge about a subject but forget that we have such knowledge. Such forgetting may influence quality of decisions. The
question, ‘‘What does it mean to ‘know’?’’ has been pursued
by philosophers throughout the centuries. The phrase ‘‘I know’’
is common in everyday language. You might say ‘‘I know CBT
is effective.’’ A psychologist may say ‘‘I am referring you to
Dr. Z for medication; she is an expert.’’ What does ‘‘expert’’
mean (e.g., Walton, 1999)? Examples of known knowns
include the following:
1. Anxiety and depression are related to social stressors.
2. Many potential biases may compromise results of randomized controlled trials (RCTs).
3. Well-designed RCTs control for more biases than do pretest–posttest designs.
4. Reviews labeled as "systematic" may not reflect characteristics critical to a systematic review.
5. Surrogate measures may not represent vital outcomes (such as mortality).
6. Every measure is error prone.
7. Conflicts of interest are common in the helping professions.
8. Avoidable errors are common in the helping professions.
9. Most clients are involved as uninformed or misinformed participants in service provision.
10. Foster children are medicated more often than children not in foster care.
Errors refer to all the things we think we know but do not.
Taboos refer to ‘‘socially enforced irrelevance’’ (Smithson,
1989). This includes ‘‘what people must not know or even
inquire about’’ (p. 8)—things we are not supposed to know, but
which may be helpful to know. (See also Douglas, 1973.) For instance, applied behavior analysis and its related benefits to clients are rarely described in social work texts. When it is described, it is often misrepresented (e.g., Thyer, 2005). Lawn (2001) uses the term
‘‘closed ignorance’’ to refer to a society deliberately overlooking its ignorance—choosing to believe what has yet to be
shown to be true. Forbidden knowledge can be traced back to
Genesis in the Bible; the sin was eating from the tree of knowledge. And, there are denials (e.g., things too painful to know).
Many have noted that self-censorship may limit science more
than laws (e.g., Kempner, 2008; Moran, 1998).
Berry (2005) includes categories of self-righteous ignorance
(from a failure to know ourselves and our limitations), fearful
ignorance (resulting from a lack of courage to accept unpopular, unpleasant, or tragic knowledge), and lazy ignorance,
resulting from our unwillingness to exert the effort to understand what is complex. Research indeed shows that self-assessments are inflated, making recognition of ignorance a challenge (Dunning, Heath, & Suls, 2004). And those who are least competent are least likely to recognize this. Inflated self-assessments impede opportunities to take advantage of "surprises" and anomalies—to discover new ideas. The category of "Know but Ignore" includes conducting studies that cannot answer the questions posed. It could also be called "Know but fool self (and others)." Thus, even though I know that a pretest–posttest design ignores many potential sources of bias, I state in my conclusions: "This study shows that homeopathic intervention cures depression" (making the post hoc ergo propter hoc error). I also may be skilled in the use of distracting ploys, such as asserting that concerns with bias in my study are yet another manifestation of a "dehumanizing scientism."
What is the gap between an individual’s perception of
‘‘known knowns’’ and ‘‘actual known knowns?’’ What is the
gap between research and theory related to a question and
descriptions in professional sources including C1 and C2
reviews? A gap can be viewed as avoidable ignorance.
Avoidable ignorance consists not only in inaccurate appraisal of the various domains of ignorance discussed, including "known knowns," but also in actions based on such appraisals in applied settings. It
will not benefit clients if a professional knows something but
fails to use this knowledge to enhance quality of life for clients;
implementation is an ongoing challenge (Fixsen, Blase,
Naoom, & Wallace, 2009). We could examine the consequences of each ‘‘unknown known’’ on the part of an individual
in terms of potential consequences for clients over time. Consider the ‘‘known known’’: Anxiety, depression, and physical
health are related to social stressors (e.g., Adler & Stewart,
2010; Drury et al., 2012). The role of environmental factors
in depression has been demonstrated for decades (e.g., Brown
& Harris, 1978). Inaccurate problem framing is one consequence of lack of knowledge (or its use) concerning environmental factors related to personal distress (e.g., viewing
anxiety and depression as caused by a ‘‘brain disease’’ and
recommending medication). There is a focus on the individual
as the cause of distress; environmental contributors are overlooked (Gambrill, 2014).
Uncertainty and Ignorance
Ignorance and uncertainty are closely related. Information may be available (or potentially available, including knowledge of our ignorance), but not used to reduce or reveal uncertainties about how to attain a valued outcome or to answer other kinds of questions of concern in the helping professions, such as questions about risk and the validity of measures. There are different kinds of uncertainty:
(1) uncertainty caused by lack of access to available knowledge—it is available but not to you—perhaps because of lack
of diligent searching and/or because it is available only to a
few, such as unshared knowledge of adverse drug effects on the
part of a drug company and (2) uncertainty caused by the fact
that no one knows. Perhaps no one will ever know or someone
may know in the future. For any given kind of information,
we can map who has access to it and who does not and with what
consequences. Thus, there are many causes of uncertainty, some
under our control, many not. Some matter in terms of consequences, both positive and negative, and some do not. Decreasing uncertainty may increase ignorance. Gross (2010) suggests
that extended knowledge related to ignorance refers to ‘‘knowledge about the limits of knowing in a certain area which
increases with every state of new knowledge’’ (p. 68). With
ignorance we do not know what we do not know, whereas with
uncertainty, we may be aware of domains of uncertainty but not
know the parameters of each and how they interact. Machlup
(1980) suggests the categories of negative knowledge (false
beliefs disproved), obsolete knowledge (it has lost relevance),
controversial claims, vague knowledge, and superstitions.
Individuals may opt for ignorance as illustrated by decisions
regarding a screening test, for example, for problems for which
there is no remedy. If, as Popper (1994) suggests, our ignorance
is vast compared with our knowledge and is destined to remain
so, uncertainty in making decisions will remain the rule.
Indeed, evidence-informed practice can be defined as a way
to handle the inevitable uncertainty in making decisions about
the welfare of clients in an informed, ethical manner (Gambrill, 2006), where informed refers to being accurately apprised of the degree of uncertainty regarding the evidentiary status of interventions (including assessment measures)—recommended and not.
Kinds of Questions and Related Avoidable Ignorance
The subject of ignorance, like the subject of knowledge, is
closely related to claim makers and their claims. Claims are
made about alleged risks, how problems should be framed,
which ones warrant attention, how they should be measured,
how they can (or cannot) be remedied, and how to measure outcomes. Most Cochrane and Campbell reviews focus on effectiveness questions. The essence of claims making is
constructing realities—realities that forward a particular view;
each claim entails a certain view of reality, for example, about a
problem. How should it be handled? Is it a problem? And for
whom? Consider controversies regarding use of marijuana.
Different policies include America’s ‘‘war on drugs,’’ the
decriminalization of all drugs as in Portugal in 2001 (Greenwald, 2009), and legalization of marijuana in Uruguay in
2013. We can examine avoidable ignorance and its consequences in relation to different kinds of claims.
- About a problem: Who says it is a problem and on what basis?
- About the claimed prevalence and consequences
- About how to frame a problem; about causes
- About assessment (e.g., measures to use, how to assess risk)
- About intervention methods to use
- About how to evaluate outcome (e.g., are potential harms considered?)
- About how to estimate costs and benefits.
Consider the nurse who recommended use of sensory integration to decrease hitting behavior on the part of a child (Kay
& Vyse, 2005). This example illustrates different kinds of
avoidable ignorance, different time periods in which it may
occur in a sequence of decisions, and related factors such as
quality of professional literature, training, supervision, and
administrative rules and regulations. Did the nurse conduct a
search of the literature regarding how best to decrease hitting?
If so, was she well trained in posing questions, searching for
answers, and evaluating what she found? Possession and use
of such skills may have avoided selection of sensory integration. Do staff in her agency receive timely corrective feedback?
Does the organization in which she works provide smartphones
with decision guides that coach staff to ask valuable questions
about an intervention being considered?
Examples of avoidable ignorance about problems and their causes include the belief that deciding what is a problem is easy. Problem creation and framing is a social, political, and economic issue with
billions of dollars at stake, as shown by promotional activities of
the biomedical industrial complex. We tend to focus on individuals as the locus of problems, often ignoring or downplaying
environmental factors. Examples of avoidable ignorance about
risks include the belief that relative (rather than absolute) risk
is informative. Examples of avoidable ignorance about the validity of assessment/diagnostic measures include the assumption
that small changes on the Hamilton Depression Scale mean that
people are no longer depressed (e.g., Healy, 2000), and the belief
that diagnoses in the Diagnostic and Statistical Manual of Mental Disorders are valid (Kirk, Gomory, & Cohen, 2013). Examples of avoidable ignorance about remedies (effectiveness)
include the belief that Scared Straight programs are effective
(Petrosino, Turpin-Petrosino, Hollis-Peel, & Lavenberg, 2013)
and the claim that brief psychological debriefing is effective in
decreasing risks of posttraumatic stress disorder (PTSD; Rose,
Bisson, Churchill, & Wessely, 2002).
Cochrane and Campbell Reviews: Inevitably Biased?
The purview of a systematic review is the population of related
studies, published and unpublished. Availability bias has long
been noted as a potential source of bias in meta-analyses and
systematic reviews. For example, material is more likely to get
published if it is compatible with popular frameworks as shown
by the history of science (e.g., Barber, 1961; Bauer, 2001;
Hook, 2002). Most publications describing major discoveries
were initially rejected by mainstream journals (Campanario,
2009). Planck (1949) speculated that promoters of a ruling
paradigm have to die off first before a new one ascends (see also
Kuhn, 1970). Unpublished as well as published studies are sought by Cochrane and Campbell reviewers, but prevailing views concerning problem framing may still bias these studies, as illustrated by the medicalization of life's problems (Conrad, 2007; Gambrill, 2012a; Kirk et al., 2013; Szasz, 2007a). Examples include the assumption that (mis)behaviors reflect brain disorders and that anxiety in social situations is a mental disorder.
Until recently, research grants submitted to the National Institute
of Mental Health (NIMH) concerning problems such as anxiety
and depression had to be cast in the psychiatric terminology of
the American Psychiatric Association (APA, 2013). (No longer, as announced by Insel (2013) of the NIMH, who promotes a brain-based approach to understanding behavior.) This highlights the importance of devoting greater attention to how problems are framed in systematic reviews. What do we find when
we closely examine a Campbell and/or Cochrane review?
Example 1
Consider the recent review by Regehr, Alaggia, Dennis, Pitts,
and Saini (2013): ‘‘Interventions to Reduce Distress in Adult
Victims of Sexual Violence and Rape: A Systematic Review.’’
How much attention is given to controversial issues regarding
problem framing? Is the problem framing compatible with
related research? Is the reliability and validity of measures used
in research studies reviewed clearly described? Are the interventions selected compatible with the problem framing and with the assessment/outcome measures used? Are claims about how exposure works accurate (e.g., see Brewin, 2006)? Considerable attention is given to methodological/design concerns and statistical analysis, but shouldn't we be rigorous in critiquing all important aspects of research reports? Only via such rigor across all questions can availability biases be kept in check.
Problem framing. The authors state that ‘‘Evidence suggests that
trauma associated with rape or sexual assault differs from
trauma stemming from other experiences, in part due to the
strong element of self-blame, the individualized nature of the
type of trauma, social support and social acceptance factors,
and the higher incidence of concurrent depression. Therefore,
it is critical to examine the effectiveness of interventions specific to victims of sexual violence and rape’’ (p. 6). We need
a more detailed discussion of evidence regarding the claimed
unique character of trauma in rape, drawing on the extensive
related literature concerning high-level adverse experiences.
Furthermore, each client’s experience with severe trauma of
any kind may be different, calling for individualized interventions (e.g., Hallam, 2013). No mention is given to literature
describing potential positive consequences of traumatic experiences (Lindstrom & Triplett, 2010; Rendon, 2012; Tedeschi &
Calhoun, 1996) encouraging a pathological focus on negative
experiences. There is no recognition given to problem creep
(stretching the boundaries of experiences labeled ‘‘traumatic’’)
in use of the term PTSD (e.g., Summerfield, 2001).
Measures used. When we review the description of measures
used, rather than facts and figures and description of the kinds
of reliability and validity explored, we find phrases such as
‘‘has good reported reliability and validity’’ (p. 33), ‘‘has been
used in hundreds of studies’’ (p. 34), and ‘‘is widely used’’
(p. 34). References are included but no details are given about
what is in them. What kinds of reliability were investigated?
How good is test–retest reliability? What kinds of validity were
explored? Often we find that this is construct validity, usually
convergent, exploring the correlation of one self-report test
with another self-report test of the same construct. What about
divergent validity? What about concurrent and predictive
validity? The entire field of psychometrics assumes that psychological attributes are quantifiable (Michell, 2000). But are
they? Which ones? How well? The published literature is
replete with vague claims about the reliability and validity of
measures used. Should we be satisfied with such claims in
Campbell and Cochrane reviews?
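As a reminder of what more informative reporting could look like, here is a minimal sketch in Python (with fabricated illustrative data; none of the numbers come from the studies reviewed) of how two of the properties at issue—test–retest reliability and convergent validity—are conventionally quantified as correlations. Coefficients of this kind, together with the kind of reliability or validity each represents, are what phrases such as "has good reported reliability and validity" leave unstated.

import numpy as np

# Hypothetical illustration only: quantify two psychometric properties
# as correlations, the convention such reviews leave undescribed.
rng = np.random.default_rng(0)

time1 = rng.normal(20, 5, size=50)             # self-report scores, first administration
time2 = time1 + rng.normal(0, 2, size=50)      # same measure two weeks later
test_retest_r = np.corrcoef(time1, time2)[0, 1]    # test-retest reliability

# A second self-report of the same construct (the common, weak form of
# construct validity described in the text: one self-report against another).
other = 0.6 * time1 + rng.normal(0, 4, size=50)
convergent_r = np.corrcoef(time1, other)[0, 1]     # convergent validity

print(f"test-retest r = {test_retest_r:.2f}, convergent r = {convergent_r:.2f}")

Analogous correlations against behavior in real-life settings (concurrent or predictive validity) are precisely what such reporting typically omits.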
Interventions used and related theories in studies. A wide variety of interventions was used in studies included in the Regehr et al. (2013) review, including eye movement desensitization and reprocessing (EMDR), assertion training, cognitive processing
therapy (CPT), prolonged exposure (PE), stress inoculation
training (SIT), and supportive psychotherapy. Some studies
provided intervention in group settings. Does this variety of
interventions reflect theoretical chaos, or do all share a vital
component such as exposure? If so, is the number and intensity
of exposure experiences in each study reported by study
authors? If not, do reviewers note this missing information?
They do not. And, as Ehlers and colleagues (2010, p. 269) ask,
‘‘Do all psychological treatments really work the same way in
post-traumatic stress disorder?’’ Are all interventions offered
‘‘bona fide’’ (Wampold et al., 2010)? Were pretrauma risk factors considered as outcomes (DiGangi et al., 2013)?
Do Studies Warrant Conclusions?
Results of this systematic review provide tentative evidence that
cognitive and behavioural interventions, in particular Cognitive Processing Therapy, Prolonged Exposure Therapy, Stress Inoculation
Therapy, and Eye Movement Desensitization and Reprocessing
can be associated with decreased symptoms of Post-Traumatic
Stress Disorder (PTSD), depression and anxiety in victims of rape
and sexual assault. There is a need for further well-designed controlled studies which differentiate victims of sexual assault and
rape from other traumatic events. (Regehr, Alaggia, Dennis, Pitts,
& Saini, 2013, p. 8)
Do the studies reviewed warrant such a conclusion? What does
the phrase ‘‘tentative evidence’’ mean? What does ‘‘in particular’’ mean? Is there really no great difference among these
interventions? What does ‘‘can be associated’’ mean? One
thing it means is that ‘‘it also might not be associated’’ (Bauer,
2001). Would you want to try one of these four interventions if
a professional said: ‘‘This can be associated with benefits but it
might not be as well.’’ Without a measure of common factors in
each study, we do not know the contribution of common factors
such as the therapeutic alliance as separate from specific interventions (e.g., Laska, Gurman, & Wampold, 2013; Wampold
& Budge, 2012). Budd and Hughes (2009) argue that reliance
on invalid DSM categories in selecting subjects and describing
problems lead inevitably to the Dodo Bird effect because there
is little or no accurate matching of assessment data and selection of an intervention. And, what about follow-up data?
Ideally, a series of single-case data would also be available
in every RCT, so we could explore effects of an intervention for
different people over time (e.g., Farmer & Nelson-Gray, 2005).
In User’s Guide to the Medical Literature, Guyatt, Rennie,
Meade, and Cook (2008) suggest that N of 1 studies provide the
most important kind of feedback to guide decisions. As Djulbegovic and Ash (2011) suggest, ‘‘Relying on efficacy data to
draw conclusions about effectiveness and feasibility of application of trial data to individual patients remains one of the most
important sources of clinical uncertainty . . . this uncertainty is a
key driver of well-documented variations in the practice of
medicine’’ (p. 1; or any other profession including social
work). They argue that ‘‘uncertainty, inevitable errors and unavoidable injustice, must be considered facts of life’’ (p. 3). For
interventions tested in RCTs, we can inform a client that x percentage of people in the trial (or trials) got better. We may even
say ‘‘people like you’’ get better. But what does ‘‘like you’’
mean? Perhaps only a small percentage of people have a serious adverse reaction to an intervention. But this few may
include you. Single-case studies allow ongoing monitoring of
outcome, so that adverse effects can be determined early on.
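The logic of such single-case monitoring is simple enough to sketch. The following minimal example (Python; the scores, baseline window, and threshold are hypothetical illustrations, not taken from any study discussed here) flags sessions in which a repeatedly administered symptom measure drifts above a client's own baseline—the kind of early warning about possible adverse effects that a group mean from an RCT cannot provide for an individual:

# Minimal sketch of single-case (N of 1) outcome monitoring.
# All names, scores, and thresholds are hypothetical illustrations.

def flag_deterioration(scores, baseline_n=3, worsen_margin=2):
    """Return indices of sessions where a symptom score (higher = worse)
    exceeds the mean of the first baseline_n sessions by worsen_margin."""
    baseline = sum(scores[:baseline_n]) / baseline_n
    return [i for i, s in enumerate(scores[baseline_n:], start=baseline_n)
            if s > baseline + worsen_margin]

# Weekly self-reported distress ratings (0-10) for one hypothetical client.
weekly_scores = [6, 6, 7, 5, 8, 9, 9]
print(flag_deterioration(weekly_scores))  # -> [5, 6]: a possible adverse trend

Session-by-session data of this kind, client by client, are what the call for single-case data accompanying RCTs would make routinely available.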
Example 2
Consider also ‘‘Parent Training Interventions for Attention
Deficient Hyperactivity Disorder (ADHD) in Children Age 5
to 18 years’’ (Zwi, Jones, Thorgaard, York, & Dennis, 2011).
The extent to which the fox (the biomedical industrial complex) has its nose in the Cochrane and Campbell Collaborations' tents is illustrated in these reviews. Discourse
concerning alleged causes of behaviors, feelings, and thoughts
addressed shows the influence of the biomedical industrial
complex in a number of ways, one being the use of the DSM
classification system. Another is by use of medical terms such
as ‘‘comorbid,’’ straight out of medicine. Why not use the less
loaded term ‘‘co-occurring?’’ Pathology is focused on, ignoring
potential positive consequences of traumatic events (e.g.,
increased empathy for self and others). We find statements about associations and unique contributing factors such as self-blame, with few facts and figures provided. Factors said to make trauma from sexual abuse unique, such as self-blame and individual involvement, are also present in other adverse experiences, such as child abuse, in which there is likewise a specific individual involved and children often blame themselves. There is an extensive experimental literature concerning the influence of high-intensity aversive events on people and animals; I do not see this drawn on. And where are
follow-up data? If a key purpose of the Campbell and Cochrane
Collaborations is to decrease avoidable ignorance, reviewers
must point out such ignorance.
Awareness on the part of reviewers of controversies in an
area is important in estimating possible biases in problem framing in published research reports, especially if a problem area is
heavily politicized, such as child abuse or AIDS. As Kuhn
(1970) argued, most researchers engage in ‘‘normal’’ science;
their investigations are within commonly accepted narratives
regarding subjects of interest. If Cochrane and Campbell
reviewers do not critically comment on the problem framing
used in research reports, they may become cofacilitators of
flawed views. Intervention research reflects a certain view of
behavior that influences how a study is planned (e.g., selection
of subjects). The majority of intervention studies on human
behavior are based around the concept of ‘‘mental illness’’ and
use of DSM categories. If this framing is uncritically accepted,
each related review serves to advance such a view unless there is an informed critical discussion, including description of well-argued alternative views and related evidence. I do not see such
a discussion in the two reviews taken as examples here. Over
time we become accustomed to popular problem framings and
accept these uncritically. Failure to critically appraise the
cogency of problem framing in reviews concerning the effectiveness of different interventions contributes to acceptance
of the status quo—perhaps deeply flawed. Status quo bias is
robust (e.g., Campanario, 2009). Such a bias is especially likely
when grand narratives such as the medicalization of human
behavior are at issue; reviewers confront a constant stream of related framing in their daily lives in newspapers, journal articles, and the Internet. Only recently has there been a groundswell of criticism of this narrative from many different sources, including "insiders" (e.g., Francis, 2012; Gambrill, 2014; Gøtzsche, 2013; Kirk et al., 2013).
Questions Needing More Attention
C1 and C2 reviews focus on effectiveness questions. Close attention is paid to the quality of design of intervention studies and
appropriateness of statistical analysis. The purpose of such
reviews is to accurately describe the evidentiary status of
research related to specific practice/policy questions such as,
Does the intervention work (not work)? Other questions
include the following:
- Does it work better for some people than for others? If so, how much better? Does it harm some people? If it works better for some subgroups than others (e.g., of clients/clinicians/combinations), why does this occur?
- How does the intervention work? Examples of factors that may influence outcome include common factors (e.g., empathy, the alliance), initial severity of problem, and countervailing factors such as social support (e.g., Norcross, 2011; Wampold & Budge, 2012). If a number of different kinds of intervention "work," what are the theoretical implications?
- How valid are selection criteria used?
- How valid are outcome measures used? Do they measure the domain under consideration? What evidence is presented? Are positive as well as negative outcomes assessed?
- Are follow-up data gathered?
There are many other kinds of questions: (1) about problem
framing, (2) about prevalence, (3) about the validity of assessment/diagnostic measures, (4) about risks, (5) about harms, and
(6) about costs and benefits. The biomedical industrial complex
devotes great attention to these other questions, especially
problem framing, as illustrated by rampant disease mongering
(Payer, 1992). Neglect of these other kinds of questions in C1
and C2 reviews contributes to avoidable ignorance. A critical
review requires attention to all sections of research reports
including problem framing, reliability and validity of assessment/outcome measures, and accuracy of conclusions.
About Problem Framing
It is time to devote more attention to these other vital questions that affect clients' lives, such as "What is the evidentiary status of a problem framing—is x even a problem and, if so, to whom, and who says so, and on what basis?"—both within reviews of effectiveness and in separate reviews addressing such questions. What kind of a problem is it? Do the causes of depression and hopelessness lie inside the brain? We find
statements such as ‘‘the treatment of diabetes can be a useful
metaphor for understanding the treatment of GAD’’ (generalized anxiety disorder; Marker & Aylward, 2012, p. 33). Is this
true? What is a problem for others may not be a problem for an
individual. What is ‘‘normal’’ is difficult to discover (Creadick,
2010). Problem creation and framing is a social, political, and
economic issue with billions of dollars at stake, as shown by the
promotional activities of the biomedical industrial complex.
Lack of attention to this critical concern maintains and may
even increase avoidable ignorance about domains of human
experience. For example, problem framing of anxiety in social situations in related RCTs reflects disease mongering (claims
of undertreatment, underdiagnosis, and a worsening without
treatment) as well as censorship (hiding well-argued alternative
views) including structural causes of anxiety (Gambrill &
Reiman, 2011). Consider the recent book The Body Economic:
Why Austerity Kills by Stuckler and Basu (2013) showing the
rise in suicide with unemployment. An exploratory study
(Gambrill & Reiman, 2011) revealed that a sample of peer
reviewers could not detect propaganda in problem framing in
a sample of articles published in peer-reviewed journals. Miscalculating skill in detecting propaganda is a source of avoidable ignorance. Problems differ in their potential for being
solved. Potential for solution is influenced by the accuracy of
problem description.
Constant repetition of a message is a key propaganda
method taken full advantage of by those who profit from a particular framing such as pharmaceutical companies and other
players in the biomedical industrial complex. Unless reviewers
go out of their way to read alternative well-argued conceptualizations (e.g., that anxiety in social situations is acquired via
unique learning histories), they are unlikely to raise questions
about the rationale for selection of certain phenomena as a
problem, how it is viewed, how it is measured, and what interventions are selected; out of sight, out of mind. We tried for over a year to get high reliability among reviewers of literature reviews in research projects completed by graduating master of social work students (n = 70) and failed (Gambrill, Shaw, & Reiman, 2006). Unless a reviewer knew the area, he or she did
not know what was missing. In the review by Regehr et al.
(2013), we find no discussion of Summerfield’s (2001) critique
of PTSD in terms of who is labeled. We find no discussion of
possible positive outcomes of difficult experiences including
traumatic ones; there is a total focus on the negative.
We need greater attention to the evidentiary status of how
problems are framed in each review regarding effectiveness
as well as more reviews explicitly devoted to critical appraisal
of problem framing. Is there a throw-away description, such as claiming that a problem is biopsychosocial (Tesh, 1988)? That is, I tip my hat to complex causation and then attend to just one piece of the puzzle. In his checklist for evaluating research syntheses, concerning defining the problem, Cooper (2007) includes questions as to whether the problem is placed in a meaningful theoretical, historical, and conceptual context, and whether it is framed as a psychological, biological, and/or social problem. Every intervention is a hypothesis. Every description of an intervention in a review is an
opportunity for reviewers to identify well-argued alternatives
to those relied on and/or to note that there seems to be a ‘‘chaoticness’’ in theories presented, or the reverse—that one hypothesis (e.g., exposure) seems common to all but is given different
names in different studies. Reviewers have an opportunity in
this section of reviews to decrease avoidable ignorance concerning theoretical concerns including rival views and their
evidentiary status.
An example of avoidable ignorance about problems is
claiming that attention deficit hyperactivity disorder (ADHD)
is a ‘‘neurodevelopmental disorder,’’ downplaying the role of
learning experiences in a child’s life (Staats, 2012; see earlier
discussion of Zwi et al., 2011). The word ‘‘developmental’’
implies a biological unfolding, obscuring the key role of learning opportunities. How a problem is framed influences selection of intervention methods. Medication is often prescribed
for children labeled as ADHD, especially children in foster
care (Olfman & Robbins, 2012). Creation of bogus risks and
alleged mental disorders fuel the use of medication. Are we
not missing an opportunity to highlight avoidable ignorance
about problem framing, especially as a prod to further investigations and as a brake on medicalizing human distress and
misallocating causes of human suffering to the individual
(e.g., Timimi, 2012)? Without attention to problem framing
and related measures, systematic reviews may contribute to
maintaining avoidable ignorance.
About Measures Used
Each review should rigorously critique the reliability and validity of all key measures used, drawing on the available literature. Questions here include (1) What is the correlation between self-report measures and behavior in real-life settings? (2) What is the correlation between the same measures completed by different individuals (e.g., parents and children)? (3) To what degree do measures reflect empirically based theory regarding a domain/construct? Is convergent and discriminant validity described for each measure discussed? To what degree are measures redundant?
Questions Posed and Avoidable Ignorance
Every Cochrane and Campbell review starts with a question.
Discovery of ignorance often starts with a question or problem.
Why do some children flourish in harsh environments? Why do
some children become callous? Publication and updating of
systematic reviews provide opportunities to decrease avoidable
ignorance. Interested parties include clients, potential clients,
professionals, administrators, policy makers, and researchers
among others such as journalists. To what degree do reviews
accepted and published reflect the questions of most concern
to these different groups and what is the overlap among them?
We should expand our "science of questions"—how to harvest, clarify, and prioritize them, and how to discover barriers to recognizing and bringing them to people's attention (Fenton,
Brice, & Chalmers, 2009). Evidence-informed practice and
policy highlight the importance of transforming information
needs into well-structured questions that guide a search. Harvesting life-affecting questions will allow us to make these
visible. Here are some examples.
- Are residents of nursing homes overmedicated and, if so, what are the consequences?
- Are foster children overmedicated and, if so, what are the consequences?
- What percentage of teachers are well trained in use of basic behavioral principles and related skills to encourage desired behavior and discourage (mis)behaviors?
- What percentage of veterans acquire help to which they are entitled within 2 months of application?
Although efforts have been made to collect questions that arise
in meetings between professionals and their clients, this has
been piecemeal. I know of no efforts to collect all questions
that arise between child welfare staff and their clients in the
United States or even within one county or state. Such data
would allow special attention to the evidentiary status of questions that arise most often (e.g., pursuit of related research in
the absence of this). Where are the lists of questions of most
concern to different groups of clients and practitioners? How
many questions in different areas have we identified? What particular questions arise in different areas (e.g., foster care, adoptions, residential assessment centers, residential treatment centers)? How many questions in each area have been systematically investigated? For those that languish unattended, what are the reasons?
Question priorities will differ among different individuals
and groups. Consider a state decision to mandate the use of
what policy makers view as an evidence-based program such
as ‘‘Boys Town.’’ As research concerning implementation
shows, programs tested and found to be effective in some
situations might not be in other contexts, and successful implementation requires ongoing monitoring of the fidelity of specific program components and ongoing coaching (e.g., Fixsen et al., 2009). It is almost certain that personnel
at different system levels will have different questions in
such a context. Government officials may have little interest in, or not be informed about, implementation challenges. Agency
administrators may be pressured to use the program mandated even if it is not appropriate for certain clients. Line
staff must struggle with the effects of lack of implementation
support and lack of needed flexibility to deal with individual
differences in repertoires and circumstances. Questions will differ across these groups. We encourage avoidable
ignorance by not recognizing and seeking answers to such
questions.
Consequences of Hiding Ignorance
Ignorance, engineered by others or self-induced, may result in a
variety of consequences for researchers, professionals, policy
makers, agency administrators, and clients. Consequences of
avoidable ignorance include lost opportunities to help clients, to avoid harm, and to involve clients as informed participants. Vital research may remain undone. Avoidable errors
may remain undetected. We could plot different consequences for different involved parties over time. Hiding
avoidable ignorance diminishes opportunities to minimize
avoidable suffering. Hiding avoidable suffering is a key kind
of avoidable ignorance. Much mischief in the world continues
because it is hidden. Much (most?) avoidable suffering is out
of sight. Some occurs behind closed doors—doors of institutions, homes, and apartments. Much also occurs in the open, which we can see if we open our eyes, look, and investigate.
For example, we can see the toxic effects of industrial pollutants on neighborhoods and communities that contribute to
health problems and stigma (Richardson, Pearce, Mitchell,
& Shortt, 2013; Satterfield, Slovic, Gregory, Flynn, & Mertz,
2010). But many prefer silence, perhaps not recognizing its
cost to others in avoidable suffering.
Challenges to Recognizing Uncertainties
Professionals must make decisions in the face of uncertainty.
So must other involved parties including researchers, policy
makers, and the public. Repeatedly acting in the face of uncertainty may create the illusion that more certainty exists than is
the case. Professionals may not have acquired skill in informing clients about the evidentiary status of interventions in a
supportive manner and so misrepresent this status. The pressure for academics and professionals to appear as ‘‘experts’’
potentiates this illusion. That is, we are often pressed to give
reasons to others for our decisions. The expression of doubt,
so central to recognition of uncertainty, may not be welcomed
by clients, the public, or government officials who control the
purse strings for money for grants and public programs. After
all, we live in a marketing society characterized by excessive
claims of what is known and what is not. Who would produce an advertisement for pimple removal saying "Our product may be of help"? A variety of cognitive biases, including the
validity effect, hindsight bias, status quo bias, confirmation
biases, and overconfidence contribute to failure to recognize
uncertainty.
Systematic reviews in the Campbell and Cochrane Collaborations are usually measured in their conclusions. Conclusions may state that things are still ‘‘up-in-the-air’’ in terms
of whether x works and for whom and for what period of time
with what possible adverse effects. Seekers of information may
be very excited when they see the title of a Cochrane or Campbell review that pertains to a life-affecting question. However, disappointment may quickly follow when they read the conclusion, which is often: "There is no definitive conclusion that can be made about the effectiveness of x." Unless we help all
involved players including clients to handle uncertainty in a
constructive manner, they are unlikely to do so. They may
become unduly pessimistic or become a ‘‘believer’’ in a remedy they favor.
Interrelated Sources of Avoidable Ignorance
Sources of avoidable ignorance include those unique to an
individual, those that permeate a society, and those unique
to special niches such as certain kinds of schools, agencies,
and companies. Thus, sources of ignorance are closely
related to kinds of ignorance (see earlier discussion). Governments classify reams of information as secret; trade
secrets are carefully protected (Applbaum, 2009; Galison,
2008). Contextual factors include policies that limit how
much money is available to offer services, kinds of feedback required of agencies for funding decisions, and quality
of training for staff and on-site coaching. The quality of, and money devoted to, investigative journalism also play a role, for example, in the discovery of once forgotten and/or hidden knowledge. There are opportunity costs; what will it
cost to discover certain information?
The types and causes of avoidable ignorance change, in part
with changing technology such as the invention of the printing
press and the Internet. As technologies change, for example,
from writing on scrolls to printing books, some knowledge disappears forever (Janko, 1996). Those who make bogus claims
may not care about the truth or falsity of a claim, just whether
the claim is acted on. That is, truth may not be a concern to
those who promote claims of knowledge (or ignorance; e.g.,
Combs & Nimmo, 1993; Frankfurt, 2005). The goal is to create
beliefs and encourage certain actions, such as being funded.
‘‘The genius of most successful propaganda is to know what the
audience wants and how far it will go’’ (Johnson, 2006, p.
A23). Consider the assertion that smoking marijuana is a gateway to heroin use. This (false) claim has been used to rationalize the criminalization of marijuana, resulting in the imprisonment of tens of thousands of (mostly African American) men (Alexander, 2010). The consequences of creating and maintaining avoidable ignorance for different parties vary by the content hidden and its timing.
Propaganda as a Key Source of Avoidable Ignorance
Avoidable ignorance and propaganda are closely related. Propaganda ‘‘seeks to induce action, adherence and participation—with as little thought as possible’’ (Ellul, 1965, p. 180).
Much propaganda is ‘‘a set of methods deliberately employed
by an organized group that wants to bring about the active or
passive participation in its actions of a mass of individuals, psychologically unified through psychological manipulations and
incorporated in an organization’’ (p. 61).
To be effective, propaganda must constantly short-circuit all
thought and decision. It must operate on the individual at the level
of the unconscious. He must not know he is being shaped by outside forces. . . . (Ellul, 1965, p. 27)
There is a selective use of evidence. This can be contrasted with critical thinking, defined as arriving at well-reasoned beliefs and actions based on critical appraisal of related arguments and evidence and an understanding of relevant
contexts. Interrelated kinds of propaganda in the helping professions include deep propaganda, which obscures the political, economic, and social contingencies that influence problems claimed by a profession, such as alcohol abuse, as well as the questionable accuracy of related assumptions about causes, such as labeling hundreds of (mis)behaviors as mental disorders requiring the help of experts. It includes inflated claims
of effectiveness regarding practices and policies that woo
clients to professionals and professionals to professions.
Key methods include creation of doubt, censorship/omission, distortion, diversion, and even fabrication (e.g.,
McGoey, 2012). Realities constructed are partially tilted toward those that forward beliefs and actions favored by the claims maker. Other realities are shoved aside, ignored, or actively censored, such as adverse effects of medications
promoted by pharmaceutical companies. Successful alternatives are censored (Greenwald, 2009). Propaganda hides
influences on our decisions and information of value in
making decisions. It uses false figures and misleading
claims. It hinders our autonomy to make our own decisions
based on accurate information. Propaganda in the media and in the professional literature interacts with groupthink in organizations, as well as with self-propaganda such as wishful thinking and confirmation biases (searching only for material that supports our views), to compromise decisions.
We grow up in a certain culture that encourages certain
values and beliefs. Ellul (1965) views the most concerning
kind of propaganda as integrative propaganda. In integrative
propaganda, we become ‘‘adjusted’’ to accepted patterns of
behavior. Indeed, people who do not adopt popular beliefs
or follow accepted patterns of behavior are often labeled
deviant, mentally ill, or criminal. This happens in all venues, including science, as illustrated by reactions to major discoveries such as those by Pasteur and Semmelweis.
Ellul suggests that much of this kind of propaganda occurs
under the guise of education. Indeed, he views education as
central to the effectiveness of propaganda; it is a precondition. Consider, for example, infiltration of advertising into
our schools. He refers to this as prepropaganda—it can
deluge our minds with vast amounts of incoherent information, already dispensed for ulterior purposes and posing as
‘‘facts’’ and as ‘‘education,’’ thus creating automatic reactions to particular words and symbols. Ellul considers intellectuals as the most vulnerable to propaganda because (1)
they are exposed to the largest amount of secondhand
unverifiable information, (2) they feel a need to have an
opinion on important questions, and (3) they view themselves as capable of ‘‘judging for themselves.’’ The environments in which we work may discourage asking
questions. Indeed, the questioner may be fired, as the need for whistle-blower protection laws illustrates. Our educational experiences, including our professional degree program, shape
us in ways compatible with mainstream thinking within
that profession. Graduate programs may not educate students about what science is and how it differs from
pseudoscience.
The Media, Advertising, and Public Relations
We live in a sea of advertisements, including direct-to-consumer drug advertising (e.g., Gagnon & Lexchin, 2008). A review of marketing brochures distributed by drug companies to physicians in Germany revealed that 94% of their content had no basis in scientific evidence (reported in Tufts, 2004; see also Loke, Koh, & Ward, 2002).
Clients’ hopes and wishes for happiness and redress from difficult circumstances, such as chronic pain, or ‘‘out-of-control’’
teenagers, and professionals’ interests in helping, combine in a
potent, often mystical brew that encourages uncritical belief in
claims as to what will help and what will not, often resulting in
use of ineffective or harmful methods. Billions are spent on
public relations firms to promote favored views. This sea of
advertising directs us toward individual solutions to life’s challenges. Feeling lonely? Use my self-help book. Want to
enhance your job prospects? Use the right soap (see Gambrill,
2012a). Newspaper reports echo popular narratives, rarely correcting overhyped claims in earlier reports (Gonon, Konsman,
Cohen, & Boraud, 2012; see also Schwitzer, 2008).
Governmental/Agency Reports
The environments in which we work may discourage questions; indeed, the questioner may be drummed out of the organization. Altheide and Johnson (1980) argue that the major source of propaganda consists of agency reports prepared to attain or maintain funding; the goal is funding, not candid description of what was done and to what effect. ‘‘Management speak’’ obscures what was
actually done and what resulted (e.g., Ali, Sheringham, &
Sheringham, 2008; Charlton, 2010; Stivers, 2001).
The Biomedical Industrial Complex
Drug companies promote common concerns such as anxiety
in social situations and premenstrual dysphoria as ‘‘mental illnesses’’ to increase profits from sales of drugs. Medicalization and disease mongering are rampant (e.g., Conrad, 2007;
Payer, 1992). Perhaps the single most significant source of avoidable ignorance today, promoted by the APA and fellow travelers, is the assumption that (mis)behaviors are mental illnesses (the equation of (mis)behavior with brain disease). The medicalization of (mis)behaviors and other kinds of disease mongering have been a spectacular success. This problem
framing hides environmental contributors such as different
learning opportunities (e.g., Szasz, 2007b) and moral and
ethical issues. Most continuing education programs in medicine are funded by pharmaceutical companies (see Brody,
2007, for background).
The Professional Literature
Professionals as well as clients are often bamboozled by false
claims in professional journals and textbooks as well as in the
media about what is helpful and what is not. Such claims hide
certain aspects of reality and/or assert realities that do not exist.
Inflated claims are a key form of propaganda. Hiding results,
including minimal and negative results, is common (e.g.,
Turner, Matthews, Linardatos, Tell, & Rosenthal, 2008). Many
efforts described as ‘‘scientific research’’ or ‘‘scholarly
inquiry’’ do not reflect the values, aims, or methods of science
such as open-mindedness and self-correction. Inflated claims
of knowledge are more the norm than the exception. Ioannidis
(2005) argues that most published research findings are false.
Recent reports describe lack of replicability of many studies
(Lewandowsky et al., 2012). Young, Ioannidis, and Al-Ubaydli (2008) suggest that scientific information is an economic commodity, resulting in a distorted view of reality and a misallocation of resources. There are tens of thousands of articles but only a handful of high-status journals, resulting in exaggerated, unrepresentative results. They suggest that there
is a ‘‘winner-take-all’’ reward structure that decreases diversity
of exploration (see also discussion of the ‘‘Matthew Effect,’’
Rigney, 2010).
In sight are thousands of publications containing hundreds of inflated claims of ‘‘what we know’’ while minimizing the far more extensive domain of ‘‘what we do not know’’ and, perhaps, ‘‘what we can never know.’’ The metaphor of an iceberg is apt: replicable studies are the small visible tip, while the huge invisible bulk lurking beneath the surface consists of unreplicable studies.
Inflated claims of knowledge (what we know) and ignorance
(what we don’t know) include claims about causes, accuracy
of diagnostic tests, effectiveness of proposed remedies,
problems and risks, and prevalence of concerns. We often
find advocacy in place of evidence and marketing in place of scholarship. Marketing values and strategies, long prevalent in selling nostrums for our maladies, have increasingly entered the realm of professional education as well as the published literature. The hundreds of journals
with thousands of articles, books, and monographs that
appear each year give an illusion of the growth of knowledge. In her essay review ‘‘Drug companies & doctors:
A story of corruption,’’ Angell (2009) concludes:
It is simply no longer possible to believe much of the clinical
research that is published, or to rely on the judgment of trusted
physicians or authoritative medical guidelines. I take no pleasure in this conclusion which I reached slowly and reluctantly
over my two decades as an editor of The New England Journal
of Medicine. (p. 15)
Professional publications share many of the goals and
related strategies of advertisements. Shared goals include
persuading readers of the accuracy of claims made, for
example, about the causes of certain concerns such as anxiety and depression. Persuasion methods include those
designed to mislead as well as those which invite and allow
readers to arrive at informed conclusions. That is, persuasion efforts differ in terms of whether an informed decision
about the accuracy of content is valued and forwarded or
unwanted and undermined. Shared strategies include
repetition, cherry-picking, begging the question, and appeal
to authority—among others (Gambrill, 2012a). Strategies
used to give an illusion of successful outcomes include
focusing on surrogates (reducing plaque in the arteries
rather than mortality), data dredging (searching for significant findings unguided by specific hypotheses), describing
only outcomes found to be positive while not reporting negative ones, and folding outcome measures not found to be significant into a composite score and arguing that the composite
is effective. Such ploys are common in the professional literature (e.g., Gorman & Huber, 2009). Lack of education
about what science is and what it is not, lapses in scholarship on the part of academics and researchers as well as the
daily barrage of propaganda in the media—often in the
guise of ‘‘scientific research’’—blur the distinction between
science and pseudoscience, many times with serious consequences. Lack of historical knowledge fuels the process, for example, ignorance concerning the use of coercion throughout the history of psychiatry (Szasz, 2007b).
Words and phrases viewed as good by most people such as
‘‘evidence-based practice and/or policy’’ and ‘‘systematic
reviews’’ assume a slogan-like symbolic quality as can be seen
in the many reviews dubbed ‘‘systematic’’ that do not have the
characteristics of systematic reviews (e.g., Littell, 2008). And,
as Derry, Derry, McQuay, and Moore (2006) note, ‘‘Even a review with a Cochrane label does not make it true.’’ They reported that 4 of the 12 Cochrane reviews on acupuncture
were wrong (p. 3; see also Brassey, 2013; Ford, Guyatt, Talley,
& Moayyedi, 2010).
The Self
We ourselves are a key source of avoidable ignorance,
knowingly and not. Tolerance for uncertainty varies; too little contributes to excessive claims of knowledge and perhaps of ignorance as well. Should we make greater use of the ‘‘intolerance of uncertainty scale’’ (Buhr & Dugas, 2002)?
We are subject to a variety of cognitive biases such as the
validity effect (if we have heard of a concept, we think we know more about it than we do; Renner, 2004). Other cognitive biases include confirmation biases, hindsight bias,
status quo bias, and overconfidence. We save time by not
questioning and tracking down data related to a claim such
as ‘‘This measure has high reliability and validity.’’
Misleading claims are often effective in influencing our
behavior and beliefs as well as those of clients because
we do not critically appraise them. Rather, we are patsies
for others’ bogus claims because of our own inaction—our
failure to ask questions—our failure to be skeptical. How
many of us are guilty of making excessive claims of knowledge? Are we punished if we make measured claims? A
focus on avoidable ignorance may help to counter confirmation biases—our tendency to search only for data that support our assumptions and to ignore counterevidence. It
should counter the dysfunctional expectation to ‘‘know’’
everything.
Should We Create a Metric Capturing the
Discourse of Avoidable Ignorance?
Hidden by excessive claims of ‘‘what we know’’ is the vast
domain of ‘‘what we don’t know’’—ignorance—both avoidable and not. Norcross and Lambert (2011) estimate that 40%
of the total psychotherapy outcome variance is unexplained.
Can we gauge the extent of our ignorance, collectively and individually, and the related consequences? Can we identify contributors to avoidable ignorance and assess their extent? Many
measures, such as effect sizes, reveal ignorance and uncertainty
as well as suggest knowledge. The greater the excessive claims
of knowledge, the greater the avoidable ignorance. Such excessive claims hinder innovation and discovery of unanswered
(and unanswerable) questions. A metric of avoidable ignorance could include the following indices (e.g., Gambrill & Reiman, 2011); a rough computational sketch follows the list:
1. The excessive claim index: We could count the number of claims unaccompanied by relevant documentation in an article or other source such as a chapter or review (see Loke et al., 2002). These include those with no references cited (easy to detect) as well as those accompanied by one or more references that provide no or limited support for the claim (Greenberg, 2009).
2. The vagueness index: This index reflects vagueness that contributes to avoidable ignorance. Here are some examples:
a. Vague terms used for claimed associations (e.g.,
‘‘highly associated,’’ ‘‘small association’’). What
are the correlations?
b. Vague claims regarding the reliability and validity
of measures. Examples:
‘‘This scale . . . has good reported reliability and
validity’’ (Regehr et al., 2013, p. 33). What are the values?
What kinds were explored?
‘‘These two versions of the IES have been used in
hundreds of studies addressing trauma and thus
provide excellent opportunities for comparison
across populations’’ (Regehr et al., 2013, p. 34).
Use does not equate with rigor.
‘‘Results of this systematic review provide tentative evidence that . . . ’’ (Regehr et al., 2013, pp. 8,
57). How should ‘‘tentative’’ be interpreted?
c. Vague questions may be posed so it is impossible to
determine what is relevant and what is not.
3. The censorship index: Examples include failure to accurately describe well-argued alternative views to those promoted and research findings that undermine the view promoted. What percentage of relevant material is described, ranging from 0 to 100?
4. The distortion index: This index reflects avoidable distortions of positions/evidence, such as misrepresentation of disliked views (e.g., Thyer, 2005).
5. The waste index: This index reflects the waste of
resources in limiting opportunities to contribute to
knowledge and discover ignorance. There are only so
many resources available to chip away at the mountain
of ignorance. The size of this mountain is unknown as
reflected, for example, by the failure to replicate published studies (Lewandowsky et al., 2012). A wise use
of scarce resources keeps waste of effort and money to
a minimum. A new center has been established at Stanford (METRICS) to decrease waste and increase value
in research. Opportunity costs should be considered in
estimating waste. What do we give up by choosing to
investigate a certain question? When is replication
needed? How many replications do we need of a study?
For example, how many more replications do we need
showing that depression and anxiety are related to environmental stress? When should we end marker variable
research and move on to experimental research drawing
on such marker variables?
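To make such a metric concrete, the following minimal sketch (in Python) shows how the first two indices might be computed once a manuscript's claims have been hand-coded. It is an illustration only: the Claim fields and scoring rules are assumptions made for the example, not the coding scheme of the Gambrill and Reiman (2011) propaganda index.

from dataclasses import dataclass

@dataclass
class Claim:
    # One hand-coded claim from a manuscript (fields are illustrative assumptions).
    text: str
    references_cited: int       # references attached to the claim
    references_supportive: int  # how many of those actually support it
    vague_quantifier: bool      # e.g., "highly associated" with no correlation given

def excessive_claim_index(claims):
    # Proportion of claims lacking relevant documentation: no references
    # at all, or references that provide no support (see Greenberg, 2009).
    if not claims:
        return 0.0
    undocumented = sum(
        1 for c in claims
        if c.references_cited == 0 or c.references_supportive == 0
    )
    return undocumented / len(claims)

def vagueness_index(claims):
    # Proportion of claims using vague quantifiers in place of data.
    if not claims:
        return 0.0
    return sum(c.vague_quantifier for c in claims) / len(claims)

# Example: three coded claims from a hypothetical manuscript.
claims = [
    Claim("This scale has good reliability and validity.", 1, 0, True),
    Claim("Treatment X reduced symptoms (r = .42).", 3, 3, False),
    Claim("Outcomes were highly associated with dosage.", 0, 0, True),
]
print(f"Excessive claim index: {excessive_claim_index(claims):.2f}")  # 0.67
print(f"Vagueness index: {vagueness_index(claims):.2f}")              # 0.67

On these toy data, two of the three claims lack supporting documentation and two substitute vague quantifiers for correlations, so both indices are .67. Any real application would require a validated coding manual and agreement among independent raters.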
Avoidable Ignorance and Deception
Is misrepresenting the conclusions that can be drawn from a
given method a form of deception? Is failure to acknowledge
key related literature in a published report, such as well-argued alternative views, counterevidence to preferred views, and
negative evidence regarding preferred views, a kind of deception?
Is not the reader deceived regarding the evidentiary status of
claims made and asked to accept conclusions shaped by avoidable
ignorance? Ignorance of the law is no excuse for breaking it. But
this does not seem to apply to the professional literature and content presented in professional education programs (e.g., Lacasse
& Gomory, 2003). Can we escape related moral and ethical concerns by claiming ‘‘Buyer Beware?’’ Can we escape related
harms by helping consumers to exercise informed caution?
Although an increasing number of resources such as Retraction
Watch, pharmedout.org, and criticalThinkRX.org are available
to combat misinformation, deceptive publications continue to
appear because of pressures to market oneself, one's organization, and one's products, and because of related conflicts of interest (e.g.,
between Big Pharma and academic researchers). If this is so, a key
need is education of all involved parties, including clients, regarding avoidable ignorance. (‘‘No decision about me without me.’’)
Would journals agree to require author statements attached to a
publication noting inflated claims and other ploys? Such a statement might read as follows:
‘‘WARNING TO THE READER’’: Although I used a pre-post
design, I make claims of causality that are bogus. I failed to report
that I deleted some data that were not favorable to my hypothesis
but this is a common practice. I did not report the actual correlations for reliability and validity of my measures because these are
so low. (Another reason was to save time.) I did not note well-designed studies of my question because this would make my study
look less important. I thank the publisher for the opportunity to
publish my study and look forward to submitting additional
reports. It feels really good to be honest about the limitations of
my study. I would estimate that the money spent on this research
was $30,000 and the time spent by subjects about 100 hours. I thank
them for their time. Yours sincerely, Brian Bogus.
The Socratic Doctrine of Ignorance
Using a lens of ignorance rather than knowledge is thousands of years old, as illustrated by Socrates (this description is from www.oregonstate.edu/instruc/ph1201). The Oracle in ancient Greece, when asked ‘‘Who is the wisest of men?’’ replied, ‘‘No one is wiser than Socrates.’’ When Socrates heard this, he asked, ‘‘What can the god mean? For I know that I have no wisdom, small or great. And yet he is a god and cannot lie.’’
After reflection, Socrates decided that if he could find a man wiser than himself, that would refute the oracle's claim. He then went about examining men thought to be wise. This revealed to Socrates that such
men were not really wise and he concluded that ‘‘I am better off
than he is—for he knows nothing, and thinks that he knows.
I neither know nor think that I know. In this latter particular,
then, I seem to have slightly the advantage of him’’ (www.oregonstate.edu/instruc/ph1201). He went to many men and concluded the same. He thus found that ‘‘the experts are just as
ignorant about what things really are.’’ And as he noted, men
hated him for this; eventually he was placed on trial, jailed, and executed by drinking hemlock (Plato, 1954/1993).
Socrates concludes that it is better to have honest ignorance than
self-deceptive ignorance. Socrates may not know the ultimate
answers to the questions he raises, but he knows himself. It is this
self-knowledge and integrity that constitutes the wisdom of
Socrates. The open invitation is for all of us to ask ourselves how
much we truly know of what we claim (www.oregonstate.edu/
instruc/ph1201).
Thus, Socrates starts from a position of ignorance. He advocates intellectual modesty and humility. ‘‘Know thyself!’’
means for him, ‘‘Be aware of how little you know’’ (Popper,
1992, p. 32). Popper (1972, 1994) embraced pursuit of ignorance (falsification) rather than justification. ‘‘Every solution
of a problem creates new unsolved problems’’ (Popper, 1992,
p. 50). ‘‘It might do us good to remember from time to time
that, while differing widely in the various little bits we know,
in our infinite ignorance we are all equal’’ (p. 50). In this day
of overlap between advertising and scholarship (Gambrill,
2012a), such an embrace is ever more needed.
Conclusion
Reducing avoidable ignorance that results in harm to clients is
the key reason for the creation of the Campbell and Cochrane
Collaborations. Both Collaborations offer the promise, and the continuing development, of methods designed to shed light into often dark corners for the benefit of clients/consumers, professionals, policy makers, and researchers. A
focus on avoidable ignorance directs attention to gaps between the knowledge available and the knowledge used to help clients enhance the quality of their lives and participate as informed (rather than uninformed or misinformed) parties in helping exchanges, as well as to the factors that contribute to these gaps. The stringent, transparent guidelines of C1 and C2 for reviewing
methodology contribute to reducing avoidable ignorance by
decreasing the likelihood of inflated claims of knowledge resulting from flawed methodology and analyses that mislead
involved parties. Examples include posing of clear questions,
critical appraisal of methods, gathering reliability data regarding
coding of each study, and a thorough search, both in published
and unpublished literature, for studies related to a question.
Revealing the evidentiary status of specific interventions, including assessment methods, allows clinicians to accurately inform
clients and significant others about the evidentiary status of recommended methods. Avoidable ignorance of policy makers and
funders is reduced, allowing more informed decisions.
Additional steps could be taken to decrease avoidable ignorance. Questions of concern in the helping professions include
questions about problems (what is viewed as a problem and
by whom? what is the prevalence/incidence? who does it
affect?), related causes, and the accuracy of related assessment
measures. Answers to these questions affect selection of interventions used in RCTs and research reports included in systematic reviews. Ignoring the evidentiary status of problem
framing in research reports included in C1 and C2 effectiveness
reviews perpetuates flawed views rather than enlightening
readers about limitations of popular views and contributes to
avoidable suffering. Every intervention is a hypothesis. A medicalized problem framing reflects popular views shaped by the
biomedical industrial complex (Gambrill, 2014). We could collect single-case data for each client in RCTs. We should note
contradictions and other problems in related theory regarding
the concerns addressed and avoid medicalized language that fosters a psychiatric view of life's travails and minimizes environmental circumstances (e.g., ‘‘comorbid’’). We should be
more measured in our conclusions. We can improve the science
of questions by harvesting questions of vital concern to different parties, including clients and practitioners. We can ask:
Of all the avoidable ignorance that lessens quality of life for clients, what percentage has the Campbell Collaboration reduced, and with what effect?
What kinds of avoidable ignorance most compromise
quality of life for clients?
What are the most important questions to clients? How
can we harvest and advocate for attention to them? What
questions have been ignored?
We should gather all questions that arise in offering services to
clients and focus special attention on those that arise most often
in terms of their evidentiary status. A bouquet of opportunities awaits C1 and C2 reviews to continue to reduce avoidable ignorance.
What gets us into trouble is not what we don’t know. It’s what we
know for sure that ain’t so. (Mark Twain)
Author’s Note
This article was an invited lecture at the Annual Conference, Campbell
Collaboration, Chicago, May 2013.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research and/or authorship of this article: The author thanks the Hutto Patterson Charitable Foundation.
References
Abbott, A. (2010). Varieties of ignorance. The American Sociologist,
41, 174–189.
Adler, N. E., & Stewart, J. (2010). Preface to the biology of disadvantage: Socioeconomic status and health. Annals of the New York
Academy of Sciences, 1186, 1–4.
Alexander, M. (2010). The new Jim Crow: Mass incarceration in the age of colorblindness. New York, NY: New Press.
Ali, A., Sheringham, J., & Sheringham, K. (2008). ‘‘Managing
the curve downwards’’: A cross-sectional study of the use of
management speak in public health. Public Health, 122,
1443–1446.
Altheide, D. L., & Johnson, J. M. (1980). Bureaucratic propaganda.
Boston, MA: Allyn & Bacon.
American Academy of Social Work and Social Welfare. (2013). Introduction and context for grand challenges in social work. Working
Paper No. 1. Baltimore, MD: Author.
American Psychiatric Association. (1994). Diagnostic and statistical
manual of mental disorders. Washington, DC: Author.
American Psychiatric Association. (2013). Diagnostic and statistical
manual of mental disorders (5th ed.). Washington, DC: Author.
Angell, M. (2009, August 10). Drug companies & doctors: A story of
corruption. New York Review of Books, 1/15.
Applbaum, K. (2009). Getting to yes: Corporate power and the creation of a psycho pharmaceutical blockbuster. Culture, Medicine
and Psychiatry, 33, 185–215.
Barber, B. (1961). Resistance by scientists to scientific discovery.
Science, 134, 596–602.
Bauer, H. (2001). Fatal attractions: The troubles with science. New
York, NY: Paraview.
Berry, W. (2005). The way of ignorance and other essays. Emeryville,
CA: Shoemaker & Hoard.
Boruch, R. F. (2005). What is the Campbell Collaboration and how is
it helping to identify ‘‘what works’’? Evaluation Exchange, 9.
Retrieved February 9, 2014, from http://www.hfrp.org/evaluation
Brassey, J. (2013). A critique of the Cochrane collaboration.
Retrieved March 1, 2014, from http://blog.tripdatabase.com
Brewin, C. R. (2006). Understanding cognitive behaviour therapy: A
retrieval competition account. Behaviour Research and Therapy,
44, 765–784.
Brody, H. (2007). Hooked: Ethics, the medical profession and the
pharmaceutical industry. New York, NY: Rowman & Littlefield.
Brown, G. W., & Harris, T. (1978). Social origins of depression:
A study of psychiatric disorders in women. New York, NY: Free
Press.
Budd, R., & Hughes, I. (2009). The Dodo Bird verdict—Controversial, inevitable and important: A commentary on 30 years of
meta-analyses. Clinical Psychology and Psychotherapy, 16,
510–522.
Buhr, K., & Dugas, M. J. (2002). The intolerance of uncertainty scale:
Psychometric properties of the English version. Behaviour
Research and Therapy, 40, 931–945.
Campanario, J. M. (2009). Rejecting and resisting Nobel class discoveries: Accounts by Nobel Laureates. Scientometrics, 81,
549–565.
Chalmers, I. (2003). Trying to do more good than harm in policy and
practice: The role of rigorous transparent up-to-date evaluations.
Annals of the American Academy of Political and Social Science,
589, 22–40.
Charlton, B. R. (2010). The cancer of bureaucracy. Medical Hypotheses, 73, 273–277.
Combs, J. E., & Nimmo, D. D. (1993). The new propaganda: The dictatorship of palaver in contemporary politics. New York, NY:
Longman.
Conrad, P. (2007). The medicalization of society: On the transformation of human conditions into treatable disorders. Baltimore, MD:
Johns Hopkins University Press.
Cooper, H. (2007). Evaluating and interpreting research synthesis in
adult learning and literacy. Boston, MA: National College Transition Network, New England Literacy Resource Center/World
Education.
Creadick, A. G. (2010). Perfectly average: The pursuit of normality in
post war America. Amherst: University of Massachusetts Press.
Derry, C. J., Derry, S., McQuay, H. J., & Moore, R. A. (2006). Systematic review of systematic reviews of acupuncture published
1996-2005. Clinical Medicine, 6, 381–386.
DiGangi, J. A., Gomez, D., Mendoza, L., Jason, L. A., Keys, C. B., &
Koenen, K. C. (2013). Pretrauma risk factors for posttraumatic
stress disorder: A systematic review of the literature. Clinical Psychology Review, 33, 728–744.
Diller, I. H. (2006). The last normal child: Essays on the intersection
of kids, culture and psychiatric drugs. Westport, CT: Praeger.
Djulbegovic, B., & Paul, A. (2011). From efficacy to effectiveness
in the face of uncertainty: Indication creep and prevention
creep, Journal of the American Medical Association, 305,
298–299.
Djulbegovic, B., & Hozo, I. (2007). When should potentially false
research findings be considered acceptable? PLoS Medicine, 4,
e26.
Douglas, M. (1973). Natural symbols. London, England: Barrie & Jenkins.
Drury, S. S., Theall, K., Gleason, M. M., Smyke, A. T., DeVivo, I.,
Wong, J. Y., . . . Nelson, C. A. (2012). Telomere length and early
severe social deprivation: Linking early adversity and cellular
aging. Molecular Psychiatry, 17, 719–727.
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest, 5, 69–106.
Ehlers, A., Bisson, J., Clark, D. M., Creamer, M., Pilling, S., Richards,
D., . . . Yule, W. (2010). Do all psychological treatments really
work the same in posttraumatic stress disorder? Clinical Psychology Review, 30, 269–276.
Ellul, J. (1965). Propaganda: The formation of men’s attitudes. New
York, NY: Vintage.
Farmer, R. F., & Nelson-Gray, R. O. (2005). Personality-guided
behavior therapy. Washington, DC: American Psychological
Association.
Fenton, M., Brice, A., & Chalmers, I. (2009). Harvesting and publishing patients’ unanswered questions about the effects of treatment.
In P. Littlejohns & M. Rawlins (Eds.), Patients, the public and
priorities in healthcare (pp. 165–180). Abingdon, England:
Radcliffe.
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core
implementation components. Research on Social Work Practice,
19, 531–540.
Ford, A. C., Guyatt, G. Y., Talley, N. J., & Moayyedi, P. (2010).
Errors in the conduct of systematic reviews of pharmacological
interventions for irritable bowel syndrome. American Journal of
Gastroenterology, 105, 280–288.
Ford, T., Goodman, R., & Meltzer, H. (2003). The British child
and adolescent mental health survey, 1999; the prevalence of
DSM-IV disorders. Journal of the American Academy of Child
and Adolescent Psychiatry, 42, 1203–1211.
Francis, A. (2012, August 8). A clinical reality check. Response essay.
Mental Health and the Law.
Frankfurt, H. G. (2005). On bullshit. Princeton, NJ: Princeton
University Press.
Gagnon, M. A., & Lexchin, J. (2008). The cost of pushing pills: A new
estimate of pharmaceutical promotion expenditures in the United
States. PLoS Medicine, 5, e1.
Galison, P. (2008). Removing knowledge: The logic of modern censorship. In R. N. Proctor & L. Schiebinger (Eds.), Agnotology: The
making and unmaking of ignorance (pp. 37–54). Stanford, CA:
Stanford University Press.
Gambrill, E. (2006). Evidence-based practice: Choices ahead.
Research on Social Work Practice, 16, 338–357.
Gambrill, E. (2012a). Propaganda in the helping professions.
New York, NY: Oxford University Press.
Gambrill, E. (2014). The Diagnostic and Statistical Manual of Mental
Disorders as a major form of dehumanization in the modern world.
Research on Social Work Practice, 24, 13–36.
Gambrill, E., & Reiman, A. (2011, May 25). A propaganda index for
reviewing manuscripts: An exploratory study. PLoS One, 6,
e19516.
Gambrill, E., Shaw, T., & Reiman, A. (2006). A critical appraisal of
master’s students research projects in social work. Presented at
Annual Program meeting of the Council on Social Work Education
(Unpublished manuscript). University of California, Berkeley, CA.
Gonon, F., Konsman, J.-P., Cohen, D., & Boraud, T. (2012). Why
most biomedical findings echoed by newspapers turn out to be
false: The case of attention deficit hyperactivity disorder. PLoS
One, 7, e44275.
Gorman, D. M., & Huber, J. C., Jr. (2009). The social construction of evidence-based drug prevention programs: A reanalysis of data from the
Drug Abuse Resistance Education (DARE) program. Evaluation
Review, 33, 396–414.
Gøtzsche, P. (2013). Deadly medicines and organized crime: How big
pharma has corrupted health care. New York, NY: Radcliffe.
Greenberg, S. A. (2009). How citation distortions create unfounded
authority: Analysis of a citation network. British Medical Journal,
339, b2680.
Greenwald, G. (2009). Drug decriminalization in Portugal: Lessons
for creating fair and successful drug policies. Retrieved January
28, 2011, from http://www.cato.org/pub
Gross, M. (2010). Ignorance and surprise: Science, society and ecological design. Cambridge, MA: MIT Press.
Guyatt, G., Rennie, D., Meade, M. O., & Cook, D. J. (2008). Users’
guide to the medical literature: A manual for evidence-based
clinical practice (2nd ed.). Chicago, IL: American Medical
Association.
Hallam, R. S. (2013). Individual case formulation. New York, NY:
Elsevier.
Healy, D. (2000). The assessment of outcomes in depression: Measures of social functioning. Reviews in Contemporary Pharmacotherapy, 11, 295–301.
Hook, E. B. (Ed.). (2002). Prematurity in scientific discovery. Berkeley: University of California Press.
Insel, T. (2013). Transforming diagnosis. National Institute of Mental
Health. Retrieved May 2, 2012, from www.nimh.nih.gov
Ioannidis, J. P. A. (2005). Why most published research findings are
false. PLoS Medicine, 2, e124.
Jadad, A. R., & Enkin, M. W. (2007). Randomized controlled trials.
Questions, answers, and musings (2nd ed.). Malden, MA: Blackwell Publishing.
Janko, R. (1996). Literature, criticism and authority: The experience
of antiquity. Lecture, October. London, England: University
College.
Johnson, H. M. (2006, December 25). Alas for Tiny Tim, he became a
Christmas cliché. New York Times, A23.
Kay, S., & Vyse, S. (2005). Helping parents separate the wheat from
the chaff: Putting autism treatments to the test. In J. W. Jacobson, R. M. Foxx, & J. A. Mulick (Eds.), Controversial therapies
for developmental disabilities: Fad, fashion, and science in professional practice (pp. 265–277). Mahwah, NJ: Lawrence
Erlbaum.
Kempner, J. (2008). The chilling effect: How do researchers react to
controversy? PLoS Medicine, 5, e222.
Kerwin, A., & Witte, M. (1983). Map of ignorance (Q-Cubed Programs): What is ignorance? Retrieved October 6, 2011, from
http://www.ignorance.medicine.arizona.edu/ignorance
Kirk, S. A., Gomory, T., & Cohen, D. (2013). Mad science: Psychiatric coercion, diagnosis and drugs. New Brunswick, NJ:
Transaction.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.).
Chicago, IL: University of Chicago Press.
Lacasse, J. R., & Gomory, T. (2003). Is graduate social work education promoting a critical approach to mental health practice? Journal of Social Work Education, 39, 383–408.
Laska, K. M., Gurman, A. S., & Wampold, B. E. (2013, December 30).
Expanding the lens of evidence-based practice in psychotherapy: A
common factors perspective. Psychotherapy. Online first publication. doi:10.1037/a0034332
Lawn, P. A. (2001). Toward sustainable development: An ecological
economic approach. Boca Raton, FL: CRC Press.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., &
Cook, J. (2012). Misinformation and its correction: Continued
influence and successful debiasing. Psychological Science in the
Public Interest, 13, 106–131.
Lindstrom, C. M., & Triplett, K. N. (2010). Posttraumatic growth:
A positive consequence of trauma. In T. W. Miller (Ed.), Handbook of stressful transitions across the lifespan (pp. 569–584).
New York, NY: Springer.
Littell, J. H. (2008). Evidence-based or biased? The quality of
published reviews of evidence based practices. Children and
Youth Services Review, 30, 1299–1317.
Loke, T. W., Koh, F. C., & Ward, J. E. (2002). Pharmaceutical advertisement claims in Australian medical publications: Is evidence
accessible, compelling and communicated comprehensively? Medical Journal of Australia, 177, 291–293.
Machlup, F. (1980). Knowledge: Its creation, distribution and economic significance. Princeton, NJ: Princeton University Press.
Marker, C. D., & Aylward, A. G. (2012). Generalized anxiety disorder. Cambridge, MA: Hogrefe.
McGoey, L. (2012). The logic of strategic ignorance. The British Journal of Sociology, 63, 553–576.
Michell, J. (2000). Normal science, pathological science and psychometrics. Theory & Psychology, 10, 639–667.
Moran, G. (1998). Silencing scientists and scholars in other fields:
Power, paradigm controls, peer review, and scholarly communication. Greenwich, CT: Albex.
Norcross, J. C. (Ed.). (2011). Psychotherapy relationships that work:
Evidence-based responsiveness (2nd ed.). New York, NY: Oxford
University Press.
Norcross, J. C., & Lambert, M. J. (2011). Evidence-based therapy relationships. In J. C. Norcross (Ed.), Psychotherapy relationships that
work: Evidence-based responsiveness (2nd ed., pp. 3–21). New
York, NY: Oxford University Press.
Olfman, S., & Robbins, B. D. (Eds.). (2012). Drugging our children:
How profiteers are pushing antipsychotics on our youngest and
what we can do to stop it. Santa Barbara, CA: Praeger.
Payer, L. (1992). Disease mongers: How doctors, drug companies,
and insurers are making you feel sick. New York, NY: John
Wiley.
Petrosino, A., Turpin-Petrosino, C., Hollis-Peel, M. E., & Lavenberg, J.
G. (2013). ‘‘Scared Straight’’ and other juvenile awareness programs for preventing juvenile delinquency. Cochrane Database
of Systematic Reviews, Issue 4, CD002796.
Pignotti, M., & Thyer, B. A. (2009). The use of novel unsupported and
empirically supported therapies by licensed clinical social workers.
Social Work Research, 33, 5–17.
Planck, M. (1949). Scientific autobiography and other papers (Trans. F. Gaynor). New York, NY: Philosophical Library.
Plato. (1993). The last days of Socrates (Trans. H. Tredennick & H.
Tarrant.). New York, NY: Penguin. (Original work published
1954)
Popper, K. R. (1972). Conjectures and refutations: The growth of scientific knowledge (4th ed.). London, England: Routledge & Kegan
Paul.
Popper, K. R. (1992). In search of a better world: Lectures and essays
from thirty years. London, England: Routledge & Kegan Paul.
Popper, K. R. (1994). The myth of the framework: In defense of science and rationality (M. A. Notturno, Ed.). New York, NY: Routledge.
Proctor, R. N. (1995). Cancer wars: How politics shapes what we
know and don't know about cancer. New York, NY: Basic Books.
Proctor, R. N., & Schiebinger, L. (Eds.). (2008). Agnotology: The
making and unmaking of ignorance. Stanford, CA: Stanford
University Press.
Ravetz, J. R. (1993). The sin of science. Knowledge: Creation,
Diffusion, Utilization, 15, 157–165.
Regehr, C., Alaggia, R., Dennis, J., Pitts, A., & Saini, M. (2013). Interventions to reduce distress in adult victims of sexual violence and
rape: A systematic review. Campbell Database of Systematic
Reviews. doi:10.4073
Rendon, J. (2012, March 22). Post-traumatic stress’s surprisingly positive flip side. New York Times, p. 1.
Renner, C. H. (2004). Validity effect. In R. F. Pohl (Ed.), Cognitive
illusions: A handbook on fallacies and biases in thinking, judgement and memory (pp. 201–213). New York, NY: Psychology
Press.
Richardson, E. A., Pearce, J., Mitchell, R., & Shortt, N. K. (2013).
A regional measure of neighborhood multiple environmental
deprivation: Relationships with health and health inequalities. The
Professional Geographer, 65, 153–170.
Rigney, D. (2010). The Matthew effect: How advantage begets further
advantage. New York, NY: Columbia University Press.
Rose, S. C., Bisson, J., Churchill, R., & Wessely, S. (2002). Psychological debriefing for preventing post traumatic stress disorder
(PTSD). Cochrane Database of Systematic Reviews, Issue 2,
CD000560.
Satterfield, T., Slovic, P., Gregory, R., Flynn, J., & Mertz, C. K.
(2010). Risks lived, stigma experienced. In P. Slovic (Ed.), The
feeling of risk: New perspectives on risk perception (pp.
215–234). New York, NY: Earthscan.
Schwitzer, G. (2008). How do US journalists cover treatments, tests,
products, and procedures? An evaluation of 500 stories. PLoS
Medicine, 5, e95.
Smithson, M. (1989). Ignorance and uncertainty: Emerging paradigms. New York, NY: Springer-Verlag.
Smithson, M. (1993). Ignorance and science. Knowledge: Creation,
Diffusion, Utilization, 15, 133–156.
Smithson, M. (2010). Ignorance and uncertainty. In V. A. Brown, J. A.
Harris, & J. Y. Russell (Eds.), Tackling wicked problems through
the transdisciplinary imagination (pp. 84–97). New York, NY:
Earthscan.
Staats, A. (2012). The marvelous learning animal. Amherst, NY:
Prometheus.
Stivers, R. (2001). Technology as magic: The triumph of the irrational. New York, NY: Continuum.
Stuckler, D., & Basu, S. (2013). The body economic: Why austerity
kills. New York, NY: Basic Books.
Summerfield, D. (2001). The invention of post-traumatic stress disorder and the social usefulness of a psychiatric category. British
Medical Journal, 322, 95–98.
Szasz, T. S. (2007a). The medicalization of everyday life: Selected
essays. Syracuse, NY: Syracuse University Press.
Szasz, T. S. (2007b). Coercion as cure: A critical history of psychiatry. New Brunswick, NJ: Transaction.
Tedeschi, R. G., & Calhoun, L. C. (1996). The posttraumatic growth
inventory: Measuring the positive legacy of trauma. Journal of
Traumatic Stress, 9, 455–471.
Tesh, S. N. (1988). Hidden arguments of political ideology and disease
prevention policy. New Brunswick, NJ: Rutgers University Press.
Thornley, B., & Adams, C. (1998). Content and quality of 2000
controlled trials in schizophrenia over 50 years. British Medical
Journal, 317, 1181–1184.
Thyer, B. A. (2005). The misfortunes of behavioral social work:
Misprized, misread, and misconstrued. In S. A. Kirk (Ed.), Mental
health in the social environment: Critical perspectives (pp.
330–343). New York, NY: Columbia University Press.
Timimi, S. (2012). Globalizing mental health: A neo-liberal project.
Ethnicity and Inequalities in Health and Social Work, 4, 154–180.
Tufts, A. (2004). Only 6% of drug advertising material is supported by
evidence. British Medical Journal, 328, 485.
Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., &
Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of
and its influence on apparent efficacy. New England Journal of
Medicine, 358, 252–260.
Walton, D. N. (1999). Informal logic: A handbook for critical argumentation. New York, NY: Cambridge University Press.
Wampold, B. E., & Budge, S. L. (2012). The 2011 Leona Tyler award
address: The relationship—And its relationship to the common and
specific factors of psychotherapy. The Counseling Psychologist,
40, 601–623.
Wampold, B. E., Imel, Z. E., Laska, K. M., Benish, S., Miller, S. D., Flückiger, C., . . . Budge, S. (2010). Determining what works in the
treatment of PTSD. Clinical Psychology Review, 30, 923–933.
Wennberg, J. E. (2002). Unwarranted variations in health care
delivery: Implications for academic medical centers. British
Medical Journal, 325, 961–964.
Young, N. S., Ioannidis, J. P. A., & Al-Ubaydli, O. (2008). Why
current publication practices may distort science. PLoS Medicine,
5, e201.
Zwi, M., Jones, H., Thorgaard, C., York, A., & Dennis, J. (2011).
Parent training interventions for Attention Deficit Hyperactivity
Disorder (ADHD) in children aged 5 to 18 years. Cochrane Database of Systematic Reviews, Issue 12, CD003018.