“Is Milgram’s Deceptive Research Ethically Acceptable?” by Diana Baumrind: Two comments

By Augustine Brannigan
Professor Emeritus of Sociology, University of Calgary
At the recent 2013 “Obedience to Authority Conference” at the University of Nipissing (Bracebridge Campus, August 6-8), Arthur G. Miller commented that, in his view, Diana Baumrind’s latest analysis of Milgram’s ethics continued to be as intemperate, angry and imbalanced as ever. At that point I had not read her position, so it was difficult to either agree or disagree with his assessment. Having now read her contribution, I would have to disagree with Professor Miller, but I understand his misgivings. Professor Baumrind takes a very principled position that premises the consent of the experimental subjects on a full, a priori briefing on the objectives and methods of the experiment. In this comment, I offer two basic points that I believe better ground Professor Baumrind’s critique of Milgram without necessarily taking sides with the Milgram establishment or its critics. My first point raises questions about the use of random assignment of subjects to investigate the outcomes of alternative treatment regimes, and the limits of disclosure that are essential to ensure the reliability of scientific inferences about treatment effects. Here I believe Professor Baumrind overstates her case both conceptually and linguistically. My second point is based on recent archival analyses of the questionable protocols that Milgram actually employed in his research, and that were never acknowledged in his publications. This material gives Professor Baumrind even more powerful reasons to challenge the ethics of his work, although these were not the points from which her critique was drawn.
First point. Clinical treatment regimes in medicine, psychiatry and other applied fields rely on experimental evidence of treatment effects based on random assignments of subjects to a treatment condition (or multiple alternative treatments) and a placebo, or non-treatment, condition. The concept of placebo makes it profoundly ill-advised to inform participants to which condition they have been assigned, since this knowledge introduces a potential expectation effect that undermines the logic of random assignment. The fact is that in many cases where there is legitimate disagreement about the efficacy of alternative interventions, the ‘random assignment controlled experiment’ is the gold standard for resolving such disagreements and contributing to the growth of knowledge. It is not a fundamental human right to be informed to which condition one has been assigned in these types of experiments. It falls short of fully informed consent (to avoid confounding the treatment effect with placebo expectations), but never approaches outright coercion, as in the German and Japanese medical experiments on detained prisoners during World War II. This suggests that the use of this type of methodology and its required subject naiveté is morally defensible. Consequently, it would be incorrect to say that participants have been the subject of “lying” and “deceit,” as Baumrind suggests. These terms are deeply prejudicial, and connote professional culpability. It is more accurate to say that participants have entered into a social exchange in which their participation is “blind” for defensible methodological reasons, and their naiveté is accepted with some expectation, however vague, of a social benefit.
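For readers who want a concrete picture of the procedure at issue, the following minimal sketch (in Python, purely illustrative and not drawn from any of the studies discussed here) shows one way subjects might be randomly assigned to a treatment or placebo condition while the assignment itself is withheld from them; the participant identifiers, function name and conditions are hypothetical.

    import random

    def assign_conditions(participant_ids, conditions=("treatment", "placebo"), seed=0):
        """Randomly assign each participant to a condition.

        The returned mapping is held by the experimenters only; participants
        are not told their condition, which is the 'blindness' at issue here.
        """
        rng = random.Random(seed)
        return {pid: rng.choice(conditions) for pid in participant_ids}

    # Illustrative use: twenty hypothetical participants, two conditions.
    if __name__ == "__main__":
        ids = [f"P{i:02d}" for i in range(1, 21)]
        blind_assignment = assign_conditions(ids)
        # Outcomes would later be compared across conditions; only at
        # debriefing would participants learn which group they were in.
        print(blind_assignment)

The point of the sketch is simply that the assignment, not the existence of the study, is what remains concealed from the participant.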
Do the Milgram experiments meet the standards that we find in clinical trials? There was actually no placebo group in Milgram – unless one counts the preliminary protocols in which subjects received no feedback of harm to the learner, and all subjects administered the maximum shock level. However, we have numerous alternative treatment effects based on proximity, gender, group intervention, location, etc. What is the quantum of scientific insight about human social behaviour that is learned uniquely through such random assignment controlled experiments, and does it justify the misrepresentation of the rationale of the experiment to potential recruits? Professor Baumrind is skeptical that Milgram uncovered anything truly unexpected or novel, or attributable exclusively to his methodology. That is a view that I share in a recent work on genocide (Brannigan 2013). However, that does not mean that his work was unethical. It simply means that, in spite of his blind protocol, he never made much scientific progress. Consider the clinical case in the same light. I do not believe that many observers would condemn as unethical a clinician who tested the effects of alternative treatment interventions (such as drugs or therapies) through random assignment of subjects, but failed to identify significant, alternative treatment outcomes. The evaluation of the ethical propriety of double-blind testing cannot be linked exclusively to specific successful achievements. On the other hand, Baumrind’s position would be more compelling if the case could be made that the extensive utilization of deceptive cover stories during the golden age of experimental social psychology generally had not yielded cumulative, non-trivial knowledge that enjoyed significant consensus in the profession, but nonetheless continued to premise its research protocols on secrecy. That is a tall order to meet, but it would clearly differentiate the justification of blind treatments in clinical work, which have enjoyed palpable results, from the more general field of experimental social psychology, where such a case has been harder to make (Brannigan 2004). I do not believe that Professor Baumrind’s misgivings about the achievements of this particular case amount to a general condemnation of blind assignment in contemporary psychological experiments, one that would set them essentially apart from other uses of this methodology. Given its efficacy in other fields, we have no grounds to equate blind assignment per se with lying.
Having said that, the clinical model, as opposed to the psychological model, is sensitive to the need to revise protocols: to terminate treatments that prove unexpectedly adverse, on the one hand, and ultimately to move the placebo group into the experimental treatment where the latter proves more effective, on the other. Here Baumrind makes an excellent point: whatever reservations Milgram may have had about adverse reactions, there is no evidence that he significantly altered his protocols to minimize these outcomes, and in this respect his blind assignment was qualitatively different from the revisions found in, for example, the CAST studies of ventricular arrhythmia following cardiac infarctions. Certain drug therapies designed to suppress arrhythmia that was thought to be associated with subsequent fatal heart attacks produced increased fatalities in a small number of subjects, contrary to expectations. The medicine fixed the arrhythmias but killed the patients! Adjustments were made to the drug trials to differentiate the arrhythmia suppression drugs implicated, and the different vulnerabilities of the patients, to counter this unexpected effect (Pratt and Moyé 1995). In other words, the blind process was dynamic, and subject to review in light of the very serious mortality outcomes. Milgram seems to have given little consideration to this possibility. His work was not premised on identifying an optimum solution or amelioration of destructive obedience. This sets him apart from the more narrowly focused clinical utilization of the ‘random assignment with control’ method, but it does not automatically obviate all social benefits from this type of method in the social sciences.
I turn now to my second point. Recent work on the Milgram archives by Nicholson, Gibson and Perry provides a much more nuanced account of how the experiment was carried out than was generally reported. Baumrind omits any reference to this work. Ian Nicholson (2011) explicitly describes the laboratory conditions at Milgram’s Yale as “torture.” Nicholson focuses on the debriefing of several subjects by the psychiatrist Paul Errera, who received reports of significant levels of trauma among subjects in the aftermath of the experiment, and on Milgram’s 1962 “reaction of subjects” report. Indeed, Milgram reports being ‘clobbered’ by professional criticism of his methods and, according to Nicholson, he “evidently decided to lie his way out through the criticism” (pp. 744-45), and to report, in Nicholson’s term, “dishonestly” that all his subjects were adequately debriefed immediately following the experiment. Nicholson employs the term “lying” to suggest a departure from professional protocol. “It is clear from the archival record that Milgram did traumatize many of his participants and there is evidence that in several cases he did not help his participants adequately deal with what they had experienced” (p. 746). All this contradicts reports from the Milgram establishment that the negative experiences were minor, transitory and tolerable. These conclusions amplify Baumrind’s skepticism.
Gina Perry is an Australian journalist who has broadcast several documentaries on the research of leading psychologists such as Sherif and Milgram. Again, Baumrind omits reference to this work. The Milgram experiment was replicated at La Trobe University’s psychology department in Melbourne in the 1970s, but never published. Perry came across several accounts of long-lasting trauma from psychology majors who had participated in the study as a program requirement, a condition approaching compulsion, and one reflecting Baumrind’s misgivings about voluntariness. Given the poverty of documentary materials surviving the unpublished replication, Perry undertook an investigation of the original Yale archives. Her work took four years of international travel throughout North America and resulted in a provocative book, Behind the Shock Machine (2012). Some of her key findings include the following. First, she was able to contact several of the original subjects, and found high levels of trauma, resentment, and misgivings among them based on their experiences. Her book is a moving portrayal of the reactions of subjects contacted decades after their exposure to the experiment and still smarting from their experiences. Second, one reason for this trauma was that the vast majority of subjects were not “de-hoaxed” (Milgram’s expression for debriefing) immediately after the study, and the majority departed Yale believing that they had administered electrical shocks to an innocent man – in some cases, a man with a reported heart condition. Subjects who were debriefed were told that their reactions, whether defiant or obedient, were normal. Milgram later characterized the compliance of the obedient subjects as “shockingly immoral.” Third, certain subjects did not accept the cover story about a learning experiment, did not believe anyone was hurt, and complied accordingly. Milgram broadly rejected their skepticism as denial of their culpability. However, Milgram’s assistant, Taketo Murata, wrote a report suggesting that in the vast majority of treatment groups where subjects suspected that anyone was getting hurt, the mean shock levels were lower, indicating that aggression declined where harm was suspected. There was other evidence of suspicious subject reactions. Many asked the Learner to bang on the wall if he could hear them, and offered to switch places. In many cases the Teachers emphasized the correct responses orally to encourage the Learner – all to no avail, and all drawing into question the internal validity of the protocol. Fourth, the Scientist, Mr. Williams, did not adhere rigidly to the four-prong protocol of orders used to pressure subjects to obey. This was especially evident in the all-female condition. One subject was challenged 26 times. Another sat for a half-hour in defiance as Mr. Williams offered her a coffee to encourage compliance. This draws into question the assumption of standardization across the various treatment groups. A similar point was made by Stephen Gibson (2011), who found evidence that at several points, at the insistence of subjects, Mr. Williams left the room supposedly to consult with the Learner to determine if he wished to continue. Williams then reported that the Learner was willing to continue, a fact that undermined the internal validity of the violence. Finally, Milgram suppressed the results of the intimate relationships design, in which close family members were matched as teachers and learners. His evidence of defiance – 85% – contradicted the power he attributed to the transitory ‘situational determinants of action,’ i.e. bureaucracy, and suggested that the almighty ‘power of the situation’ was mediated by longer-term relationships based on intimacy.
On the side of her arguments that emphasize harm to subjects, the unacknowledged fiduciary responsibility towards subjects, and the deficient transparency of Milgram’s methods, Baumrind’s position could be considerably strengthened by acknowledgement of these materials. I hope my comments have contributed in that direction. This will be of little consolation to the Milgram establishment, which has never acknowledged how grave the ethical problems in Milgram’s work actually were. The full Yale archives need to be released to the general public after removing any personal references to individual subjects. Only then will we be able to draw more reliable conclusions about the ethical and methodological integrity of this classic study.
References
Brannigan, Augustine (2004) The Rise and Fall of Social Psychology: The Use and Misuse of the Experimental Method, New Brunswick: Aldine-Transaction.

Brannigan, Augustine (2013) Beyond the Banality of Evil: Genocide and Criminology, Oxford, UK: Clarendon Press.

Gibson, Stephen (2011) “Milgram’s obedience experiments: A rhetorical analysis,” British Journal of Social Psychology. DOI: 10.1111/j.2044-8309.2011.02070.x

Nicholson, Ian (2011) “‘Torture at Yale’: Experimental subjects, laboratory torment and the ‘rehabilitation’ of Milgram’s ‘Obedience to Authority’,” Theory and Psychology 21(6): 737-761. DOI: 10.1177/0959354311420199

Perry, Gina (2012) Behind the Shock Machine, Melbourne: Scribe Books. Revised and re-released in 2013, New York: The New Press.

Pratt, Craig M. & Moyé, Lemuel A. (1995) “The Cardiac Arrhythmia Suppression Trial,” Circulation 91: 245-247.