
For the Record
American Scientist Essays
on Scientific Publication
Sigma Xi, The Scientific Research Society
Research Triangle Park, North Carolina
2011
Sigma Xi, The Scientific Research Society is an international, multidisciplinary
research society whose programs and activities promote the health of the
research enterprise and honor scientific achievement. Sigma Xi publishes
American Scientist magazine, in which these essays originally appeared.
Copyright © 2011 by Sigma Xi, The Scientific Research Society, Incorporated.
ISBN 978-0-615-55514-0 (paper)
Table of Contents
Preface
Jerome F. Baker
1. Honesty
John F. Ahearne
2. A Troubled Tradition
David B. Resnik
3. Authorship Diplomacy
Melissa S. Anderson, Felly Chiteng Kot,
Marta A. Shaw, Christine C. Lepkowski
and Raymond G. De Vries
4. Making Ethical Guidelines Matter
Michael J. Zigmond
5. Digitizing the Coin of the Realm
Francis L. Macrina
6. Raising Scientific Experts
Nancy L. Jones
Bibliography
Preface
Since its founding in 1886, Sigma Xi, The Scientific
Research Society has fostered integrity in engineering and science. The
founding members of the Society pledged in the charter to promote and
encourage “the highest and the truest advances in the scientific field” and
“to lend aid and encouragement” to the newest scientists and engineers.
It was in acknowledgment of this rich history that this booklet was
conceived. The essays contained in this volume were originally printed
in American Scientist, and have contributed to the celebration of Sigma
Xi’s 125th anniversary. Two other ethics booklets published by Sigma Xi
have played a part in training numerous engineers and scientists for the
past 30-plus years. The present volume is intended to build upon Honor
in Science and The Responsible Researcher: Paths and Pitfalls.
The specific focus on peer review and authorship in this booklet
is critical to successfully sharing new scientific knowledge. Authorship
helps define who is responsible for study design, data collection and
analysis, and interpretation of results. Peer review is the quality-control step in the dissemination of knowledge. For the Record will become
a part of Sigma Xi’s efforts to address in a significant way the leadership characteristics of the newest engineers and scientists.
Sigma Xi has been fortunate to have Dr. Elsa Youngsteadt working
on this project for the past 18 months. Her leadership and commitment
have been critical to the booklet’s completion. She has collaborated
with the authors and assisted with the editing of each essay. I want
to extend a sincere thank you to the authors of the essays for their important contributions and ideas expressed in this booklet. Appreciation
is also extended to Katie Lord for her efforts and to Spring Davis for
her design assistance. Finally, I offer a special thank you to Dr. John
Ahearne for encouraging me and this project.
It is our goal that you, the reader, will use these articles to guide
your own research and to help you mentor your colleagues and students.
Jerome F. Baker, Executive Director and Publisher
1
Honesty
Ultimately, ethics in scientific publishing,
as in life, comes down to one word
John F. Ahearne
During my 20-year career in the United States
capital advising several levels of government on matters of energy and
defense, I witnessed many instances of honesty, as well as some of dishonesty, and the consequences of each. Those experiences reinforced
my commitment—one held throughout my adult life—to practice the
virtue of honesty and to instill it in my colleagues. But even though
truthfulness is essential to progress, it is clearly not so easy to uphold.
As the Roman poet Juvenal wrote in the first century A.D., “Honesty
is praised and then left to freeze.” Touted but not applied. This frailty
of human nature, lamented for millennia, clearly has ongoing implications for the progress of both science and society.
What Is Honesty?
A dictionary definition of honesty belies the rigor and complexity
of its practice. According to the Random House description, honesty
is: 1. The quality or fact of being honest; uprightness and fairness. 2.
Truthfulness, sincerity or frankness. 3. Freedom from deceit or fraud.
The commitment required to realize these simple terms is more clearly
implied in a second definition, drawn from Funk & Wagnalls Standard
Handbook of Synonyms, Antonyms and Prepositions: “One who is honest
in the highest and fullest sense is scrupulously careful to adhere to all
known truth and right even in thought.”
Few would contest the desirability of honesty, and good intentions are nearly universal. As Tina Gunsalus, director of the National
Center for Professional and Research Ethics, observes: “Almost everybody wakes up every day and wants to do the right thing.” Later in the
day, the goal may be thwarted; the potential pitfalls are many.
One might ask, for example, what long-term damage could come
from seemingly insignificant transgressions. This reasoning seems to be
a common justification among students who cheat on exams, papers and
even theses. If such students don’t understand who is harmed, it is hard
to convince them that the detriments of deceit outweigh the benefits.
Ethicist Sissela Bok, however, warns that “trust and integrity are precious resources, easily squandered, hard to regain. They can thrive only
on a foundation of respect for veracity.” Thus small transgressions, if discovered, can easily destroy one’s credibility on a larger scale. And, even
if undiscovered, missteps set up the classic “slippery slope” on which
small transgressions lead to larger ones. Habits form, and harm is done
first to one’s self and then to others.
Dishonesty may also take the form of omission, as opposed to
overt deception. Telling the entire truth, without exception, can demand extraordinary courage, as the experience of United Nations Secretary General Ban
Ki-moon illustrates. In early November 2010, Ban met with
President Hu Jintao of China in Beijing. He discussed climate change,
tensions on the Korean peninsula and peacekeeping. However, as the
editors of The New York Times pointed out, “He was shamefully silent on
one critical issue: China’s poor human rights record and its unjustified
imprisonment of Liu Xiaobo, the country’s leading democracy activist
and ... winner of the [2010] Nobel Peace Prize.” Ban has many responsibilities, but speaking truth to power is one of them, and in Beijing he
was unable to deliver.
This is not to say that it is impossible to elevate ethical commitments above the immediate obstacles. In his 1956 book Profiles in
Courage, then-Senator John Kennedy emphasized that individuals
can rise above their desire for personal advantage and advocate positions that they know are right—even when doing so may damage
their careers. Among the courageous figures featured in the book are
John Quincy Adams, Daniel Webster, Sam Houston and Robert Taft.
Although these men held positions that were often right, they suffered
politically for doing so. Robert Taft, for example, was a leading figure in
the Republican Party when he gave a speech attacking the Nuremberg
Trials of Nazi war criminals. Although he did not support any of the
Nazi actions, he concluded that the injustices in the trials were too great
to ignore. He was harshly criticized by his party.
In today’s political environment of bitter attack ads and communications at the speed of the Internet, such courageous positions can
have immediate negative consequences. Perhaps because of that, in recent years principled stands taken by political leaders seldom seem to
be positions that could harm their careers.
A House of Cards
The value of honesty to science is not essentially different from its
value to society as a whole, but the progress and application of science
do depend fundamentally on the truthful reporting of research. As Nobel
Laureate Michael Bishop explained to a
group of high-school students, “Each of
us builds our discoveries on the work of
others; if that work is false, our constructions fall like a house of cards and we must
start all over again.” This dependency is
widely recognized and acknowledged in
science and engineering. Consider, for example, the National Academies’ instruction
manual for new interns, which states that
the “responsible and ethical conduct of research is critical for excellence,
as well as public trust, in social science, science and engineering, and is
considered essential to the preparation of future scientists and engineers.”
In the later decades of the 20th century, examples of scientific misconduct led scientific establishments to formalize ethical guidelines.
One such code was published by the National Academies in On Being a
Scientist: A Guide to Responsible Conduct in Research. First printed in 1989
and now in its third edition, the booklet clearly addresses the role of
honesty and trust in research:
Over many centuries, researchers have developed professional standards designed to enhance the progress of science and to avoid or minimize the difficulties of research.... Researchers have three sets of obligations that motivate their adherence to professional standards. First,
researchers have an obligation to honor the trust that their colleagues
place in them.... Second, researchers have an obligation to themselves.
Irresponsible conduct in research can make it impossible to achieve a
goal.... Third, because scientific results greatly influence society, researchers have an obligation to act in ways that serve the public.
A failure to meet these obligations is corrosive to science. As the authors of On Being a Scientist explain:
The scientific enterprise is built on a foundation of trust.... When this
trust is misplaced and the professional standards of science are violated,
researchers are not just professionally affronted—they feel that the base
of their profession has been undermined. This would impact the relationship between science and society.
Thus a failure to be honest can directly damage the scientific enterprise
and can also erode the public’s faith in science.
Publication and Temptation
Of course, each discipline of science and engineering faces unique
ethical challenges, including the humane treatment of research animals
and the environmental consequences of engineering designs. But one
nearly universal ambition among scientists and engineers at all stages of
their careers is publication. In the words of biologist and former editor-in-chief of Science Donald Kennedy, “in the world of scholarship, we are what
we write. Publication is the fundamental currency; ... research quality is
judged by the printed word.” And as stated in On Being a Scientist:
The rewards of science are not easily achieved. At the frontiers of research, new knowledge is elusive and hard won. Researchers often are
subject to great personal and professional pressures.
Authorship is therefore essential for scientists who seek career advancement in academia, industry and government. But the high-pressure
obligation to publish may drive some researchers to ethical violations.
Triggered by ethical lapses in two prominent physics cases, the American
Physical Society (APS) formed a Task Force on Ethics. The team, led by
Frances Houle, surveyed all APS members who had completed a Ph.D.
within the past three years. The results, published in 2004, were disturbing:
39 percent of respondents said they had personal knowledge of ethical
transgressions, the two most common of which were inclusion of inappropriate authors on a publication and exclusion of appropriate authors. One
respondent wrote that “many breaches of ethics arise from the pressure
to publish.... The recent sad events [show] that it is for many people more
important to publish spectacular results than to publish true results.”
Physicists are not alone in their difficulties with authorship, the
fair assignment of which presents a major and ongoing challenge in
all fields of science and engineering that have been surveyed. Abuse
of power may lead to the exclusion of deserving authors, and “guest”
authorship may be offered to individuals who did not participate substantially in the research. Temptations to cut corners can be great.
Among the ethical transgressions involving authorship, perhaps the most egregious are fabrication, falsification and plagiarism.
According to the book Responsible Science, “fabrication is making up
data or results,” and “falsification is changing data or results.” These
usually involve experimental results. Plagiarism does not: It is “the
appropriation of another person’s ideas, processes, results or words
without giving appropriate credit,” says the Federal Policy on Research
Misconduct. Under pressure, some authors maintain honesty and
follow the guidelines of science. Some do not.
The other side of the publication coin is peer review, a necessary
form of quality control that helps ensure the value of a publication. But
controversy continues about the fairness and adequacy of the process,
and serving as a reviewer can be both an honor and a burden.
W. Robert Connor, the former director of the National Humanities
Center, summed up the ethical complexity of a reviewer’s task in The
Responsible Researcher, a Sigma Xi handbook that I wrote:
[For] investigators who may find themselves asked to participate in peer
review decisions at a relatively early stage in their careers ... there are
a host of issues that need to be thought through—how one deals with
friends or rivals whose applications may be in the pile, how one deals
with approaches and methodologies that may be legitimate but with
which one is not sympathetic, how much one can legitimately “borrow”
from research proposals one reviews, etc.
Clearly, the issues surrounding authorship and peer review are
many. Their nuances are discussed further in these pages. At the root of
any publication decision, however, should be the basic quality of honesty. Without it, the system of credit, responsibility and quality control
in the scientific record is undermined—and the house of cards will fall.
Not So Fast
The virtue of honesty seems to be under great challenge in the world
of blogs, Twitter and television “news” programs. An old saying, often
attributed to Mark Twain, identifies the fundamental problem: “A lie can
travel halfway around the world while the truth is putting on its shoes.”
If only these media were used as often to expose lies and herald truths.
Honesty is necessary for science to advance. Unfortunately, it does
not seem to be necessary for society’s leaders, the individuals who largely
hold the purse strings for science, to practice honesty. Recently, The New
York Times columnist Thomas Friedman wrote about this problem:
When widely followed public figures feel free to say anything, without
any fact-checking, we have a problem. It becomes impossible for a democracy to think intelligently about big issues—deficit reduction, health
care, taxes, energy/climate—let alone act on them. Facts, opinions and
fabrications just blend together.
For the long-term health of the research community and of the
individual, honesty is the best policy.
John F. Ahearne is an adjunct scholar for Resources for the Future and
an adjunct professor of engineering at Duke University. He has served as
commissioner and chairman of the U.S. Nuclear Regulatory Commission,
system analyst for the White House Energy Office, U.S. Deputy Assistant
Secretary of Energy, U.S. Principal Deputy Assistant Secretary of Defense,
executive director of Sigma Xi and director of Sigma Xi Ethics Programs.
He is a member of the National Academy of Engineering and the American
Academy of Arts and Sciences, a fellow of the American Physical Society, the
Society for Risk Analysis and AAAS, and chair of the advisory committee of
the National Academy of Engineering Center for Engineering, Ethics and Society. He received a
Ph.D. in physics from Princeton University.
2
A Troubled Tradition
It’s time to rebuild trust among
authors, editors and peer reviewers
David B. Resnik
By the mid-1700s,
editors at the world’s first scientific journal
had a problem on their hands. Since its
inaugural issue in 1665, the Philosophical
Transactions of the Royal Society of London
had published many outstanding scientific papers, including such classics as
Isaac Newton’s experiments with prisms.
But some authors had begun to submit
works of fiction and rambling speculative
essays. To maintain standards of quality,
the editors of Philosophical Transactions
launched a system of peer review to
evaluate manuscripts before publication.
Two centuries went by, however, before
the system really caught on. In the mid-20th century, increased specialization,
government support for research, and
competition for journal space compelled
editors to seek assistance from experts.
Today, peer review is an essential part
of scientific publication and is also used
to evaluate manuscripts, grants and
academic careers.
The world’s first scientific journal, the Philosophical Transactions of the Royal Society of
London, pioneered a system of peer review to help editors evaluate manuscripts. (Image
from The Online Books Page, http://onlinebooks.library.upenn.edu.)
In publication, peer review serves two distinct functions: It ensures that work is published only if it meets appropriate standards
of scholarship and methodology, and it helps authors improve their
manuscripts. The process begins when authors submit a manuscript to
a journal, often with a list of suggested reviewers and a list of scientists
who should not see the work. The journal editor sends papers of interest to members of the editorial board or outside experts who review
the work for free. These referees assess the manuscript for originality,
importance, validity and clarity. They also advise the editor about the
manuscript’s overall merit and provide written comments—usually
anonymously—for the authors. Finally, the editor decides to publish,
reject or request revisions to the manuscript.
Although it is hard to imagine how science could progress without
independent evaluation of research, peer review is an imperfect system,
fraught with questions of bias, efficacy and ethics. At each step of the
process, there are opportunities and temptations for reviewers to go
astray, and these can take many forms, from simple negligence to intentional abuse for personal gain. If scientific publications are to remain a
reliable record of knowledge and progress, editors and reviewers must
actively cultivate high ethical standards.
The Importance of Trust
It seems that most scientists have a story or two about suspected unethical behavior among reviewers. As a beginning assistant professor, I
submitted a paper on scientific methodology to a prestigious philosophy
journal. The review took a long time—over a year—and when I finally
got a decision, the paper was rejected with little comment. A couple of
months after that rejection, a paper very similar to mine appeared in the
same journal. The article did not plagiarize my work word-for-word, but it
defended a similar thesis using many of the same arguments. I suspected
that the author had served as a referee for my paper and had delayed his
review to prevent my article from being published—or perhaps that he
had pilfered my ideas. It is possible that the author of this competing paper
had independently arrived at conclusions and arguments similar to mine,
and that he had submitted his work to the journal before I did. But I had
no way of knowing whether this was so. In the end, I was left with a bitter
taste in my mouth, and I lost some trust in the integrity of peer review.
It can be hard to determine when a reviewer has abused his or
her position. Unscrupulous referees may plagiarize a submitted manuscript, breach confidentiality, delay the review process in order to stifle
competitors, use data or methods disclosed in a manuscript without
permission, make personal attacks on the authors or require unnecessary references to their own publications.
Incidents such as these violate the foundation of trust that is essential to successful evaluation of scientific manuscripts. Authors, editors
and reviewers must rely on one another to fulfill their roles with honesty, transparency, confidentiality and professionalism. Absent such
trust, the system simply doesn’t work: Authors and editors may ignore
reviews that they think are biased or incompetent. Or, fearing that their
ideas could be stolen, authors may withhold information necessary to
repeat experiments—thereby compromising a key function of scientific
publication. Editors who do not trust reviewers to work carefully and
disclose conflicts of interest may ignore their comments or delay publication by seeking other reviewers. Disillusioned reviewers may submit
careless evaluations or refuse to review manuscripts. Finally, authors
who violate reviewers’ and editors’ trust by submitting fraudulent results can create lasting discipline-wide difficulties for other researchers.
To promote trust among authors, editors and reviewers, it is essential that all parties follow ethical standards. Most policies and scholarship related to scientific publication focus on the ethical duties of authors, but at least two sets of important guidelines do address reviewers.
The International Committee of Medical Journal Editors recommends
that peer review be unbiased and that journals publish their peer-review policies. The Committee on Publication Ethics (COPE), a nonprofit organization of journals, publishers and individuals, has developed guidelines that address confidentiality of peer review, protection
of intellectual property, fairness and conflict-of-interest management.
Some standards of peer review for editors and referees, recognized by COPE and leading authorities on research integrity, are:
Confidentiality: Maintain confidentiality throughout the review process.
Respect for intellectual property: Do not use authors’ ideas, data,
methods, figures or results without permission.
Fairness: Avoid biases related to gender, nationality, institutional
affiliation and career status.
Professionalism: Read manuscripts carefully, give constructive criticism, avoid personal attacks and complete reviews on time. Review
only manuscripts that you are qualified to review.
Conflict-of-interest management: Disclose personal, professional or
financial interests that could affect a review and avoid reviewing an
article if a conflict of interest could compromise judgment.
If referees followed these guidelines faithfully, I suspect there
would be very few setbacks in peer review.
Problems
Despite efforts to encourage professionalism among reviewers, troubles persist. One of the best-documented issues is inefficacy: Reviewers
may miss errors, methodological flaws or evidence of misconduct. To
measure reviewers’ ability to catch errors, Magne Nylenna of the Journal
of the Norwegian Medical Association and his colleagues created two inauthentic manuscripts with obvious methodological flaws such as inappropriate statistical tests. They sent the papers for review and graded
reviewers on the number of flaws they caught. The average score was
only 1.7 out of 4, and more than one-third of the reviewers provided no
comments on methodology at all. Nylenna’s results were not anomalous. In a similar study led by Fiona Godlee of the British Medical Journal,
reviewers discovered, on average, only 2 out of 8 errors introduced into
manuscripts. These studies did not determine why reviewers missed so
much, but they may have simply read the manuscripts carelessly.
Falsification and fabrication are problems that reviewers shouldn’t
have to worry about—but in reality, they must remain alert. One of the most
famous examples occurred in 2004 and 2005, when South Korean researcher
Woo-Suk Hwang and colleagues published two papers in Science claiming
to have developed human embryonic stem-cell lines that were genetically
identical to patients’ cells. The work would have been a breakthrough in regenerative medicine, but in June 2005, a whistleblower declared that some of
the data were fake. Eventually, a university investigation found that Hwang
had fabricated data in both papers, which were then retracted. Whether editors and reviewers should have spotted evidence of Hwang’s misconduct
is unclear—but without access to raw data, detecting fraud is notoriously
difficult. Indeed, the incident prompted the editors of Science to scrutinize
high-profile papers more closely, requiring authors to provide original data
and examining digital images more carefully.
Even when reviewers do catch flaws in a manuscript, they may not
all agree in their assessments. In one recent study, Richard Kravitz of
the University of California, Davis, and colleagues examined reviewer
recommendations for more than 2,000 manuscripts submitted to the
Journal of General Internal Medicine. For editors to publish work with
confidence, reviewers ideally should agree about whether to accept or
reject a manuscript—but in fact, they concurred only slightly more often
than if they had made their decisions by coin toss. Multidisciplinary research can cause particular confusion because reviewers from different
disciplines may accept different methodological standards. In these
cases, editors may feel the need to seek additional reviews, potentially
delaying publication.
A more subtle but equally pervasive problem is reviewer bias. A
reviewer’s evaluation can be influenced by an author’s institutional affiliation, nationality, gender or career status, or by the reviewer’s own
financial or professional interests. For example, a referee might be more
likely to give a favorable review to a friend than to a competitor, or
to favor a well-known researcher from a prestigious institution over a
less-familiar researcher.
Although specific allegations of bias are difficult to prove, the phenomenon has been documented in systematic studies. In a 1982 study,
for example, Douglas Peters and Stephen Ceci selected 12 previously
published psychology papers by authors from prestigious institutions,
then resubmitted the papers to the same journals using fake author
names and institutions. Of nine journals that sent the papers for review,
eight rejected them due to poor quality. The results suggest that the
original reviewers’ favorable evaluations may have been influenced by
the prestige of the authors or their institutions.
Biomedical researchers at the National Institute of Environmental Health Sciences have experienced an
array of ethical problems with peer review. Well over half of those surveyed in 2006 said they had received
at least one biased or incompetent review. More malicious ethical problems, such as breach of confidentiality, were also disturbingly common. The chart ranked survey responses (percent) for reviewer incompetence, bias, unnecessary self-references, personal attacks, delayed review, breached confidentiality, and use of ideas, data or methods without permission. (Illustration by Barbara Aulicino.)
Although the sample
size was small and the experiment lacked controls, larger trials with alternative forms of peer review also suggest that referees are influenced
by their assumptions about authors.
More malicious transgressions, such as intentionally delaying reviews, are less well documented. To fill this gap in the literature, my
colleagues, Shyamal Peddada and Christine Gutierrez-Ford, and I conducted a survey in 2006 to ask scientists about a range of problems in
peer review. The respondents included 220 postdoctoral researchers,
staff scientists, principal investigators and technicians working in 22 different biomedical disciplines at the National Institute of Environmental
Health Sciences (NIEHS). They were about 54 percent male and 44 percent female; 2 percent did not specify gender. On average, they were 42
years old and had about 35 publications each.
Reviewer incompetence was the most common problem this group
reported: More than 60 percent of respondents said they had encountered at least one reviewer who did not read an article carefully, was not
familiar with the subject matter or made mistakes of fact or reasoning
in the review (see the chart above). About half said a reviewer was biased.
Other common problems were that reviewers required unnecessary references to their own publications or made personal attacks in reviews.
About 10 percent of respondents said a referee delayed the review
so that he or she could publish an article on the same topic. Rarer, but
still troubling, were reports that reviewers had breached confidentiality or used ideas, data or methods without permission.
An author’s age and number of publications were both positively
associated with experiencing an incompetent or biased reviewer—perhaps
because a researcher who has published more papers has had more
opportunities to encounter reviewers whom he or she views as biased or
incompetent. Scientists who are well established in a field may also be less
open to criticism from reviewers and therefore be more likely to perceive
reviews as inadequate.
This study did have some limitations. The questionnaire asked for
respondents’ experiences, but we did not attempt to confirm whether
alleged problems actually occurred. For example, some reports of reviewer bias might simply reflect the authors’ dissatisfaction with a referee’s comments. We also did not attempt to determine how frequently
respondents experienced the problems they reported. Finally, our
sample of biomedical researchers working at a government institution
may or may not reflect the experiences of other researchers at other
institutions or in other disciplines.
The study does, however, provide some of the only empirical evidence that scientists regularly experience a range of ethical problems
with peer review. To expand upon our results, future research should examine the prevalence and significance of such problems in peer review,
as well as potential causes—such as inadequate training, or competition
for status or funding. This research might take the form of focus groups,
interviews and surveys with editors, reviewers and authors. The results
could guide both policy development and educational initiatives.
Alternative Forms of Peer Review
Some journals and conferences have adopted or tested alternative forms of peer review. One common alternative, double-blind
review, could serve to reduce reviewer bias because neither authors
nor reviewers know each other’s identities or affiliations. Another alternative is unblinded (or open) review, in which both authors and
reviewers do know each other’s identities—a situation that might encourage ethical behavior among reviewers who cannot hide behind a
cloak of anonymity.
Studies of these two forms of review have, however, yielded
mixed results. Logistics are an important hurdle: In trials of double-blind review, several medical journals found that about one-quarter to
one-half of reviewers were able to correctly guess authors’ identities
despite blinding. And in trials of open review, referees who were asked
to reveal their names to authors often refused to participate.
There is nevertheless evidence that blinding does reduce bias.
Joseph Ross of the Robert Wood Johnson Clinical Scholars Program led
a five-year study, published in 2006, which showed that authors’ nationality and institutional affiliation affected acceptance of abstracts for
the American Heart Association’s annual Scientific Sessions. Among
thousands of abstracts submitted per year, blinded reviewers accepted
about 12 percent fewer abstracts from prestigious institutions than did
reviewers who were aware of authors’ affiliations. Blinded reviewers
also accepted fewer abstracts from within the United States and more
from outside the United States than did unblinded reviewers. These results suggest that blinding reduced bias arising from reviewers’ assumptions about authors’ countries and institutions.
Whether blinding also improves the quality of reviews is unclear.
In 1990, Robert McNutt and colleagues found that blinded reviewers
provided more accurate, well-supported and courteous reviews than
did unblinded reviewers of articles submitted to the Journal of General
Internal Medicine. But several years later, Amy Justice led a similar study
with five different medical journals, and found no effect of blinding on
review quality.
Results for open review have been similarly mixed. One study
at the British Journal of Psychiatry, led by Elizabeth Walsh, found that
when referees revealed their identities to authors, they provided better
reviews, were more courteous, took longer to complete reviews, and
were more likely to recommend publication than were anonymous reviewers. But a pair of studies led by Susan Van Rooyen of the British
Medical Journal found that revealing reviewers’ identities—either to authors or to co-reviewers—did not impact review quality or reviewers’
recommendations.
The discrepancies among these studies of double-blind and open
review could arise from their differing methodologies and sample populations. It is also worth mentioning that none of these studies examined
the most serious ethical issues, such as respect for intellectual property.
Future studies should take these factors into account, and they may
eventually tip the balance in favor of one form of review or another.
Conclusion
Peer review is a key feature of scientific publication, but it is susceptible to bias, inefficacy and other ethical transgressions. Alternative
forms of review have produced only equivocal improvements in fairness and efficacy and have not been tested with respect to other problems. What are the next steps we should take to improve peer review?
First, researchers should receive more education on how to review
scientific articles—a skill that is not typically emphasized during research training. Some scientists do show students and postdocs how
to review papers, and some research institutions cover peer review in
seminars and workshops on research ethics. These practices must become more widespread. In particular, investigators should teach their
trainees how to evaluate articles for scientific merit and to follow ethical standards of peer review. Asking young scientists to help review
papers is a good way to educate them, provided journal editors give
their permission and the process remains confidential.
Journals should also develop and publicize instructions for new
reviewers and policies for reviewers and editors, just as they have done
for authors. Rules should address confidentiality, fairness, conflict of
interest, respect for intellectual property, and professionalism.
Editors should carefully manage the peer review process to prevent or address problems and concerns. They should explicitly inform
reviewers about their journals’ peer-review policies, remind reviewers to
disclose conflicts of interest and return their reports on time, and delete
any personal attacks from reviewers’ comments. If editors have evidence
that a reviewer provided a poor review or abused the process, they should
not invite that person to do other reviews. Living up to these ideals can be difficult when editors struggle to find experts willing to review a manuscript, and when referees submit their reviews late, overlook errors,
or disagree about the quality of a submission. Workshops and conferences
on the subject could help editors to cope with these challenges.
Finally, scholars should conduct additional research on the ethics
of peer review. Our exploratory study of the experiences of NIEHS
researchers suggests that some problems are common, but the results
should be confirmed in other settings. Future work should determine
how often ethical problems occur and how they affect scientists’ attitudes and behaviors. Studies should also address the causes of unethical behavior in journal peer review and the effectiveness of alternatives, such as double-blind or open review, at preventing various types
of transgressions.
There is certainly no perfect solution to the problem of quality
control in the scientific record. Despite its flaws, the system adopted
by the editors of Philosophical Transactions two and a half centuries ago
seems to work as well as any method that has been tried—but its age
and pervasiveness must not foster complacency. While journals, editors and scholars work to understand and regulate peer review, it’s up
to every individual scientist to maintain a thoughtful awareness of his
or her participation in the process. Such vigilance and professionalism
can only improve the quality of reviews, and might even spark new
insights into how the review system could eventually be improved.
David B. Resnik is a bioethicist at the National Institute of Environmental
Health Sciences. He received a Ph.D. in philosophy from the University of
North Carolina, Chapel Hill, and a J.D. from the Concord University School
of Law in Los Angeles in 2000.
Acknowledgments
I am grateful to Bruce Androphy, Zubin Master, Christine Flowers and Adil
Shamoo for helpful comments. This essay does not represent the views of the
NIH, the NIEHS or the U.S. government.
3
Authorship Diplomacy
Cross-national differences complicate
allocation of credit and responsibility
Melissa S. Anderson, Felly Chiteng Kot, Marta A. Shaw,
Christine C. Lepkowski and Raymond G. De Vries
Among scientists, authorship is a very big deal—
and for good reason. It not only establishes the record of scientific progress but also stakes a scientist’s claim to originality and priority. As sociologist Robert Merton noted decades ago, recognition for original work
is the coin of the realm in science. Authorship is the basis for promotion,
tenure, salary, honors and invitations to participate in prestigious initiatives. It is important for collaborating authors to get it right.
Getting it right seems like a simple and straightforward task:
Include those who contributed to the project and omit those who did
not. Most scientists, however, have encountered situations in which coauthors disagreed about who should be included on a publication or in
what order they should be listed. In a recent study, two of us and our
colleague Brian Martinson found that 12 percent of midcareer scientists admitted that they had inappropriately assigned authorship credit
within the previous three years. Such situations may reflect competitive pressures in science or disputes among authors.
Problems with authorship are complicated enough in domestic
research, but they can be particularly thorny in the context of international scientific collaborations. Whether authorship disagreements are
more common in international or domestic research is an open question, but some aspects of cross-national collaboration do complicate
authorship decisions.
International Collaborations
Scientific research is increasingly international in scope and practice. Worldwide, the percentage of science and engineering research
articles with authors from more than one country increased from 8 percent in 1988 to 22 percent in 2007, according to the 2010 Science and
Engineering Indicators compiled by the U.S. National Science Foundation.
Rates of international collaboration as defined in the Indicators are 20 to 30 percent in the United States, China, Japan and India, but around 50 percent in the European Union, in part because recent EU policies and incentives favor international collaboration.
As we consider authorship issues that arise in these collaborative
ventures, we draw on our own and
our colleagues’ work in the recently
published book International Research Collaborations: Much to be Gained,
Many Ways to Get in Trouble and our ongoing research on international
scientific collaborations. Specifically, we use material from 10 focus
groups and 60 interviews that we conducted over the past year with
scientists in the U.S. (and a few outside the U.S.) who are involved in
cross-national research collaborations. When we asked these scientists
about problematic and beneficial aspects of international research, we
inevitably heard about issues with authorship and publication.
Errors of Omission
One of the most obvious problems in collaborative authorship is
omitting authors from a paper. The classic form of omission occurs when
two collaborators are in conflict (professional or personal) and one leaves
the other’s name off a paper out of spite. Such cases are possible in almost
any collaboration—domestic or international. But other forms of omission are more directly linked to cross-national research. Qualifications
for authorship, based on scientific contribution or professional status,
differ internationally. One scientist told us about working with collaborators in another country who were unwilling to give authorship credit
to graduate students simply because of their junior status. He tried to
correct the injustice without triggering professional retaliation against
the students: “You really almost have to be subversive to help younger
people in a way that doesn’t ruin their lives at home, which is not so
simple—but I think it is a huge integrity issue.”
We also heard about researchers who left others’ names off publications in order to advance their own careers. In some countries, a senior scientist may feel entitled to take full credit for a junior colleague’s
work. But the reverse can also happen: Sometimes young scientists
train in labs outside their native countries, then publish the results on
their own once they return home.
Authors who are omitted without having given their consent often
feel wronged, but sometimes authors agree to be left off a publication
In this map of research collaborations, coauthors’ cities are connected by white lines. The brightness reflects both the number of collaborations between a pair of cities and their proximity. The network is based
on author affiliations on scientific papers aggregated in Elsevier’s Scopus database from 2005 to 2009.
(Image courtesy of Oliver Beauchesne, Science-Metrix, with adaptations by Tom Dunne.)
in exchange for some other form of compensation, usually financial.
This arrangement, known as ghost authorship, is a problem in the U.S.
as well as in other parts of the world. In 2010, Shen Yang of Wuhan
University in China released estimates that Chinese academics spent
more than $145 million on ghostwritten papers in the previous year.
That sum is considerable, especially considering reports of low pay to
ghostwriters. For example, Associated Press reporter Gillian Wong in
2010 wrote about a Chinese ghostwriter who received the equivalent of
$45 per paper for composing professors’ research articles. One of our
focus-group participants commented that pressures and financial rewards for publications increase Chinese academics’ willingness to pay
for ghostwriters.
Authors may also remove themselves—either voluntarily or
under pressure—from a publication because they fear repercussions
for having participated in politically or religiously sensitive research.
International collaborators whose research findings may embarrass
their governments—for example by exposing weaknesses in healthcare systems—sometimes ask to be omitted from publications for the
sake of their careers.
Undeserved Credit
An omitted author is clearly denied the recognition he or she deserves, but the addition of undeserving authors can also be damaging.
Extra names dilute the credit allocated to deserving authors and obscure
responsibility for the work. We identify four categories of the added-author problem, distinguishing them according to the motivations for
adding an author. These categories overlap to some extent because motivations can be multiple and may not be fully known.
Surprise authorship is when a researcher finds out after publication
that his or her name appears on a paper. In some cases, collaborators
from different countries do not observe the same practices with regard
to coauthorship and review of manuscripts. One scientist told us about
a paper published by international colleagues: “I found it by stumbling across the paper in the literature. There it is. This is my name, and
there’s the paper, and I have never seen this paper.”
Gift authorship occurs when someone is given more credit on a
paper than he or she deserves. Sometimes a principal investigator decides that it is someone’s turn to be on a publication and arranges for
that person’s name to appear—even if he or she has not done enough to
deserve authorship. In other cases, a senior researcher may decide that
his collaborators need publications more than he does, so he allocates
publication credit generously (or overly generously) to his collaborators.
For instance, in some countries it is common practice to include individuals who have only had administrative oversight. One scientist we
interviewed complained about too-liberal inclusion standards among
his international colleagues: “Sometimes I am very strict about them, basically saying that I’m not going to allow it. Other times I know there is
a political reason why they do it, and so unfortunately I may just let it go
by.” Gift authorship is less benign when it involves an expected quid pro
quo in the form of future assistance, favors or advantages.
Honorary authorship is often equated with gift authorship, but the
motivations are different. Honorary authorship goes to individuals with
higher status, as a way of honoring them personally or in their roles as superiors. One scientist told us about working with collaborators in Europe
for whom it was standard practice to include “out of respect” the student,
the supervisor, the supervisor’s mentor and the department chair.
Often, however, the honor bestowed through unearned authorship is not freely given; rather, it is demanded by supervisors, administrators or funders. This issue came up frequently in our interviews and
focus groups. An epidemiologist told us,
People may find that in an international context, if there is a head of the
laboratory, that person may expect to go on anything regardless of their
contribution or lack of contribution.… So there are issues of sensitivity of
where somebody is in terms of the hierarchy.
A focus-group participant explained:
The culture is different. You may be dealing with a researcher who in fact
has several layers of bosses. And when it comes to negotiating the dollars, they all get involved. When it comes to publications, they all get to
have their name on the paper.
[Chart: international coauthorship as a percentage of articles, 1989–2007, for the European Union, China, eight Asian countries, India, the United States and Japan]
Articles with authors in two or more countries represent the fastest-growing segment of the science and
engineering literature. European Union incentives that encourage collaboration within Europe have had
a striking effect. The eight Asian countries represented by the dotted line are India, Indonesia, Malaysia,
the Philippines, Singapore, South Korea, Taiwan and Thailand. (Illustration by Tom Dunne, adapted from
data published by the U.S. National Science Board.)
In a 2010 Nature article, author Karen Kaplan relayed suggestions from
academics on how to get tenure—including “name a senior department member as a coauthor on your papers if you’re in Europe.”
In legitimizing authorship, a guest author may be listed because of
the credibility that his or her position or status will bring to the publication. As one of our respondents put it,
People really want to have your name on their paper, sometimes on papers where you … didn’t even know that they were doing the study. But
they’re using a little bit of the reagent you gave them which, you know,
you would give anyone freely.… I think that they want your name on a
paper because it may legitimize things.
Another biomedical scientist told us about her experience of being
named an author of a paper by a research group in another country,
to which she had sent plasmids. She thought the study was done
incorrectly, and the findings directly contradicted a paper she had published. She continued,
They were upset because what they wanted was my name on the paper
so that they could submit it to a journal that was a little bit higher up in
the hierarchy, because I already had a reputation in the field. So … this
was very uncomfortable because I was saying to them, “No, your work
isn’t good enough.” And I was trying to find ways not to say that, but
that’s frankly what I felt.
In other cases, legitimizing authorship comes into play when an
author is added to mask the illegitimate contributions of others. For example, pharmaceutical firms may recruit researchers to serve as figurehead authors of company-authored papers in order to hide potential
conflicts of interest. The figurehead is usually paid well to allow his or
her name to appear, frequently replacing the actual authors who may
be acknowledged in a footnote or may be absent altogether.
Out of Order
Trouble can also arise as collaborators work out the sequence in
which their names will appear on a publication. Disciplinary customs
differ, particularly in the significance of the first- and last-named authors.
Cross-national teams are often cross-disciplinary as well, and the order
of authors can be a point of dispute. One of our focus-group members
described collaborating with a high-ranking scientist in another country
who insisted on taking the last position to signify, he said, his minimal
role on the project. In the journal in question, however, the last position indicated a significant responsibility as the corresponding author
for the study. The scientist described it as “a very difficult situation.”
Another scientist we interviewed had worked with a team that included
a member of a royal family (a princess) who expected to be the first author on every publication in her country, regardless of her contribution.
Some collaborative teams adjust the order of authors to share
credit fairly among themselves. They may adopt alphabetical ordering
and rotate the alphabetical list in subsequent papers to take turns being
first author. Some teams include a note that all authors, or some subset
of the authors, contributed equally. Others take advantage of differences in how contributions are measured in different countries. For example, a biological chemist told us,
It worked out nicely.… I was offering them either first or last [author position], thinking that they would of course choose last, but they wanted
first, which was perfectly fine with me. That put me last, so in that sense,
yeah, it was great.
Plagiarism
Plagiarism stakes an implicit
authorship claim on someone else’s
words or ideas. Recent media attention to plagiarism has alerted scientists to the need to check manuscripts for plagiarized material using
increasingly sophisticated software.
Plagiarism among international trainees has long been a concern in many countries. In a 2011 article, research ethicists Elizabeth Heitman of Vanderbilt University and
Sergio Litewka of the University of Miami Miller School of Medicine
discussed numerous factors that may lead to plagiarism. These include
the “normalcy of plagiarism” in some social and cultural environments,
vague integrity standards, and rejection of U.S. concepts of originality
and intellectual property. Plagiarism may also be a strong temptation
for international trainees who have difficulty writing in English but
are under pressure to publish in English-language journals. Such problems are not unique to students. In a letter published in Nature in 2007,
Turkish physicist Ihsan Yilmaz writes,
For those of us whose mother tongue is not English, using beautiful sentences from other studies on the same subject in our introductions is not
unusual.… Borrowing sentences in the part of a paper that simply helps
to better introduce the problem should not be seen as plagiarism. Even if
our introductions are not entirely original, our results are.
Cultural perspectives on authority also influence writers’ attitudes
toward plagiarism. One of our interviewees said, “It took me some years
to figure out that there’s an idea at large that anything that comes from authority figures is held in high regard.” She noted that material in the published literature is assumed to come from a person of authority, so students
may conclude, “What right do I have to change that person’s words?”
Ready access to articles online has made plagiarism easier.
Environmental engineer James Leckie observed in an interview that
students in some cultures do not believe they are doing anything wrong
when they plagiarize. They argue that “if the authors didn’t want you
to use their material, they wouldn’t publish it, but since it’s published
and accessible, it should be free for everyone to use in any way.”
A specific issue in the global context is translation plagiarism—that
is, translating a publication in whole or in part and publishing the
translation as one’s own work without acknowledging the original authors. One U.S. biomedical researcher we interviewed encountered this
problem with a collaborator in a developing country. The collaborator
asked him to review and endorse a book manuscript, which turned out
to be a translation of materials that the U.S. researcher had given to the
collaborator, for which the collaborator intended to take sole credit. The
U.S. scientist saw that the country in question would derive great value
from having the materials in its native language, but he confronted the
collaborator about the plagiarism:
I said, “I’ve got a real problem with this, because I recognize why you’ve
done it and all of this, but the fact is that you have put your name on
other people’s words, and what you did is you translated it.” And he
said, “Well yes!… What’s the problem with that? And aren’t I your
friend? And look at what I have done for you!” and things like that. It
was pretty awkward.
Skills for the Global Context
Authorship sits at the intersection between collective effort and individual ambition. Scientists participate in international collaborations
for many reasons, including a belief that collaboration will benefit all
involved. But the pursuit of individual recognition cannot be completely
eliminated. Collaborators must pay careful attention to authorship in
order to share credit and responsibility fairly among all team members.
The most helpful way to deal with authorship issues is to agree on
general principles for authorship at the beginning of the collaboration
and then to agree on authorship of each article when its content is first
outlined. A U.S. scientist who is experienced at international collaboration told us, “You have to have the guts to tackle [authorship issues]
before you go into it.” Another said,
I found it difficult at first. But it was very clear that if there is any doubt as to
how authorship—especially credit—is going to be divvied up, it is better to approach that before, rather than after, just so everybody has a pretty good idea.
It is clear, though, that discussions alone will not resolve every authorship problem. Several resources provide additional guidance.
Collaborators should consult the policies of the journals to which they
plan to submit their work. International guidance is available through the
Council of Science Editors, the Committee on Publication Ethics and the
International Committee of Medical Journal Editors’ Uniform Requirements
for Manuscripts Submitted to Biomedical Journals. The Singapore Statement,
released in conjunction with the Second World Conference on Research
Integrity in 2010, provides a succinct statement that responsible authorship
is a duty of researchers worldwide. Scientists can join AuthorAID, a free, international research community that supports researchers from developing
countries with services such as networking and mentorship.
The U.S. scientists we interviewed were aware of the ethical complexity of authorship and the importance of meeting high standards in
practice. But they were also careful to take their collaborators’ perspectives into account. A focus-group participant said,
the problem is that our ignorance of the way these kinds of systems work
in other countries can sometimes be really detrimental to the way the
research is performed and expressed and published.
Leckie provides a cautionary example, drawn from his own experience
working with an Asian collaborator:
I once was designing a research activity with this fellow, and he brought
to me very early on several drafts of his part of the proposal. And on it, he
had the name of a division head who was associated with the overall program but had nothing to do with his research activity. The division head
had no expertise relevant to the proposal and was not going to contribute
anything. I told my collaborator, “Look, that guy’s not doing anything.
Take his name off.” And he said, “Well, I can’t.” Then I said, “Well then,
take my name off.” And so we had a real confrontation, and in the end my
collaborator took the name off, and it resulted in an attempt to fire him.
Maintaining the integrity of authorship is complicated in the
global context, but the stakes can be high for all concerned. It is worth
the time and effort required to get it right.
Melissa S. Anderson is a professor of higher education in the Department of Organizational
Leadership, Policy and Development at the University of Minnesota, Minneapolis. She received a
Ph.D. in higher education from the University of Minnesota. She studies research integrity and
international scientific collaboration. Felly Chiteng Kot is a Ph.D. candidate and Marta A. Shaw
and Christine C. Lepkowski are Ph.D. students in Anderson’s department. Raymond G. De Vries is a
professor of bioethics at the Center for Bioethics and Social Sciences in Medicine at the University of
Michigan. He received a Ph.D. in sociology from the University of California, Davis. He is writing a
social history of bioethics and researching the internationalization of ethics guidelines.
4
Making Ethical Guidelines Matter
Professional societies are uniquely positioned
to develop effective codes of conduct
Michael J. Zigmond
Back in 1995, I thought it would be a fairly simple
job to draft a set of guidelines for the responsible communication of scientific results. The Society for Neuroscience (SfN) wanted such guidelines,
and I chaired the committee that drafted them. At first, we assumed that
a small group of people would easily prepare a simple document of a
few pages. The reality proved to be quite different: The process spanned
more than three years, during which our text evolved from a few paragraphs written by two people to about 5,000 words written by a committee of seven, and finally to 13,500 words composed by 13 individuals
with experience in academia, industry, publishing and law.
Illustration by Tom Dunne.
In the end, our guidelines didn’t just
give instructions; they explained the reasons that researchers should behave in certain ways. And in generating those explanations, we sometimes decided to modify the
very rules we had set out to justify. The SfN
continues to refine its guidelines today, and
the process has stimulated much reflection
on the role of professional societies in establishing ethical guidelines—and keeping
them alive, relevant and effective.
Getting Specific
Professional societies have a long and honorable history, tracing
back at least to the 15th century. But only very recently have scientific
societies begun to establish guidelines on research ethics for their members. Today, more than 50 societies have written guidelines, and this is
as it should be: Professional organizations are in a unique position to
promote the responsible conduct of research.
Responsibility for overseeing research ethics has typically fallen
to research institutions rather than societies. But institutions can set
standards only for the most basic and universal matters, such as plagiarism and fabrication of data. Other aspects of research, including
authorship, data management and the sharing of reagents, can be too
specific to a given field to be regulated at the institutional level.
True, professional societies are generally ill equipped to investigate claims of misconduct. They also have limited powers of enforcement and few penalties to impose on those who misbehave. But societies are often in the best position to understand and set standards
of conduct for the specific segment of science that they represent.
Moreover, although there are some 2,500 colleges and universities in
the United States alone, there are far fewer scientific societies—each of
which can efficiently communicate with a large number of scientists.
Finally, societies are increasingly involved in publishing research journals and organizing conferences at which scientists present results. In
such venues, these organizations have a unique opportunity—and an
obligation—to educate their members about responsible conduct.
Integrity in Neuroscience
My own work developing guidelines for professional conduct
is almost solely with the SfN. But my experiences may illustrate how
other societies could begin such an endeavor, what they might do once
guidelines are adopted, and what problems they might encounter. The
Ethics discussions such as this one are an integral component of the professional skills workshops that
the Society for Neuroscience offers at its annual meetings. (Copyrighted photograph by Jeff Nyveen,
courtesy of the Society for Neuroscience, 2010.)
SfN example also illustrates how a society can go beyond guidelines
alone to promote research integrity in other dynamic ways.
The SfN is a relatively young society. The word neuroscience itself did not even appear in the literature until the 1960s, and the SfN
was created in 1969. Since then, both the field and the society have expanded rapidly. Indeed, during the past five years, a quarter of a million research articles on aspects of the nervous system were published.
That amounts to more than 100 papers every day! And the size of SfN
membership has expanded dramatically since its inception, growing
from a few hundred to more than 40,000 members.
Shortly after its inception, the SfN began to take on ethical issues.
In the early 1980s, public concern arose over the treatment of monkeys
at the Institute for Behavioral Research in Silver Spring, Maryland. In
response, the SfN organized a symposium on the use of animals in research, established a standing committee on the treatment of laboratory
animals and human subjects, and developed a formal policy on those
issues. The society also established a general policy on research ethics
and an initial statement about fabrication, falsification and plagiarism,
which was replaced in 1999 with a comprehensive set of guidelines for
responsible conduct in scientific communication.
Also in the early 1980s, the SfN began to offer a Social Issues Roundtable
at the society’s annual meetings. Sessions have included “Neuroscience
in Developed and Developing Countries: Partnership or Exploitation,”
“Perspectives on Gender in Neuroscience” and an overview of several other
ethical issues. The series continues today and adapts to encompass timely
issues. Shortly after the destruction of the World Trade Center towers, for
example, the program committee abandoned the intended topic in favor of
a special session on the impact and treatment of trauma.
As another addition to the SfN meetings, my colleague Beth Fischer
and I began in 1997 to hold annual workshops on professional skills such
as writing research articles and making oral presentations. We firmly
believe that research ethics are best taught in the context of other skills
rather than in isolation, and our workshops adhere to this principle. Skill-centered events tend to attract more students than those that address integrity alone. And where better to discuss authorship criteria and plagiarism
than in a workshop on writing research articles? We use the lunch hour
for case discussions led by neuroscience faculty members, thereby emphasizing that ethical concerns are worth the time of working scientists. And
we establish a supportive culture by acknowledging that students and
researchers do need guidance in professional skills that they might not
learn in traditional training programs. Melissa Anderson, a social scientist
at the University of Minnesota, and her colleagues have shown that such a
culture fosters research integrity. To use the words of Addeane Caelleigh,
former editor of Academic Medicine, we provide a “hidden curriculum” by
presenting a “message of actions rather than formal statements.”
The twin objectives of educating and promoting a culture of responsible conduct also led the SfN to establish in 2003 the ongoing annual
Neuroethics Lectures. At the most recent lecture, Henry Greely spoke of
the many ethical issues raised by discoveries in neuroscience. Among the
difficult questions he addressed were: If we develop cognitive enhancers,
should they be available to anyone or only to those with special needs?
And should we try to detect neurological diseases before symptoms appear even when there are no treatments? At the same meeting, the SfN
held a special session called “The Brain on Trial,” in which neurologists
testified before an actual judge about whether a fictitious individual
should be convicted of murder despite evidence in his brain scans.
In 2006, growing interest in the intersection of neuroscience and
ethics spawned a whole new organization, now named the International
Neuroethics Society. It coordinates its meetings with those of the SfN
and hosts symposia on topics such as global health, predictive biomarkers for disease, and neuroscience and national security.
Starting from Scratch
It was in the context of these expanding ethics initiatives that the
SfN decided to develop guidelines on responsible conduct in communicating scientific results. The stimulus for this committee, which I chaired,
was a concern among some society members that a few journals were
fast-tracking papers—that is, soliciting and then prioritizing certain papers rather than evaluating manuscripts in the order in which they were
submitted. But the scope of our committee quickly expanded.
It is often said that one should not reinvent the wheel, and scientists often (but not always) examine what other people have written
about a topic before initiating research in that area. Of course, learning
too much about what has already been done can close one’s mind to
new possibilities. At least that is
the committee’s excuse for the fact
that when we started working on
our guidelines, we were not hampered by knowledge! (For those
embarking on the task anew, I
would now recommend doing a
little homework. This might include examining existing guidelines and reading “Eighteen Rules
for Writing a Code of Professional
Ethics,” by Michael Davis, a philosopher at the Illinois Institute
of Technology. Caelleigh’s “Roles for Scientific Societies in Promoting
Integrity in Publication Ethics” is another excellent resource.) We did
eventually make considerable use of the pioneering efforts of the
American Chemical Society, which kindly gave us permission to adapt
and extend its guidelines for our purposes.
We might also have begun by considering our objective: How did
we want our guidelines to be used? Mark Frankel, a senior staff member
of the American Association for the Advancement of Science, has been
a major force in encouraging professional societies to promote research
integrity. He has noted that codes of ethics come in three flavors: aspirational (what would we like everyone to do?), regulatory (what must we
do?) and educational (why should we do this?). Education researchers
Felice Levine and Joyce Iutcovich have made a related distinction, noting
that such guidelines generally focus on “(1) general education and professional development, (2) prevention and advisement and (3) complaint
handling and enforcement of codes of ethics.” Levine and Iutcovich add
that societies also vary in the level of effort they devote to each objective.
But our committee did not begin by contemplating either set of
distinctions—or how much effort we would expend. We started out
developing what would have been, in retrospect, a brief set of aspirational guidelines. But over three years, we transitioned to a more regulatory and educational approach, so that our final document included
extensive discussion and reasoning for each guideline. I believe that
this approach made the document not only more valuable but also
more honest. By forcing ourselves to justify each directive, we found
instances in which we could not do so. And rather than use the “because we said so” explanation, we modified the guidelines.
For some topics, our guidelines were at odds with the codes of other
organizations, and we were explicit about this in our document. A case
in point involves the matter of placing prepublication copies of manuscripts on a website. Although this practice is common in some fields,
such as physics, some journals do not wish to consider a manuscript
that has already appeared on the Web or been circulated widely among
colleagues. They may consider such papers to have been previously
published or to have lost their novelty. But our committee felt that both
practices were not only acceptable but desirable because they helped
promote communication among scientists. We did, however, feel obliged
to warn authors that, if they followed our recommendations, they might
risk having certain journals refuse their manuscripts.
Conundrums
The question of sharing manuscripts evoked little debate in our
committee or in the governing council of the SfN. Some issues were
even more straightforward: One cannot imagine, for example, a serious argument in favor of falsifying data. But few aspects of responsible conduct are so simple. Like most sets of guidelines, our document
dealt with many subtler issues, some of which have led to a good deal
of discussion. Among them were authorship, dual publication, plagiarism and sharing reagents.
Authorship: Should the head of a research group automatically be
listed as an author of every publication that derives from his or her
lab? Does a technician who spent many hours collecting valuable data
qualify as an author? The SfN guidelines say no in both cases, stating
clearly that intellectual contribution is an essential criterion for authorship. But it took a lot of discussion for our committee to reach this position, and it is still not universally accepted within our field.
Dual publication: Although we encouraged informal sharing of
manuscripts, our guidelines say that publishing the same material in
two primary research journals is always wrong. But what if an author
works in a country where English, the language of nearly all internationally known journals, is not well understood by those who would
benefit from reading the paper? In such cases, a translation could be
beneficial, and some have argued that the “no dual publication” rule
hampers the distribution of knowledge.
Plagiarism: The prohibition against publishing someone else’s text
or data without permission is clear enough, but what about the use of
someone else’s ideas? If, for instance, a 1993 publication by Jones states:
“We believe that maternal stress often leads to a marked change in the endocrine response to stress in offspring,” then the use of those very words
would require quotation marks and a specific reference to Jones, 1993.
But what if a later author writes: “We think that stressing a mother can
alter the hormonal reaction to a stressor in neonates”? Quotation marks
are no longer appropriate, but surely the minor change in wording still
warrants a reference to Jones. How many words must one change to absolve oneself of providing a citation? Often this is a judgment call, but it
is always better to err on the side of giving Jones some credit.
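A toy illustration (not a plagiarism-detection method anyone in this chapter proposes) makes the point: a simple word-overlap measure applied to the two sentences above shows how little they share lexically, even though the idea is identical. Citation judgments cannot be reduced to counting changed words.

```python
# Toy illustration only: lexical (Jaccard) overlap between Jones's
# sentence and the later paraphrase discussed in the text.
def word_set(sentence: str) -> set[str]:
    """Lowercase a sentence and return its set of unique words."""
    return set(sentence.lower().split())

jones = ("We believe that maternal stress often leads to a marked change "
         "in the endocrine response to stress in offspring")
later = ("We think that stressing a mother can alter the hormonal "
         "reaction to a stressor in neonates")

a, b = word_set(jones), word_set(later)
jaccard = len(a & b) / len(a | b)
print(f"Shared words: {sorted(a & b)}")   # only function words overlap
print(f"Jaccard similarity: {jaccard:.2f}")
```

Despite a similarity of only 0.24, the paraphrase plainly owes Jones a citation—which is exactly why the judgment call cannot be mechanized.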
Sharing research reagents: The SfN
guidelines are quite emphatic about
this, stating that “unique … materials
used in studies being reported must be
made available to qualified scientists
for bona fide research purposes.” This
directive is consistent with the regulations of most biomedical journals and funding agencies. But what of
the student who spent years developing a reagent in order to conduct
specific experiments and now wishes to reap the benefits? Surely, in
evaluating what is best for science, one should also consider what is
best for the careers of scientists, especially junior ones. This tension became all too clear to me at an event that I wrote about in 2003 in Science
and Engineering Ethics. I quote from that report:
A year after the publication of the [SfN] guidelines I decided to help promote an awareness of the document by organizing a workshop at a small
conference of neuroscientists. My focus was on the stipulation that authors be prepared to share with other investigators any materials developed in a lab and described in a peer-reviewed publication—a requirement that … had been considerably strengthened as a result of input
to the guideline committee. I invited several “opinion leaders” to help
with this task, including a member of the editorial board of a prominent
neuroscience journal and the director of a federal agency that supports
research in neuroscience. To bring a little bit of levity to an otherwise
very serious discussion, party hats were provided for each of the discussants. For example, the editor was given the shade of a copy editor and
the director of the funding agency received Uncle Sam’s top hat. Then I
distributed an ethics case:
“Dr. Michelle Tyson is happy. She has just completed three years of hard
work as a postdoc and has a great deal to show for it: She’s developed a
knockout mouse (Syko) that is a model for schizophrenia, published a paper
on that mouse in a prestigious journal, and has secured an assistant professorship at State University. She knows that the new mouse and the paper
were critical in getting her the job and she’s ready to show the search committee that they did the right thing: She will immediately set up her new
lab and then begin to reap the benefits of her hard work by exploring the
neurobiology of this mouse. However, no sooner had she arrived at State
University, than she received an email message from Dr. Max Megalab
asking her to provide a dozen mice from her Syko colony. It is clear from the
email that Dr. Megalab understands the potential of the mouse line and will
have no trouble figuring out and completing the very experiments that Dr.
Tyson had plans to pursue. Should Dr. Tyson provide the mouse?”
Imagine my surprise when no one on the panel felt that Dr. Tyson should
provide the mouse and thereby follow the guidelines for sharing! Instead,
concerns were raised about the fairness of asking a hardworking junior
researcher to turn over the fruits of her labor to Dr. Megalab, the possibility that people would rather postpone publication than share a unique
resource that was critical to their ongoing experiments, the absence of
funds to facilitate the distribution of those materials, or the means by
which the guidelines would be enforced. (For comments on this case,
see [http://skillsandethics.org].) I was still trying to deal with this unexpected outcome the next morning when I returned to the location of the
workshop only to discover that my own key reagents—the party hats—
had apparently been stolen! I have not tried to repeat this experiment.
In a recent review of the original guidelines, the SfN reaffirmed its
mandate that reagents must be shared if they are not otherwise available—
a practice that reduces the waste of funds for making duplicate reagents
and also promotes attempts to replicate published results. But just as in the
case of honorary authorship, the regulation is not universally followed.
The Big Thaw
If some senior scientists refuse to follow an ethical guideline even in
a mere case study, perhaps it is fair to ask: Do guidelines for responsible
conduct matter? Do they make a difference? Unfortunately, we have little
information with which to evaluate this essential question. But then, we
actually have little information about the impact of any ongoing effort to
reduce scientific misconduct—behaviors that are almost certainly very
rare but are nonetheless of great significance to science. And we must
ask, as we do of the proverbial tree falling unheard in the forest, does a
set of guidelines that remains unread make a sound?
In “Honesty” (Chapter 1), John Ahearne quoted the Roman poet
Juvenal, who observed that “honesty is praised and then left to freeze.”
And Caelleigh has written:
Anyone who works to change human behavior engages in “magical
thinking” at some point—such as the irresistible hope that small, simple
changes can produce large, complex results in behavior.… Magical
thinking also underlies the situation when a scientific society passes a
resolution that its members are committed to the highest standards in
all aspects of research and then, based on the resolution alone, expects
members to meet the standards.
As the chair of the initial SfN committee on guidelines for communicating research findings, I worried a great deal about the fate of our
document. How might we avoid letting our guidelines “freeze,” sitting
in the archives only to be thawed when a potential ethical breach was
discovered and procedural guidelines were required? How would we
avoid “magical thinking”—something that should, after all, be anathema
to any group of scientists? As a researcher in the field of brain disorders,
I am interested in preventing diseases as well as treating them, and I
believe that guidelines should serve a similar pair of functions. Thus, my
colleagues and I have sought ways to make our code a more effective
educational tool, not just a reactive mechanism to deal with misconduct.
And I believe that we are beginning to be successful.
To accomplish this, the SfN has added to its lively array of lectures and workshops on ethics, providing several specific programs
that complement the guidelines. This has been particularly evident
since 2010, when the society issued revised guidelines for responsible
communication and established an e-mail address through which individuals can make comments or raise questions about those guidelines. The SfN has also recently hosted two international symposia on
responsible conduct in communicating scientific results, along with a
two-day workshop on the subject. And the society has commissioned
a manual that will contain its guidelines on research communication
along with related ethics case studies, discussion notes and a bibliography for further reading. That manual will be ready for distribution
to SfN members and other interested individuals later this year. We
anticipate that it, too, will help the guidelines serve the educational
purpose for which they were designed and keep them from being relegated to the archives. I urge other professional societies to take similar
approaches if they have not already done so. Let us do away with magical thinking and rewrite Juvenal’s quote to state: “Honesty is praised
and then helps us change the world around us!”
Michael J. Zigmond is a professor of neurology, psychiatry and neurobiology
at the University of Pittsburgh. He is a past secretary of the Society
for Neuroscience and past president of the Association of Neuroscience
Departments and Programs.
5
Digitizing the Coin of the Realm
Electronic publication has transformed
the culture of scientific communication
Francis L. Macrina
Imagine it’s 1991. You’ve just completed a series
of exciting experiments in the lab and now it’s time to write up the
results. You review the instructions for authors of the journal where
you will submit your paper and you consult some rudimentary authorship guidelines published by your scientific society. You are even
inspired to reread a few sections of Robert Day’s classic book How to
Write and Publish a Scientific Paper, then in its third edition. (It’s now
in its sixth.) Engaging your coauthors, you work diligently for a few
weeks to draft and revise your manuscript. Finally, you package the
printed pages, attach the correct postage, and get ready to hoof it over
to the campus postal drop. Before you leave your office, you power
down your desktop computer. As its cooling fan turns quiet, you never
imagine that in 20 years, you’ll be able to carry orders of magnitude
more computing power in your pocket.
Scientific Currency
Over the past two decades, computing has transformed scientific publication, a process so central to the research enterprise that it
is often called the “coin of the realm.” Sociologist Robert K. Merton is
credited with introducing that phrase in the context of science. In his
1968 article "The Matthew Effect in Science," Merton used the phrase
to refer to recognition by one's peers
for one’s work. But over time, the phrase has become more broadly
connected with the concept of authorship. Nuances aside, publishing
one’s research results is a critical step in earning peer recognition.
It is also essential for the progress of science. The value of publishing scientific results has always been indisputable and it will remain
so. But virtually every aspect of the process is, or soon will be, affected by
the digital revolution. Generally, scientists have accepted these changes,
assuming or hoping they were for the better. As the digital landscape
continues to evolve, we need to think about
and systematically examine the impact technology is having on the coin of the realm.
This reflection should lead to engagement and
action by the community of science—editors,
publishers, scientific societies and scientists
themselves—to ensure that the digital revolution has the maximum positive effect on the
reporting of research. I hope that this essay will
stimulate such thinking.
That Was Then
Twenty years ago, you would have used your personal computer
solely as a means to prepare your manuscript. In 1991, when it came to
scientific publication, computers weren’t much more than word processors. But the winds of change were already blowing, and some aspects of the electronic preparation of your 1991 manuscript did portend
things to come. Computers were making it easier to create complex,
high-quality illustrations. You would have used a software program
to compile your list of literature cited and to insert citations into your
manuscript. Such programs provided relief from the burdensome job
of building reference lists, and they were harbingers of the effect digital
tools would have on scientific publication over the next two decades.
For historical perspective, consider the following 1991 truisms.
Communication by e-mail was growing rapidly, but e-mail attachments
were still a few years away. Manuscript submission and review were
solidly grounded in paper and the postal service, but facsimile machines were beginning to accelerate the process. The rise of electronic
journals and open-access publication was years in the future. Unless
you were in the computer sciences, you probably had not heard of the
World Wide Web project. Digital photography was an emergent technology, but the Adobe corporation had only recently launched version
1.0 of Photoshop. And the now-ubiquitous Adobe Portable Document
Format (PDF) did not yet exist. You can expand this list yourself, but I
trust I have made my point. These elements have all contributed to the
rapid transformation of scientific communication.
This Is Now: From Print to Pixels
Let’s take a look at where things stand today by considering how
computers, computing, and the Internet have affected the publication process itself. The availability of detailed, quality information about how to
publish our scholarly work has grown dramatically, creating a valuable
resource that is just a few mouse clicks away. The spartan instructions for
authors (IFAs) of the early 1990s have given way to complex web pages
and downloadable electronic files. Along the way, IFAs themselves have
changed from brief documents that conveyed preparative and administrative instructions to lengthy, detailed compendia of authorship definitions,
responsibilities, expectations and policies. In 1991, the IFA for the journal
Nature amounted to a single printed page of 1,300 words. Today, Nature
publishes its IFA electronically as the “Guide to Publication Policies of the
Nature Journals.” It is an 18-page, 12,000-word PDF.
Such evolution is more likely to be the rule than the exception. I recently reviewed the publication guidelines of five scientific journals for
a study that appeared this year in Science and Engineering Ethics. Most
of these journals had expanded their IFAs into detailed documents.
I also looked at guidelines provided by a few professional societies
and noted that they, too, contained considerable detail about authorship and publication practices, much of which agreed with the journals’ IFAs. In “Making Ethical Guidelines Matter” (Chapter 4), Michael
Zigmond made a compelling case for the role that professional societies can and do play in developing and promoting codes of conduct.
Zigmond chaired the Society for Neuroscience committee that wrote
guidelines for responsible scientific communication. This document is
so comprehensive that it leaves almost nothing to the imagination.
IFAs and society guidelines have expanded for a variety of reasons.
They have become more detailed and precise in response to lessons learned
from high-profile misconduct cases. And they have grown longer to encompass new policies on topics such as digital image manipulation. Taken
together, modern journal IFAs and professional-society
guidelines form the basis for ethical standards and best
practices in scientific publication. Today, this trove of
information is instantly accessible using whatever electronic portal—PC, laptop, tablet or smart phone—suits
you.

(Illustration by Tom Dunne.)

The digital availability of information should be a
catalyst for promoting responsible conduct,
but its mere existence won’t guarantee the
production of ethical researchers. We’ve got
to practice what we preach, and teach what
we practice. The legendary football quarterback Johnny Unitas summed it up before
every game, after the coaches finished their
pep talks. Unitas’s speech was always the
same: “Talk is cheap. Let’s play!”
Computers have not only increased
the availability of ethical guidance; they have also reshaped the work
flow of manuscript preparation, submission, peer review, revision and
publication. At one end of the spectrum, your favorite journal may have
gone digital by mandating that some or all manuscript-related activities be conducted by e-mail. At the other extreme, the publisher may
require the use of a web-based, graphic interface to handle all phases
of submission and review, with e-mail communication augmenting the
process. But across this spectrum of modern digital work flows, the
common denominator is a greatly reduced role for the nonelectronic
exchange of materials.
Clearly, digitization makes the manuscript production-to-publication
cycle more convenient for all parties, especially authors. You’ll have to
accept this as my assertion based on experience and intuition, because
data to support the claim are scarce. But if you published papers 20 years
ago, and still do so today, you’ll know what I mean. I believe that most
scientists do not miss drawing figures (even with early computer programs), photocopying manuscripts and mailing printed papers.
But the notion of convenience should not be confused with speed. To
be sure, the time between acceptance and publication has gotten shorter:
just a few weeks for online articles, compared to months for print articles.
But there’s also the issue of the time from submission to acceptance. If you
look at papers published online, you’ll likely find that the time between
submission and acceptance can be a few months, sometimes longer. The
obvious interpretation is that peer review can take varied, unpredictable
and sometimes excessive amounts of time. This may reflect a process that
is desirably rigorous. But excessive submission-to-acceptance time can
also be a sign of the human foibles of overcommitment or procrastination.
Evidently, even the most attractive graphic interface can’t overcome these
age-old problems among authors, editors and reviewers.
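The two intervals discussed above are easy to measure, because many journals print "Received," "Accepted" and "Published online" dates on each article. A minimal sketch, using made-up dates for illustration:

```python
# Sketch: compute the two publication intervals from the dates
# printed on an article. The dates below are invented for illustration.
from datetime import date

received = date(2011, 1, 10)
accepted = date(2011, 4, 22)
published_online = date(2011, 5, 6)

review_time = (accepted - received).days              # submission to acceptance
production_time = (published_online - accepted).days  # acceptance to publication

print(f"Review:     {review_time} days")   # a few months
print(f"Production: {production_time} days")  # a few weeks
```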
Instant Feedback
The online revolution has changed the way papers are read and
evaluated after they’re published. Studies were once critiqued in
formal letters and reviews—often published in the same journals as the
original papers—and in journal clubs and discussions. The visibility
and pace of those critiques have exploded with the advent of blogs and
other online venues.
For example, an initiative called Faculty of 1000 was established
in the early 2000s as a corporate endeavor to provide postpublication
peer review online. The program selects and enlists scientists, termed
faculty, to scan the literature and comment each month on the papers
they consider most interesting. Their reviews, which highlight good articles and provide constructive criticism, are available by subscription.
Subscribers may also log in and comment on any evaluated article,
but the site is systematically monitored for inappropriate commentary.
Abusive, defamatory or otherwise offensive remarks can be reported
and may be deleted by the service provider.
Such consistent controls are not necessarily in place on independent
blogs, which have also taken on an increasingly visible role in postpublication peer review. One notable example began to unfold in 2010, when
Science magazine published online a research paper about a bacterium
isolated from arsenic-rich lake sediments. The authors reported that this
organism could incorporate arsenic, instead of the usual phosphorus,
into its DNA. The biological implications of the work were huge, and the
paper got considerable exposure in the media. It also attracted scrutiny
from scientists who used their blogs to offer a variety of critical comments. But these evaluations were met with disdain by the paper’s authors, who said they would only respond to critiques that had been peer
reviewed and vetted by Science. Rather than engage with their critics, the
authors simply asked scientists to work to reproduce the controversial
results. This attitude prompted the journal Nature to publish an editorial
which asserted that there is indeed a role for blogging in the assessment
of published results. Still, for some scientists, the speed and directness of
unvetted digital criticism came as an unwelcome surprise. A subsequent news
article in Nature, cleverly titled “Trial by Twitter,” claimed that “blogs
and tweets are ripping papers apart within days of publication, leaving
researchers unsure how to react.”
That may be so. But fast-forward a few months for a completely
different take on the social networking of scientific data. One of the
largest outbreaks of potentially lethal Escherichia coli infections began
in Germany in May 2011. In about six weeks, there were more than
3,000 cases and 36 deaths. Scientists on multiple continents shared biological samples and used online media such as Twitter, wikis and blogs
to compile their data. Within 10 days of the recognition of the outbreak,
the entire genomic sequence of one of the isolated E. coli strains was
available on the Internet. As I write, data analysis is still underway,
but the collaborative research has already yielded new and valuable
information about the E. coli strains involved. The speed and real-time
availability of this genetic analysis is unprecedented and underscores
a powerful use of digital social media in the dissemination of new
scientific knowledge.

(Electronic publication has transformed scientific communication over the past 20 years. A process that was once grounded in paper and the postal service is now conducted almost entirely online. The changes have been most profound for open-access journals, which provide free access for readers. Although many subscription-based publications continue to print their journals, electronic publication has accelerated the exchange of information. Illustration by Barbara Aulicino.)
Access for All
Earlier in this essay, I discussed online publication of articles as
a service provided by publishers to complement the printed versions
of their journals. This form of electronic publication still requires users
to have a subscription or an institutional site license to access online
articles. But a second, slower-growing form of publishing, which also
began in the 1990s, lets readers access online articles for free. The business model for such publications depends on the relatively low cost of
distributing digitally encoded articles. And the expenses of doing so
are borne by the authors themselves or by their academic institutions
or funding agencies.
At a meeting in 2003, a group of scientists, librarians and publishers wrote a document now commonly known as the “Bethesda
Statement on Open Access Publishing,” which has been widely embraced among open-access (OA) publishers. The statement lays out
two conditions that define OA publication. First, it grants “to all users
a free, irrevocable, worldwide, perpetual right of access” to the published work, as well as a license to “copy, use, distribute, transmit and
display the work publicly,” to distribute derivative works based on the
original and to make a few copies for personal use. Second, it promises
that the work will be deposited in at least one online repository that is
supported by an academic institution or other “well-established organization” that supports open access and long-term archiving. In practice, this definition has been widely adopted, with individual variations among publishers.
Although it had a slower start than subscription-based online publishing, the OA model is now viable and growing. A website called The
Directory of Open Access Journals reported the existence of 6,671 OA
journals in all scholarly fields at the end of June 2011. And in a study
published this year in the OA journal PLoS ONE, Mikael Laakso and colleagues reviewed 16 years of OA publishing, from 1993 to 2009. They
report that the number of such journals grew 18 percent per year, while
the total number of scholarly journals grew only 3.5 percent per year.
The numbers of OA journals and articles show impressive growth curves
over the time frame. But despite the increase, these articles accounted for
only about 8 percent of all scholarly papers published in 2009.
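Those two growth rates diverge quickly when compounded. The following back-of-the-envelope sketch is my own illustration; only the annual rates and the 1993–2009 span come from the Laakso study:

```python
# Illustrative compounding of the annual growth rates reported by
# Laakso et al. over the 16-year span (1993-2009). The starting
# journal counts are irrelevant here; only the ratios matter.
years = 2009 - 1993                 # 16 years covered by the study
oa_factor = (1 + 0.18) ** years     # OA journals: roughly 14-fold growth
all_factor = (1 + 0.035) ** years   # all scholarly journals: under 2-fold

print(f"OA journals: about {oa_factor:.0f}x; all journals: about {all_factor:.1f}x")
```

Compounding at 18 percent per year for 16 years yields roughly a 14-fold increase, against less than a doubling at 3.5 percent, which is why OA's share of all papers could grow steadily while remaining small in 2009.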
Other studies have shown that researchers are well aware of OA
journals and increasingly publish their work in them. Perceived advantages include free accessibility to users and the ability to reach a wide
readership. The validity of criticisms, such as diminished prestige and
lower peer-review quality, remains unresolved. The acceptance of OA
publishing and the success of the enterprise may be best expressed by
the fact that PLoS ONE published 6,749 papers in 2010, making it the
world’s largest journal that year. Today, free access is part of the culture
of scientific publication. But it will be interesting to follow the trends as
new publishers enter the marketplace and authors develop a better sense
of the desirability (or undesirability) of publishing in OA journals.
OA publication is often referred to as an “author pays” model,
in contrast to the alternative, in which the user pays a fee to gain access. In fact, authors have long paid something to publish their work. In 1991,
authors often spent several hundred dollars on page charges and reprint fees for a single paper. But for OA publication, the author pays
even more. These journals charge an article-processing fee that can
range from about $1,000 to several thousand dollars, depending on the
journal. If you publish in a subscription-based journal that distributes
articles in print and online, and also offers an OA option, you rack up
additional fees. Today, page charges are $50 to $100 per page, which
would get you into print and online and make your paper accessible
to journal subscribers. Then, you might want to publish supplemental
information on the journal’s website, for a surcharge of up to several
hundred dollars. Finally, you might decide to make your article freely
available from the moment it appears online, a decision that may cost
you several hundred to thousands of dollars more. Today, as in the
past, most journals waive or reduce page charges or OA fees if the author demonstrates that he or she cannot afford them.
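To see how these charges add up, consider a hypothetical tally for a single ten-page paper in a subscription journal that offers an OA option. Every figure below is illustrative, chosen from the ranges quoted above rather than from any particular journal's fee schedule:

```python
# Hypothetical fee tally for one ten-page paper in a subscription-based
# journal that also offers an open-access option. All amounts are
# illustrative, picked from the ranges quoted in the text.
pages = 10
page_charge = 75        # $50 to $100 per page for print plus online
supplement_fee = 300    # surcharge for supplemental material online
oa_fee = 2000           # optional fee for immediate free access

total = pages * page_charge + supplement_fee + oa_fee
print(f"Estimated publication cost: ${total:,}")
```

Under these assumptions the author's bill lands just above $3,000, most of it attributable to the optional open-access fee.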
Although OA journal publishing is the dominant means for putting scientific results into the public domain, other strategies do exist—
and all are facilitated by online technology. Journal articles may be
deposited into any of several public-domain digital archives such as
PubMed Central. Subscription-based journals may allow authors to pay
an extra fee to make their papers freely available online. Finally, researchers may post OA manuscripts on noncommercial servers such as
arXiv.org, or on personal or institutional websites.
Pondering the Power of Pixels
For more than 20 years, we have witnessed the profound effects
of digital technology on scholarly publication. Changes in logistics
and culture have been diverse and numerous. But I would argue that
today, open access is the central issue in the marriage of publication to
the pixel. It may be growing too fast for some and not fast enough for
others, but it is growing nonetheless. I believe it is here to stay. It takes
multiple forms, from journals that exclusively practice free-to-user
availability, to individual investigators who maintain online libraries
of their own published work. Will one model dominate over time? Are
there more models to come? If the past 20 years are any predictor, the
interplay of imagination, market forces and evolving digital technology
will continue to change the publication landscape.
In the meantime, the scholarly community has a role to play in the
development of the OA movement. That community includes authors,
publishers, scientific societies, librarians and computer scientists. OA
journal publishing should be subjected to ongoing evaluation to measure its impact, to address problems and to improve the platform for
all its users. There should be transparent assessment of performance
metrics such as article processing times, citations, peer-review quality
and the costs to those involved.
Once such evaluations have been performed, they may help answer a growing host of questions: Is a goal of 100-percent open access
reasonable or desirable? Should researchers embrace some forms of OA
publication and not others? What about server space, backup and security issues specific to online-only journals? As we move toward a more
OA culture, what role do—and should—printed journals have? Should
there be more proactive education about OA publication? Do we need
to be more forward thinking about who should pay for publication
costs? Many research-funding agencies do pay grantees’ publication
fees, but with OA publishing, the budget may have to increase. Should
our institutions step up to the plate with their checkbooks? To gain
maximum effect, the analyses that address these questions should be
made by parties devoid of conflict of interest, and—in the spirit of open
access—the results should be placed in the public domain.
Francis L. Macrina is the Edward Myers Professor of Dentistry and Vice President
for Research at Virginia Commonwealth University. He is past editor-in-chief of
Plasmid and has served on the editorial boards of the Journal of Bacteriology;
Infection and Immunity; and Antimicrobial Agents and Chemotherapy.
He has served on the ethics practices committees of the American Society for
Microbiology and the American Association for Dental Research.
Acknowledgments
I thank Andrekia Branch for her help in manuscript preparation and Glen
Kellogg for helpful comments.
6
Raising Scientific Experts
Competing interests threaten the scientific record,
but courage and sound judgment can help
Nancy L. Jones
I would be a wealthy woman if I had a dollar for
each time a student, a postdoctoral fellow, one of my colleagues—or
even I—moaned and groaned about the capriciousness of scientific
peer review. Some newbies are stymied in front of their computer
keyboards for months as they write their first manuscript, trying to
organize their meandering paths of research and messy, gray data
into logical experimental designs and strong conclusions. Others, demoralized by pithy, anonymous critiques (surely from their toughest
competitors), have to muster all their restraint to keep from writing
scathing, retaliatory responses to their reviewers. I remember my own
qualms on one of the first occasions that I evaluated grant proposals
with a panel of reviewers. I felt certain that my lack of gamesmanship
was the reason a few outstanding applications were not funded. While
I reviewed my assignments using the criteria given, other reviewers
adamantly championed—and got more attention for—the best of the
proposals they evaluated. No one had prepped me for how the review
committee would operate.
Peer review is one of the central activities of science, but students
and trainees are often ill prepared to assume their duties as authors and
reviewers. Clearly, there is more to peer review and publication than
factual knowledge and technical skills. Science is a culture. To succeed,
we need to nimbly navigate within our professional culture. There is
much that training programs can do to instill professionalism in the
next generation of scientists, and I will outline some of the approaches
that my colleagues and I used to develop an ethics and professionalism
curriculum at Wake Forest University School of Medicine (WFUSM).
But first, if we are to aspire to excellence in scientific publication—and
train young scientists to do the same—it is important to understand the
purpose of scientific publishing and the competing interests that may
compromise it.
The Ideal
The central role of publication is
to create a record that advances collective knowledge. When research and
scholarship are published in a peer-reviewed journal, it means that the
scientific community has judged them
to be worthwhile contributions to the
collective knowledge. That is not to
say a publication represents objective
truth: All observations are made in the context of the observer’s own
theories and perceptions. Research findings must, therefore, be reported in an accurate and accessible way that allows other scientists to
draw their own conclusions. Readers should be able to reinterpret the
work in light of new knowledge and to repeat experiments themselves,
rather than rely solely on the authors’ interpretations.
The nature of scientific progress also requires that the scientific
record include negative results and repetitions of previous studies.
Reporting both positive and negative results informs future work, prevents others from retracing wrong avenues and demonstrates good
stewardship of limited resources. Out of respect for the contributions
of research subjects, especially humans and other primates, some argue
that there is a moral imperative to publish negative results. Doing so
can prevent unnecessary repetition of experiments. On the other hand,
reproducibility itself is a cornerstone of science. There must be a place
to report follow-up studies that confirm or refute previous findings.
Finally, scientific discourse should embrace the principle of questioning certitude—reevaluating the resident authoritative views and
dogmas in order to advance science. The scientific record should challenge the current entrenched ideas within a field by including contributions from new investigators and other disciplines. Examining novel
ideas and allowing them to flourish helps the scientific community uncover assumptions, biases and flaws in its current understanding.
In an ideal world, peer review is the fulcrum that ensures the veracity of each research report before it enters the scientific record. The
prima facie principle for the practice of science is objectivity, but we
all know that true objectivity is impossible. Therefore, science relies
on evaluation by subject-matter experts—peer reviewers—who assess
the work of other researchers. They critique the experimental design,
models and methods, and judge whether the results truly justify the
conclusions. Reviewers also evaluate the significance of each piece of
research for advancing scientific knowledge. This neutral critique improves the objectivity of the published record and assures that each
study meets the standards of its field.
For peer review to serve its intended function, authors, reviewers,
editors and scientific societies must uphold certain ethical obligations,
detailed in the figure on page 49. In short, authors must do their best
to conduct sound, worthwhile research and openly share the results.
Reviewers must be open about potential conflicts of interest, and they
must provide critiques that are fair, thorough and timely. And scientific societies, as guardians and gatekeepers of their specific spheres of
knowledge, must provide a normative process that ensures the rigor
and validity of published results.
The Reality
In fact, the scientific record serves other purposes besides advancing collective knowledge. As a result, highly charged ethical conundrums emerge throughout the publication process. Science is an
interactive process conducted by humans who have their own aspirations and ambitions, which give rise to competing interests—some of
which are listed in the figure on page 49. The inescapable conflict in
science is each individual’s underlying self-interest and commitment
to promoting his or her own ideas. Furthermore, authorship is the primary currency for professional standing. It is necessary for credibility
and promotion within one’s home institution and the scientific community, and is essential to securing research funds.
Indeed, the requirement that scientists obtain grants to support
their research and salaries, coupled with funders’ accountability to the
public for its investment in science, puts intense strain on the system.
Increasingly, the publication record is used to weigh whether public
funding for science is worthwhile. U.S. investment in science and technology has long been tied to the idea that science will give our society
progress and improve our prosperity, health and security. That perspective was famously articulated in 1945 by Vannevar Bush, who was
then director of the U.S. Office of Scientific Research and Development,
and it continues to shape funding for science today.
Although public investment in research has sped the progress of
science, it has also placed scientific communities in an advocacy role.
They are no longer just the guardians of knowledge; they compete for
public resources and champion their specific fields. Their advocacy
cases are often based heavily on promoting the potential outcomes of
research—such as cures, solutions and new economic streams—rather
than justifying support for the research itself. The scientific record is not
immune to this pressure. Scientific societies that publish journals can
be tempted to boost the prestige of their fields by prioritizing highly
speculative, sexy articles and by egregiously overpromoting the potential impact of the research.
Such overpromising is particularly problematic because of the
pervasiveness in our society of scientism and scientific optimism, which
hold that scientific knowledge is truth. According to the philosophy of
scientism, science is universal and above any cultural differences. It is
immune to influences from an investigator’s psychological and social
milieu or gender, and even to the scientific community’s own assumptions and politics. Under the influence of scientism, the public, media
and policy makers can be tempted to apply research results without exercising the judgment needed to put them in context. Individuals who
are deeply vested in scientific optimism can have difficulty seeing any
potential harm as science “moves us toward utopia.” They may even
become confused about science’s ability to make metaphysical claims
about what life means.
But the epistemology of science (how science knows what it
knows) cannot support these unreasonably optimistic conclusions.
Scientific knowledge is tentative. It forms through an ongoing process
of consensus making as the scientific community draws upon empirical evidence as well as its own assumptions and values. And scientific
models—classification schemes, hypotheses, theories and laws—are
conceptual inventions that can only represent our current best understanding of reality. Although these models are essential tools in science,
we must continually remind ourselves, our students and the public
that conceptual models are not reality. Nor is a research article—even a
peer-reviewed publication—the truth.
Authors, reviewers and editors must take care to accurately communicate the kind of scientific knowledge addressed in any given publication, as well as its limitations. Authors should pay careful attention
to inherent biases in their work and tone down overly optimistic conclusions. Reviewers and editors must correct any remaining inflation of the
interpretations and conclusions. And scientific societies need to provide
an adequate understanding of the process of science. They must convey
levelheaded expectations about the speculative nature of any individual
study and about the time and resources that will be needed to realize
the public’s investment in a field of research. Otherwise, the continued
projection of scientism—science is always progress—will erode trust in
science at a much more fundamental level than will the few flaws and
misconduct cases that surface in the scientific record itself.

For scientific publication to serve its intended function—to accurately
advance scientific knowledge—authors, editors, reviewers and scientific
societies need to meet certain obligations (top). To do so, they must
also learn to manage numerous competing interests (bottom). (Illustration
by Tom Dunne.)
Teaching Professionalism
More than ever before, technical skills alone do not assure success as a scientist. Survival depends on operating with finesse, using what
are often called soft skills. Of course scientific communities have an obligation to train their future scientists in the conceptual and methodological
tools for conducting research. But they must also train students to function within the scientific culture, based on a thorough understanding of
the norms, standards and best practices in the student’s specific discipline.
At WFUSM, several of my colleagues and I pioneered a curriculum to promote professionalism and social responsibility within science. Our goal was to equip our students with the tools to navigate the
challenging research culture with high integrity. These included soft
skills such as the ability to recognize ethical issues within the practice
of science, solve problems, work in groups, articulate and defend one’s
professional judgment and critique the judgment of one’s peers. We
also wanted to develop within each student an identity as a scientific
professional, acculturated to the standards of the discipline through
open communication with peers and role models.
To work toward these goals, we chose a problem-based learning
format, to which my colleagues later added some didactic lectures.
Problem-based learning is structured around authentic, engaging case
studies and requires that students gain new knowledge to solve problems in the cases. After a scenario is introduced in one class session, students seek out relevant information on their own, then apply that knowledge to the case during the next class session. Students work actively
in groups, with guidance from facilitators (faculty and postdoctoral fellows) who serve as cognitive coaches rather than content experts.
In our curriculum, the scenarios were designed to provide a realistic understanding of the practice of science and to prompt discussion
of the norms and best practices within the profession. They also required
students to identify ways that the various stakeholders—principal investigators, postdoctoral fellows, graduate students, technicians, peer
reviewers and others—could manage their competing interests. We
constructed activities and discussion questions so that different cases
stressed distinct types of moral reflection. For example, we introduced
two moral-reasoning tools, each one a set of questions that students
could use to systematically sift through the principles, values and consequences in the cases. (Questions included, for instance, “What are
the issues or points in conflict?” and “Can
I explain openly to the public, my superiors or my peers my reasons for acting
as I propose?”) Some sessions focused
on moral character and competence by
requiring students to solve problems
and defend their decisions. Others called
for students to take the perspective of a
professional scientist, thereby building a
sense of moral motivation and commitment. Finally, some cases cultivated moral sensitivity by presenting
the perspectives of multiple stakeholders and promoting awareness
of legal, institutional and societal concerns. Facilitators gave students
feedback on their reasoning, moral reflection, group skills and ability to
analyze problems. During a debriefing activity at the end of each case,
students identified which concrete learning objectives they had accomplished. They also discussed how they were functioning as a group and
what they could do to improve their team dynamic.
The curriculum addressed a range of issues in ethics and professionalism, among which peer review and authorship were important themes.
Cases on scientific authorship required students to investigate, between
class meetings, the criteria by which their own laboratory groups, departments, institution and professional networks assigned authorship credit.
Back in class, each small group collectively assembled a standard operating procedure for assigning credit, and applied it to resolve the authorship problem in the scenario. Cases on peer review called attention to the
various roles of the author, the reviewer and the editor in evaluating a
manuscript. Students identified essential elements of a well-done review,
the greatest ethical risks for a reviewer and strategies to mitigate those
risks. Students then applied this information in their discussions of a case
study in which an up-and-coming researcher was asked to review grant
proposals that could influence her own research or affect a friend’s career.
Transforming the Culture
Although my colleagues and I are confident that students benefited from our curriculum, it takes more than a professionalism course
to really nurture a scientific expert. Opportunities to improve and test
one’s understanding of scientific culture and epistemology should be
pervasive throughout the training experience. To refine one’s judgment requires extensive practice, a supportive climate and constructive
feedback. This means that mentors, graduate programs, societies and
funders must value time away from producing data. Fortunately, scientific knowledge is not the data; it is how we use the data to form and
refine conceptual models of how the world works. So activities that
improve scientific reasoning
and judgment are worth the
investment of time.
More attention must be
paid to the epistemology of
science and the underlying
assumptions of the tools of
the trade. As methods and experimental approaches become entrenched
in a field, rarely do students return to the rich debates that established
the current methodology. This lack of understanding comes to light
when prepackaged test kits, fancy electronic dashboard controls and
computer-generated data tables fail to deliver the expected results. To interpret their own research and critique that of their peers, scientists need
to understand the basis of the key conceptual models in their discipline.
The best way to develop sound scientific judgment is to engage
with the scientific community—friend and foe alike—to articulate, explain and defend one’s positions and to be challenged by one’s peers.
This learning process can take place in laboratory discussions, journal
clubs, department seminars and courses, as well as during professional-society functions and peer-review activities. As my colleagues and I
learned from the evaluations of our problem-based learning course, students (and faculty) need explicit instruction on the goals, expectations
and skills of these nondidactic activities. After a laboratory discussion,
for example, time could be spent reviewing what was learned, giving
feedback on how to develop soft skills, and providing an opportunity to
collectively improve the group process. Students should be evaluated on
their meaningful participation in these community activities.
Professional societies also have an important role to play in fostering professionalism among young scientists. As Michael Zigmond
argued in “Making Ethical Guidelines Matter” (Chapter 4), societies are
uniquely positioned to develop effective, discipline-specific codes of
conduct to guide the standards of their professions. Criteria and practical guides to authorship and peer review are important—but they’re
not enough. We must lift the veil and show how seasoned reviewers
apply those criteria. Discussions among reviewers and editors about
how they put the guidelines into practice are the best way to move forward ethically. Indeed, such rich exchanges should be modeled in front
of the entire research community, especially for students, showing how
different individuals apply the criteria to critique a paper or proposal
and then respectfully challenge each other’s conclusions. Societies and
funders could also provide sample reviews with commentary on their
strengths and weaknesses, and how they would be used to make decisions about publication or funding.
As graduate programs and societies implement such programs,
they also need to ask themselves if their activities are actually conducive
to open scientific dialogue. Sometimes, the cultural climate stifles true
engagement by tolerating uncollegial exchange or by allowing participants to float in unprepared for substantive discussion. There must
be a spirit of collective learning that allows students to examine the
assumptions and conceptual models that are under the surface of
every method and technique. No question should be too elementary.
Arrogant, denigrating attitudes should not be tolerated.
Finally, scientists should foster commitment to their profession
and its aspirations and norms. This goal is best accomplished through
frank discussions about how science really works and about the various competing interests that pull on a scientist’s obligations as author, peer reviewer and, sometimes, editor. Many students enter the
community vested in scientism—living above the ice-cream parlor, if
you will. Viewing life through those idealistic, optimistic lenses causes
them to stumble into an epistemological nightmare the first time they
try to make black-and-white truth out of their confusing data. Or even
worse, they become sorely pessimistic after naively smacking headfirst into the wall of disillusionment when trying to publish in a prestigious journal or competing, for the first time, for an independent research grant. We must provide opportunities that afford socialization
around the principles, virtues and obligations of science. Our faculty
and trainees should freely discuss how they have dealt with their own
competing interests and managed conflicts within peer review and authorship. All participants need to enter these conversations with a willingness to learn from others and address how to improve the culture.
I’ll end by positing a new definition of professionalism: A scientist,
in the face of intense competing interests, aspires to apply the principles
of his or her discipline to support the higher goal of science—to ethically advance knowledge for the good of humankind. Professionalism
takes courage, but when leaders display this courage, the journey for
those who follow is better.
Nancy L. Jones is a health-science policy analyst at the National Institute of
Allergy and Infectious Diseases. She received a Ph.D. in biochemistry from
the Wake Forest University School of Medicine (WFUSM) and an M.S. in
bioethics from Trinity International University (TIU) in Deerfield, Illinois.
For 17 years, she was on the full-time faculty of WFUSM, conducting basic
biomedical research on the role of macrophages in atherosclerosis. She shifted
to a science-policy career beginning with a 2005–07 American Association
for the Advancement of Science and National Institutes of Health (NIH)
Science Policy Fellowship. As an adjunct faculty member at WFUSM and
TIU, from 2005 to 2008 she helped develop a National Science Foundation–supported, two-year
curriculum in ethics and professionalism for doctoral students in the biomedical sciences. This
article does not represent NIH views, nor was it part of Jones’s professional NIH activities.
Bibliography
Ahearne, J. 1999. The Responsible Researcher: Paths and Pitfalls. Research Triangle Park, NC: Sigma Xi.
Anderson, M. S., and N. H. Steneck, eds. 2010. International Research Collaborations: Much to be Gained,
Many Ways to Get in Trouble. New York: Routledge.
Bebeau, M. J., et al. 1995. Moral Reasoning in Scientific Research: Cases for Teaching and Assessment.
Bloomington: Poynter Center for the Study of Ethics and American Institutions.
Bethesda Statement on Open Access Publishing. 2003. Accessed July 9, 2011. http://www.earlham.
edu/~peters/fos/bethesda.htm.
Bok, S. 1978. Lying: Moral Choice in Public and Private Life. New York: Pantheon Books.
Bush, V. 1945. Science: The Endless Frontier. Washington: United States Government Printing Office.
Available online at http://www.nsf.gov/od/lpa/nsf50/vbush1945.htm.
Caelleigh, A. S. 2003. Roles for scientific societies in promoting integrity in publication ethics. Science
and Engineering Ethics 9:221–241.
Committee on Publication Ethics. 2010. Code of Conduct. Accessed September 4, 2010. http://
publicationethics.org/files/u2/New_Code.pdf.
Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National
Academy of Engineering and Institute of Medicine. 2009. On Being a Scientist: A Guide to Responsible
Conduct in Research, third edition. Washington: National Academies Press.
Davis, M. 2007. Eighteen rules for writing a code of professional ethics. Science and Engineering Ethics
13:171–189.
Day, R. A., and B. Gastel. 2006. How to Write and Publish a Scientific Paper, sixth edition. Westport, CT:
Greenwood Press.
Fischer, B. A., and M. J. Zigmond. 2001. Promoting responsible conduct in research through “survival
skills” workshops: Some mentoring is best done in a crowd. Science and Engineering Ethics 7:563–587.
Fisher, M., S. B. Friedman and B. Strauss. 1994. The effects of blinding on acceptance of research
papers by peer review. Journal of the American Medical Association 272:143–146.
Frankel, M. S. 2000. Scientific societies as sentinels of responsible research conduct. Proceedings of the
Society for Experimental Biology and Medicine 224:216–219.
Friedman, T. L. 2010. Too good to check. The New York Times (November 16).
Godlee, F., C. Gale and C. Martyn. 1998. Effect on the quality of peer review of blinding reviewers
and asking them to sign their reports: A randomized controlled trial. Journal of the American
Medical Association 280:237–240.
Gudeman, K. 2010. University of Illinois to develop national center for ethics in science, mathematics
and engineering. Coordinated Science Laboratory News. Accessed January 19, 2011. http://csl.
illinois.edu/news/university-illinois-develop-national-center-ethics-science-mathematics-and-engineering.
Hamilton, J., et al. 2003. Report of Ethics Task Force to APS Council. Accessed January 19, 2011. http://
www.phys.utk.edu/colloquium_blume_spring2004_ethics.pdf.
Heitman, E., and S. Litewka. 2011. International perspectives on plagiarism and considerations for
teaching international trainees. Urologic Oncology 29:104–108.
International Committee of Medical Journal Editors. 2010. Uniform Requirements for Manuscripts
Submitted to Biomedical Journals. Accessed September 6, 2010. http://www.icmje.org/urm_
main.html.
Iserson, K. V. 1999. Principles of biomedical ethics. Emergency Medicine Clinics of North America
17:283–306, ix.
Jones, N. L. 2007. A code of ethics for the life sciences. Science and Engineering Ethics 13:25–43.
Jones, N. L., et al. 2010. Developing a problem-based learning (PBL) curriculum for professionalism and
scientific integrity training for biomedical graduate students. Journal of Medical Ethics 36:614–619.
Justice, A. C., et al. 1998. Does masking author identity improve peer review quality? A randomised
controlled trial. Journal of the American Medical Association 280:240–242.
Kennedy, D. 1997. Academic Duty. Cambridge: Harvard University Press.
Kirby, K., and F. A. Houle. 2004. Ethics and the welfare of the physics profession. Physics Today 57:42–46.
Kravitz, R., et al. 2010. Editorial peer reviewers’ recommendations at a general medical journal: Are
they reliable and do editors care? PLoS ONE 5:e10072.
Laakso, M., et al. 2011. The development of open access journal publishing from 1993 to 2009. PLoS
ONE 6:e20961.
Levine, F. J., and J. M. Iutcovich. 2003. Challenges in studying the effects of scientific societies on
research integrity. Science and Engineering Ethics 9:257–268.
Macrina, F. L. 2011. Teaching authorship and publication practices in the biomedical and life sciences.
Science and Engineering Ethics 17:341–354.
Mandavilli, A. 2011. Trial by Twitter. Nature 469:286–287.
Martinson, B. C., M. S. Anderson and R. De Vries. 2005. Scientists behaving badly. Nature 435:737–
738.
McNutt, R. A., et al. 1990. The effects of blinding on the quality of peer review: A randomized trial.
Journal of the American Medical Association 263:1371–1376.
Nature Editors. 2010. Response required. Nature 468:867.
The New York Times Editors. 2010. Mr. Ban pulls his punches. The New York Times (November 2).
Nylenna, M., P. Riis and Y. Karlsson. 1994. Multiple blinded reviews of the same two manuscripts:
Effects of referee characteristics and publication language. Journal of the American Medical
Association 272:149–151.
Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences,
National Academy of Engineering, Institute of Medicine. 1992. Responsible Science, Volume I:
Ensuring the Integrity of the Research Process. Washington, D.C.: National Academies Press.
Peters, D., and S. Ceci. 1982. Peer-review practices of psychological journals: The fate of submitted
articles, submitted again. Behavioral and Brain Sciences 5:187–255.
Resnik, D. B., C. Gutierrez-Ford and S. Peddada. 2008. Perceptions of ethical problems with scientific
journal peer review: An exploratory study. Science and Engineering Ethics 14:305–310.
Ross, J. S., et al. 2006. Effect of blinded peer review on abstract acceptance. Journal of the American
Medical Association 295:1675–1680.
University of Notre Dame. 2010. Responsible Conduct of Research Statement. Accessed January 19, 2011.
http://or.nd.edu/compliance/responsible-conduct-of-research-rcr/responsible-conduct-of-research-statement/.
Van Rooyen, S., et al. 1998. Effect of blinding and unmasking on the quality of peer review. Journal of
the American Medical Association 280:234–237.
Van Rooyen, S., et al. 1999. Effect of open peer review on quality of reviews and on reviewers’
recommendations: A randomised trial. British Medical Journal 317:23–27.
Walsh, E., et al. 2000. Open peer review: A randomised controlled trial. The British Journal of Psychiatry
176:47–51.
Wong, G. 2010. In China, academic cheating is rampant. The Boston Globe (April 11).
Zigmond, M. J. 2003. Implementing ethics in the professions: Preparing guidelines on scientific
communication for the Society for Neurosciences. Science and Engineering Ethics 9:191–200.