
Running head: SCIENTIFIC VALUES
Why a Focus on Eminence is Misguided:
A Call to Return to Basic Scientific Values
Katherine S. Corker
Grand Valley State University
Draft submitted 2/26/17
Perspectives on Psychological Science
Word count (body): 1498
Abstract
The scientific method has been used to eradicate polio, send humans to the moon, and enrich
understanding of human cognition and behavior. It produced these accomplishments not through
magic or appeals to authority, but through open, detailed, and reproducible methods. To call
something “science” means there are clear ways to independently and empirically evaluate
research claims. There is no need to simply trust an information source. Scientific values thus
prioritize transparency and universalism, emphasizing that it matters less who has made a
discovery than how it was done. Yet scientific reward systems are built around identifying individual eminence. The current paper contrasts this focus on individual eminence with reforms that would better align reward systems with scientific values.
Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values
What makes science as a way of knowing special? Why do we accord knowledge derived
from the scientific method a privileged position compared to common sense, appeals to authority
figures, or other forms of rhetoric? If scientists rely on their own expertise as justification for
prioritizing their claims, then we are, in fact, not better positioned to make truth-claims than
religious, political, and other leaders (Lupia, 2013). On the contrary, science’s special claim on
truth comes not from its practitioners’ training and expertise, but rather from its strong adherence
to norms of transparency and universalism (Merton, 1973; Anderson, Martinson, & De Vries,
2007; Nosek, Spies, & Motyl, 2012). Transparency means that the methods used to reach a
scientific conclusion are laid bare for all to see, so others need not trust but can instead “see for
themselves.” Universalism means that scientists reject claims of special authority; it matters far
less who did the research than how it was done.
How, then, do we square these scientific ideals with a scientific culture that fetishizes the
lone scientific genius? You know the stereotype – the lonely, workaholic scientist chained to his
bench, exhausted, yet somehow consistently making groundbreaking discoveries (Diekman,
Brown, Johnston, & Clark, 2011). Working scientists know that this is not an accurate job
depiction. Yet the narrative of the “scientific hero” and “famed researcher” persists – even as the
field recognizes that the methods used to produce a scientific claim are more important than the
eminence of the person who produced it.
I propose, in agreement with many others (Chambers et al., 2015; Nosek et al., 2015;
Open Science Collaboration, 2015; Smaldino & McElreath, 2016; Spellman, 2015; Vazire,
2016), that our current methods of identifying research eminence are flawed and ultimately
misplaced. I review several problems that impede scientific progress, which stem from structures
that support eminence but undermine scientific quality. I then consider possible ways we might
define research excellence in the future.
Why a Focus on the Lone Scientific Genius is Flawed
An overemphasis on individual researcher excellence hurts psychological science for
three reasons. First, the current concept of eminence reflects values that are likely
counterproductive for maximizing scientific knowledge. We give jobs, tenure, full
professorships, grants, and awards to researchers who meet arbitrary criteria for excellence. The
current value system privileges quantity over quality, as well as the outcome of research rather
than the process itself. Publications in high impact journals, and citations to those papers, are the
primary – some would argue the only – currency used to identify excellence (Ruscio, 2016). It
has been widely noted that this emphasis on publication quantity produces undesirable
consequences for scientific quality. The focus is on getting research published, not getting it right
(Nosek et al., 2012). Current ways of defining excellence place too much emphasis on the wrong
things (impact factor, number of contributions) and not enough emphasis on the right ones
(validity, transparency, and openness).
Second, modern scientific research is done in (often large) teams of individuals (Wuchty,
Jones, & Uzzi, 2007), yet evaluation of excellence happens at the individual level, and teamwork
is undervalued. Cooperative work often advances scientific progress more quickly, as illustrated
by the success of major collaborative efforts like the Large Hadron Collider in physics or the
Human Genome Project in biology. Psychology, too, is beginning to reap the benefits of large-scale team science (Open Science Collaboration, 2015; Klein et al., 2014). Bibliometric analysis
suggests that teams are more likely than individuals to produce novel contributions, and such
work has a citation advantage (Uzzi, Mukherjee, Stringer, & Jones, 2013), yet evaluators (e.g.,
tenure committees, grant funders) may undervalue scientists’ contributions to team efforts.
Finally, systemic biases – structural sexism, racism, and status bias – are likely to affect
how we identify who qualifies as eminent under the status quo. The stereotypical scientist better
matches stereotypes of male than female gender roles (Diekman et al., 2011), and this can impact
evaluation of scientists (Eagly & Miller, 2016). Indeed, a host of social biases can infect the peer
review process (Lee, Sugimoto, Zhang, & Cronin, 2012). One recent experiment (Okike, Hug,
Kocher, & Leopold, 2016), echoing an older one (Ceci & Peters, 1982), showed that research
labeled as coming from a more prestigious institution was more likely to receive an “accept”
recommendation than the same paper with institutional and author identities withheld. From the
perspective of the value of universalism, such findings are disappointing. A researcher’s gender,
nationality, race, or institution should not matter in assessing research quality, though it often
does.
Defining and Recognizing Eminence in Science 2.0
Psychological science is changing (Spellman, 2015). Technological advances (e.g., open
workflow systems like the Open Science Framework) and new opportunities for collaboration
(driven by online and social media based connections) promise to fundamentally reshape how
psychologists go about their work and evaluate scholarship. These changes are poised to improve
science by better aligning scientific incentives with scientific values. I close by examining how
eminence can be characterized in light of these changes. Attention to these issues will stimulate a
more inclusive, diverse, and robust psychological science.
First, structural changes (coming from scientific societies, journals, and research funders)
should be initiated to help researchers reward and evaluate quality research. I define quality
research here as work that is reproducible, transparent and open, and likely to be high in validity.
One such initiative is Registered Reports (Chambers et al., 2015). This publishing model inverts
the review process, so that research is peer-reviewed prior to analysis of data. This model puts
the emphasis of review on quality of research design instead of statistical significance of results.
Another initiative involves changes to the policies of scientific societies and research funders. For instance, the German Psychological Society (DGPs) has recently made open data the default, incentivizing researchers to share data more freely (Schönbrodt, Gollwitzer, & Abele-Brehm, 2016). Such policies help to
shift scientists’ norms, first by decoupling success from getting p < .05 and second by setting the
stage for researchers to receive credit for generating rich datasets that are useful to the scientific
community.
Second, relatedly, we can do a much better job of recognizing and rewarding the many
activities that researchers do that support scientific discovery beyond publishing peer-reviewed
articles. Among these currently undervalued activities are developing scientific software (e.g., R packages or experimental web applications), generating large datasets with potential for reuse, and writing data-analytic code and tutorials that teach others to use it (e.g., for multilevel modeling or Bayesian applications). Thus, we ought to broaden the concept of eminence to
include this fundamental, and extremely valuable, scientific work.
Third, we need to reevaluate the ways that we assess individual researchers’ excellence in
light of the value and promise of team-driven research. Science is a communal endeavor – the
scientific community works together to try to understand the natural and social world. It seems
problematic that we almost exclusively reward and promote individual researchers and ignore
fundamental contributions of teams. A related, but distinct, problem involves research funding
structures that support giving few individuals very large research grants, thereby excluding the
rest of the research community. Recent studies suggest that such a tactic, though it may further
evaluations of individual eminence, has diminishing returns for scientific quality (Fortin &
Currie, 2013; Mongeon, Brodeur, Beaudry, & Larivière, 2016). Funders may be better served by
shrinking the size of awards and supporting more research teams with their limited funds. Thus
evaluating individuals’ contributions to teams, and rewarding team excellence on its own, ought
to figure prominently in how we recognize eminence going forward.
Finally, to combat structural and systemic problems associated with recognizing
eminence mentioned above, double-blind peer review (in which reviewers do not know the identities of authors or candidates) ought to be considered standard practice for journal publication, grant funding, and awards committees. Technological solutions could even be developed to allow departments to blind applications in early stages of faculty hiring. Although blinding is
not a panacea (and in some ways it is antithetical to openness), evidence suggests that blinding is
associated with higher levels of diversity (e.g., Roberts & Verhoef, 2016, in conference
submissions), and it reduces the impact of status bias (Okike et al., 2016). It may still be possible
to ascertain an author or candidate’s identity during blinded review, but that possibility shouldn’t
stop us from trying to improve inclusion.
Closing Thoughts
The raison d'être for this symposium was to better understand and measure research
eminence – that special quality or talent that distinguishes famous psychologists from the rest. I
argue that an obsession with eminence actually undercuts scientific progress by shifting attention
from the process of science to qualities of individuals. Moreover, alternative ways of defining
research excellence might promote both reproducibility and greater inclusiveness. Solutions
proposed here help researchers prioritize scientific values of transparency and universalism.
After all, what makes science trustworthy is not that it is done by extraordinary scientists, but rather scientists' dogged adherence to open, transparent, and reproducible methods as they work together to advance our cumulative body of knowledge.
References
Anderson, M. S., Martinson, B. C., & De Vries, R. (2007). Normative dissonance in science:
Results from a national survey of U.S. scientists. Journal of Empirical Research on
Human Research Ethics, 2(4), 3-14. doi: 10.1525/JERHRE.2007.2.4.3
Ceci, S. J., & Peters, D. P. (1982). Peer review: A study of reliability. Change,
14, 44-48. Retrieved from http://www.jstor.org/stable/40164010
Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K.
(2015). Registered reports: Realigning incentives in scientific
publishing. Cortex, 66, A1-A2. doi: 10.1016/j.cortex.2015.03.022
Diekman, A. B., Brown, E. R., Johnston, A. M., & Clark, E. K. (2011). Seeking congruity
between goals and roles: A new look at why women opt out of science, technology,
engineering, and mathematics careers. Psychological Science, 21, 1051-1057. doi:
10.1177/0956797610377342
Eagly, A. H., & Miller, D. I. (2016). Scientific eminence: Where are the women? Perspectives on
Psychological Science, 11, 899-904. doi: 10.1177/1745691616663918
Fortin, J.-M., & Currie, D. J. (2013). Big science vs. little science: How scientific impact scales
with funding. PLoS One 8(6), e65263. doi: 10.1371/journal.pone.0065263
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, S., Bernstein, M. J., Bocian,
K., Brandt, M. J., Brooks, B.*, Brumbaugh, C. C., Cemalcilar, Z., Chandler, J., Cheong,
W., Davis, W. E., Devos, T., Eisner, M., Frankowska, N., Furrow, D., Galliani, E. M.,
Hasselman, F., Hicks, J. A., Hovermale, J. F., Hunt, S. J., Huntsinger, J. R., IJzerman, H.,
John, M., Joy-Gaba, J. A., Kappes, H. B., Krueger, L. E., Kurtz, J., Levitan, C. A.,
Mallett, R. K., Morris, W. L., Nelson, A. J., Nier, J. A., Packard, G., Pilati, R., Rutchick,
A. M., Schmidt, K., Skorinko, J. L., Smith, R., Steiner, T. G.*, Storbeck, J., Van Swol, L.
M., Thompson, D., van 't Veer, A. E., Vaughn, L. A., Vranka, M., Wichman, A. L.,
Woodzicka, J. A., & Nosek, B. A. (2014). Investigating variation in replicability: A
"many labs" replication project. Social Psychology, 45(3): 142-152. doi: 10.1027/18649335/a000178
Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2012). Bias in peer review. Journal of the
American Society for Information Science and Technology, 64, 2-17. doi:
10.1002/asi.22784
Lupia, A. (2013). What is the value of social science? Challenges for researchers and government
funders. PS: Political Science & Politics, 47, 1-7. doi: 10.1017/S1049096513001613
Merton, R. K. (1973). The normative structure of science. The sociology of science: Theoretical
and empirical investigations. Chicago: University of Chicago Press.
Mongeon, P., Brodeur, C., Beaudry, C., & Larivière, V. (2016). Concentration of research
funding leads to diminishing marginal returns. Research Evaluation, 25, 396-404. doi:
10.1093/reseval/rvw007
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S.,
Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese,
J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., Ishiyama, J.,
Karlan, D., Kraut, A., Lupia, A., Mabry, P., Madon, T. A., Malhotra, N., Mayo-Wilson,
E., McNutt, M., Miguel, E., Levy Paluck, E., Simonsohn, U., Soderberg, C., Spellman, B.
A., Turitto, J., VandenBos, G., Vazire, S., Wagenmakers, E. J., Wilson, R., & Yarkoni, T.
(2015). Promoting an open research culture. Science, 348, 1422-1425. doi:
10.1126/science.aab2374
Okike, K., Hug, K. T., Kocher, M. S., & Leopold, S. S. (2016). Single-blind vs. double-blind
peer review in the setting of author prestige. JAMA, 316(12), 1315. doi:
10.1001/jama.2016.11014
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science, 349(6251), aac4716. doi: 10.1126/science.aac4716
Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias.
Journal of Language Evolution, 1, 163-167.
Ruscio, J. (2016). Taking advantage of citation measures of scholarly impact: Hip hip h index!
Perspectives on Psychological Science, 11, 905-908.
Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2016, November 6). Data management in
psychological science: Specification of the DFG guidelines. Retrieved from
http://www.dgps.de/fileadmin/documents/Empfehlungen/Data_Management_eng.pdf
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open
Science, 3, 160384. doi: 10.1098/rsos.160384
Spellman, B. A. (2015). A short (personal) future history of Revolution 2.0. Perspectives on
Psychological Science, 10, 886-899. doi: 10.1177/1745691615609918
Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific
impact. Science, 342, 468-472. doi: 10.1126/science.1240474
Vazire, S. (2016, June 28). Don’t you know who I am? Retrieved from
http://sometimesimwrong.typepad.com/wrong/2016/06/dont-you-know-who-i-am.html
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in
production of knowledge. Science, 316, 1036-1039. doi:
10.1126/science.1136099