A New Format for Performing and Publishing Psychological Research
Alexander A. Aarts
Nuenen, the Netherlands
Current research and publication practices contribute to the existence and maintenance of issues like
underpowered studies (Maxwell, 2004), (too much) flexibility in data collection and analysis (Simmons,
Nelson, & Simonsohn, 2011; Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), a lack of
direct replications (cf. Koole & Lakens, 2012; Makel, Plucker, & Hegarty, 2012), the file-drawer problem
and publication bias (Ferguson & Heene, 2012), a less than optimal approach to theory development
and testing (cf. Ferguson & Heene, 2012; Fiedler, Kutzner, & Krueger, 2012), uninterpretable research
literature reviews (Meehl, 1990), and wasted resources as a result of the aforementioned issues (cf.
Ioannidis, 2012).
Incorporating the solutions to these issues into a research and publication format could help
improve psychological science quickly, substantially, and in a structural manner. In what follows, a
research format is described that combines several ways to improve psychological science into a single,
adaptable, and easily adoptable format. It largely builds on the ideas behind Registered Reports
(Chambers, 2013), more specifically pre-registration, high statistical power, and publishing the results
regardless of the outcome. It differs from Registered Reports in not necessarily involving journal review
before the research is executed (cf. van 't Veer & Giner-Sorolla, 2016), although it is compatible with
that format, by including direct replications, and by ensuring that follow-up studies adhere to the same
standards. The total process differs slightly from what has been the norm thus far, but the format could
easily be adopted by any group of researchers who deem it useful.
Performing and Publishing Psychological Research: The Replication Round Robin format
The Replication Round Robin format involves a collaboration of three or more (groups of)
researchers who are working on the same topic/theory/phenomenon, and who will prospectively
replicate each other's work. The format starts when researchers want to rigorously test their ideas,
which may be based on exploratory/pilot studies. At this point, the researchers all pre-register their
studies, and propose to have their studies replicated before submitting and publishing their research
(see Figure 1).
When the results of the replications and the original studies are known, each (group of)
researcher(s) comes up with their own follow-up study on the same theory or phenomenon, which
would also be prospectively replicated in the Replication Round Robin manner. The total process
entails a clear distinction between post-hoc theorizing and theory testing (cf. Wagenmakers et al.,
2012), rounds of theory testing and reformulation (cf. Wallander, 1992), and could be viewed as a
systematic manner of data collection (cf. Chow, 2002).
Figure 1. Diagram of the Replication Round Robin format. Each researcher's exploratory/pilot study
leads to a pre-registered study (studies 1, 2, and 3 by researchers 1, 2, and 3, respectively); each
researcher performs their own study, which is then replicated by the two other researchers (e.g.,
researchers 2 and 3 replicate study 1). The results of all three studies are published in a single paper
("round 1"), and the process is repeated for "round 2", "round 3", and so on.
Benefits of the format for Psychological Science and the Individual Researcher
Combining improvements into a single format
The Replication Round Robin format builds on the solutions offered to the issues mentioned
above, with the aim of improving psychological science quickly, substantially, and in a structural
manner. The format includes pre-registration, highly powered research, and direct replications. The
format is compatible with Registered Reports, which prevent publication bias, but it does not depend
on journals offering Registered Reports to maximize the chances of publishing possible null-results.
This is because possible null-results will be based on highly powered, pre-registered, and directly
replicated studies, and will be presented in a single paper alongside other, possibly significant, results.
It is reasoned that these characteristics will increase the chances of publication of possible null-results.
The format is based on collaboration between a minimum of three (groups of) researchers, but
is adaptable when other researchers want to join the effort and/or start their own group
collaborations. Collaboration can be facilitated by StudySwap (https://osf.io/view/studyswap/), a
recently developed website where researchers can contact each other for interlab replication,
collaboration, and the exchange of research resources. When more researchers join in, decisions can
be made about the optimal number of replications.
Increasing research efficiency
Collaboration might be a good way to increase total statistical power (Open Science
Collaboration, 2017), and can be seen as a way to maximize research efficiency for the individual
researcher. In the past, researchers have published multi-study papers, which often presented several
related and/or conceptual replication studies. However, these reported studies have probably often
been underpowered, and "failed" studies have been left out (cf. Francis, 2012; Schimmack, 2012). It has
been estimated that at least 50% of studies in psychology go unreported (see Bakker, van Dijk, &
Wicherts, 2012). When researchers use the Replication Round Robin format, they collectively also
create a multi-study paper similar to those that have been published in the past, but both significant
and null-results can be seen as informative, and as deserving of inclusion in the final paper, because
the studies are highly powered (cf. LeBel, Berger, Campbell, & Loving, in press).
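As a rough numerical illustration of the power argument (a sketch with arbitrary example numbers,
assuming the Python package statsmodels; the effect size and sample sizes are not taken from any of
the cited studies), the power of a single small study can be contrasted with the power obtained when
an original study and two same-sized direct replications are considered jointly:

    # Sketch only: arbitrary numbers, assuming a true effect of d = 0.4.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    d, alpha, n_per_group = 0.4, 0.05, 30

    # Power of a single lab's two-group study (n = 30 per group).
    single = analysis.power(effect_size=d, nobs1=n_per_group, alpha=alpha)

    # Power when the original study and two same-sized direct replications
    # are pooled, e.g., in a small meta-analysis of one "round".
    pooled = analysis.power(effect_size=d, nobs1=3 * n_per_group, alpha=alpha)

    print(f"single study (n = {n_per_group} per group): power = {single:.2f}")
    print(f"three pooled studies (n = {3 * n_per_group} per group): power = {pooled:.2f}")

Pooling in this simple way assumes the three samples estimate the same underlying effect; in practice,
the results of a round would be combined meta-analytically.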
Furthermore, the results produced by the format can be seen as being of relatively high quality,
and thus as providing especially useful information. This may be particularly relevant for researchers
who value good/open practices, because they may be disproportionately dependent on the quality of
prior research compared to those who simply "follow the rules of the game called psychological
science" (cf. Bakker et al., 2012). If using what can be considered sub-optimal research practices can
lead to finding significant results for just about anything (cf. Simmons et al., 2011), then prior evidence,
reasoning, and theories can all be considered irrelevant. This is not the case for those who value
good/open practices, so these researchers can help their future selves by amassing optimally gathered
information upon which to base future studies. Collaboration using the Replication Round Robin
format could thus maximize research efficiency for the individual researcher by increasing the chances
of publication of all performed studies, and by increasing the informational value of their research.
Contributing to a healthier approach to discovery
It has been argued that the current incentives in psychological science have contributed to the
problematic issues mentioned in the introduction: finding new and counter-intuitive results is what is,
and has been, rewarded (Nosek, Spies, & Motyl, 2012). However, what exactly constitutes a discovery,
or, better formulated, when is a discovery really a discovery? Recent developments in psychological
science have cast doubt on exactly which published findings are true discoveries (cf. Pashler &
Wagenmakers, 2012), and it can be argued that being replicable is necessary for a finding to be
considered a true discovery (cf. LeBel et al., in press). The Replication Round Robin format gives
possible discoveries more backbone by making sure the published results are highly powered,
pre-registered, and replicated. The format leaves room for researchers to individually come up with
their own ideas and studies in each new round, thereby stimulating creativity and discovery, but it
couples this with confirmation and replication. As such, following the format yields a more balanced
approach between discovery and confirmation, and a more collaborative approach to discoveries, with
researchers sharing the possible credit.
References
Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science.
Perspectives on Psychological Science, 7, 543-554
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49, 609-610
Chow, S. L. (2002). Methods in psychological research. In Encyclopedia of Life Support Systems. Oxford,
UK: Eolss Publishers
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and
psychological science’s aversion to the null. Perspectives on Psychological Science, 7, 555-561
Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The long way from α-error control to validity proper:
Problems with a short-sighted false-positive debate. Perspectives on Psychological Science, 7,
661-669
Francis, G. (2012). Too good to be true: Publication bias in two prominent studies from experimental
psychology. Psychonomic Bulletin & Review, 19, 151-156
Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological
Science, 7, 645-654
Koole, S. L., & Lakens, D. (2012). Rewarding replications: A sure and simple way to improve psychological
science. Perspectives on Psychological Science, 7, 608-614
LeBel, E. P., Berger, D., Campbell, L., & Loving, T. J. (in press). Falsifiability is not optional. Journal of
Personality and Social Psychology.
Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychological research: How often do
they really occur? Perspectives on Psychological Science, 7, 537-542
Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes,
consequences, and remedies. Psychological Methods, 9, 147-163
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable.
Psychological Reports, 66, 195-244
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia II: Restructuring incentives and practices to
promote truth over publishability. Perspectives on Psychological Science, 7, 615-631
Open Science Collaboration (2017). Maximizing the reproducibility of your research. In S. O. Lilienfeld &
I. D. Waldman (Eds.), Psychological Science under scrutiny: Recent challenges and proposed
solutions. New York, NY: Wiley
Pashler, H., & Wagenmakers, E.-J. (2012). Editor’s introduction to the special section on replicability in
psychological science: A crisis of confidence? Perspectives on Psychological Science, 7, 528-530
Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles.
Psychological Methods, 17, 551-566
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in
data collection and analysis allows presenting anything as significant. Psychological Science, 22,
1359-1366
Van ‘t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology: A discussion and
suggested template. Journal of Experimental Social Psychology, 67, 2-12
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda
for purely confirmatory research. Perspectives on Psychological Science, 7, 632-638
Wallander, J. L. (1992). Theory-driven research in pediatric psychology: A little bit on why and how.
Journal of Pediatric Psychology, 17, 521-535