Author’s response to reviews
Title: Why prudence is needed when interpreting articles reporting clinical trials results in
mental health
Authors:
Rafael Dal-Re ([email protected])
Julio Bobes ([email protected])
Pim Cuijpers ([email protected])
Version: 1 Date: 27 Feb 2017
Author’s response to reviews:
Tobias Krieger, PhD
Trials
26 February 2017
Dear Dr Krieger
Please find attached the revised version of our paper entitled "Why prudence is needed when interpreting articles reporting clinical trials results in mental health" (Ref No TRLS-D-1600887).
We thank you for the opportunity to submit this revised version. We have addressed the reviewers' and editor's comments and suggestions, and believe the manuscript has been improved as a result. Below you will find our answers to the editor and reviewers in red. Highlighted in yellow are the sentences we have added to the revised manuscript to address their suggestions and comments.
We hope you will find this version suitable for publication in Trials.
Sincerely yours
Rafael Dal-Ré MD, PhD, MPH
Below, I have some additional comments, some of which are noted by the reviewers as well:
- Page 2: The first paragraph "It is well known" needs more references.
Following the editor's suggestion, we have added two references to support this statement.
- Page 3-4: The arguments of sample size and publication bias are mixed together. Disentangling these two arguments would be very helpful for readers.
To address this suggestion, we have moved one sentence ("The second deficiency…[7,8]") so that all information regarding sample size and effect size appears in the same paragraph (the first in the section Clinical trials on mental health interventions and reporting bias). The next paragraph starts with the sentence "The second deficiency…", which is followed by an additional sentence that we have added to address one reviewer's comment (see below).
To help readers, we have added a short sentence as the first of the next paragraph, which now reads as follows (page 4): "Publication bias is common in mental health trials. Thus, in clinical trials on treatments…"
- Page 7: "non-regulated intervention" --> Please specify what you mean with this term. For
example, there are regulations for psychotherapy and psychiatry trials in some countries (namely
GCP ICH).
The sentence on page 7 reads as follows: "As a non-regulated intervention, psychotherapy trials are especially prone to this fact."
When discussing non-regulated interventions, we are referring to those interventions that are not under EMA or FDA regulations such as, for instance, physiotherapy, surgery… and psychotherapy. So, if an investigator wants to conduct a trial comparing two psychotherapy treatments, she will conduct her trial after gaining REC approval, but with no intervention from any regulatory agency. By contrast, if she wants to compare a drug with a psychotherapy treatment, she will have to obtain REC and regulatory agency approval, but this is because the trial includes a regulated intervention, a drug.
We therefore believe we do not need to explain the above-mentioned statement.
Reviewer #3: The article highlights some interesting points surrounding publication bias and
reporting bias in psychotherapy trials.
The issue of transparency in clinical trials is key and the authors should be commended for
exploring this.
We thank the reviewer for this comment.
The main point on which I would like the article to improve is:
1. To expand on the issue of the variations in interpretation and analytic mode of outcome
measures. The authors mention this in the introduction but do not provide an example or explore
this further.
We thank the reviewer for this suggestion. To address it, we have added a sentence after the one mentioned by the reviewer. Both sentences now read as follows (page 4): "The second deficiency refers to the use of outcomes requiring a certain degree of interpretation, such as psychometric scales: since these require a subjective assessment, they are prone to remarkable variability depending on the analytical option implemented [7,8]. In these cases, in a trial with a small sample size, the magnitude of the estimated effect could vary (the so-called 'vibration effect') and will depend on factors such as the principal endpoint (psychometric scale) chosen, the use of adjustments for certain confounders and the availability of alternative statistical approaches [12]."
Minor comments:
2. SmPCs are only provided for drugs. Perhaps the authors could mention the greater risk of non-drug interventions.
Our manuscript only addresses two types of treatment, medicines and psychotherapy. The reviewer is right in noting that we could mention the risks for psychotherapy, but we believe that since this is a short article, focused on selective outcome reporting, it is enough to mention that adverse reactions to psychotherapy are rarely reported. We do so on page 4, where we have also added data to address comment #3, below: "Safety information is rarely reported in both drug and psychological intervention trials. Outcome reporting bias of key safety results (serious adverse events, suicides, deaths) is common in trials on antidepressants and antipsychotics [14]: when compared with the data included in the corresponding clinical trial registries, only 57%, 38% and 47% of serious adverse events, cases of deaths or suicide were reported in articles, respectively [18]. In psychotherapy the picture is remarkably worse, since harms of the interventions are rarely reported: possible or actual adverse event reporting is nine to twenty times more likely in pharmacotherapy trials than in psychotherapy trials [19]."
3. The authors could mention the changes to guidelines regarding mandatory reporting of adverse
events for certain trial registries, e.g. clinicaltrials.gov
We thank the reviewer for this comment. To address it, we have provided data showing that registries are a much better source of information than articles. We have added the following sentence (pages 4-5): "Outcome reporting bias of key safety results (serious adverse events, suicides, deaths) is common in trials on antidepressants and antipsychotics [14]: when compared with the data included in the corresponding clinical trial registries, only 57%, 38% and 47% of serious adverse events, cases of deaths or suicide were reported in articles, respectively [18]."
4. Grammar should be reviewed. The minimal use of commas makes some sentences difficult to read.
We thank the reviewer for this comment. We have reviewed the whole manuscript accordingly.
Reviewer #4: Congratulations on putting together this commentary. It was a very good read. I'm
glad to see that efforts are made to change the current research state of things.
We thank the reviewer for this comment.
General comments:
This is a commentary that well highlights the importance of remaining cautious when reading the
results of clinical trials and meta-analyses. This is a very meaningful and important commentary
that will surely help toward changes in the scientific field. I enjoyed reading the manuscript; it is
highly informative and it addresses key issues in research that are hindering our knowledge of
current treatments.
We thank the reviewer for this comment.
My general comment would be to provide the reader with more practical information on how to
overcome these biases in research, for instance with suggestions on where protocols could (and
should) be registered (e.g., PROSPERO database).
We thank the reviewer for this suggestion, and have addressed it as follows (page 5): "There are two major methods to reduce reporting bias in trial results. One is to register the trial, systematic review or meta-analysis in a public registry before the start of the trial or review. Trials can be registered in a number of registries that accept trials on both medicines and psychotherapy, such as, for instance, ClinicalTrials.gov [9] or ISRCTN [22], whereas systematic reviews can be registered on PROSPERO [23]."
More specific comments:
Sentence lines 27-32 – how does this add to the argument? By saying this do you mean that
because the p-value is marginal, the positive results are more likely to have been “fished”?
We think the reviewer refers to the following sentence (page 3): "More recently, an unusually high prevalence of psychology studies published with a p value just below 0.05 has been observed: the number of studies with a p value between 0.045 and 0.05 is much higher than expected [5]." We are not conveying a specific interpretation of this fact, but it is somewhat striking to observe what is happening in psychology studies. We could agree with your interpretation… but there are others: for example, that investigators play with the results and analyses just to obtain a result very close to p<0.05.
Lines 34-36 - Here spell out a bit more what this could imply – that the wrong p-value was
reported to match the hypothesis? Or that there were computation mistakes?
We think the reviewer refers to: "Furthermore, in 9% of psychology trials reported p values are inconsistent with the reported statistic [6]." Again, we are not conveying a specific interpretation of this finding, but it is somewhat striking to observe what is happening in psychology studies. The author of the research asked whether these were "strategic mistakes?" and stated that "generally speaking we cannot know which part is erroneous -the p value, the test-statistic or perhaps the degrees of freedom".
Lines 45-52 – Yes, a small sample size can lead to false positive results, although it is also very
likely that it can lead to a lack of power to detect a true effect. This should be highlighted, and
the likelihood of both outcomes (false positives or false negatives) should be compared.
We think the reviewer refers to: "The first deficiency relates to the low number of trial participants: the low statistical power associated with small numbers of participants facilitates the finding of false positive results and exaggerated effect sizes [7,8]."
We thank the reviewer for this comment. However, we honestly believe that although the reviewer is correct about the types of error a small sample size could lead to, the two he/she has highlighted and that are not addressed in the article (i.e., failure to detect a true effect, false negatives) are not so important if we bear in mind that many negative trials are not reported (the principal reason behind outcome reporting bias and reporting bias) and that the objective of the article is to warn about the effects (positive effects that are sometimes false positives) described in mental health interventions. In other words, we think that many clinicians (physicians and psychotherapists) could be making decisions based on false positive trials. We are not so concerned about false negative trials in the context described in the article. So, we think we should not add any reflection on false negative trials, and should ensure readers focus their attention on false positive trials, the ones that are published.
Lines 1-22: Please explain what is meant by appropriately registered vs. not appropriately
registered. What are your criteria for something to be appropriately registered?
We thank the reviewer very much for this comment. We think the reviewer refers to the first paragraph on page 6. We have addressed this by editing the sentences as follows: "Unfortunately, the rate of registered trials is low (only 25% of psychiatry journals ask for preregistration of trials to have the results published [20]) and, in many cases, of poor quality. Thus, among the top 5 psychiatry journals that also ask for pre-registration of trials, only 33% of 181 trials published in 2009-2013 were appropriately registered before the onset of participant enrollment, whereas only 14% had, in addition, no outcome reporting bias [30]. In another analysis, among 170 trials on depression published between 2011 and 2013, only 33% and 20% of trials assessing drugs and cognitive behavior therapy were appropriately registered (i.e., before study start and reporting fully specified outcomes), respectively [31]. With regard to psychotherapy, a recent systematic review of 112 randomized clinical trials published between 2010 and 2014 showed that only 18% were appropriately prospectively registered and only 5% were free of outcome reporting bias [32]."
First paragraph on page 7 – You could suggest that one way to tackle this publication bias in the
future would be to include a procedure for reviewers where they have to read the trial manuscript
along with its original protocol.
This reviewer is completely right. However, since protocols are not usually available, investigators have studied whether reviewers would support checking the information included in the manuscript against that in the corresponding registry: 93% of surveyed reviewers believed this is not a task for reviewers (Mathieu et al. PLoS One 2013, 8: e59910). This is a task for editors, but it seems they are not willing to do so.
In any case, we believe this reflection is somewhat outside the scope of our article.
In the "concluding remarks" section: It would be interesting to mention some future ideal avenue, where all reviewers/journals would have access to pivotal trial results, just as regulatory agencies do.
Again, our article is focused on flagging an issue for clinicians and providing some clues to help them in their professional practice, so we believe we should not comment on this. In addition, we should emphasize that public access to the results of all pivotal trials on pharmacotherapy is legally required. This is not the case for psychotherapy trials (except for NIH-funded trials from 2017 onwards). We think this has been clearly discussed in our article.