PRISMA Harms - The EQUATOR Network

PRISMA Harms: improving harms reporting in systematic reviews.
Zorzela L, Loke YK, Ioannidis JP, Golder S, Santaguida L, Altman D, Moher D, Vohra S.
Evidence-based medicine seeks the best, unbiased evidence to make appropriate decisions to improve
patient care. For optimal care, accurate knowledge of both benefits and harms is needed.
Randomized trials are usually designed with sufficient statistical power to observe differences between
interventions for their primary outcome. However, they are known to be poor at identifying and
reporting harms (2-8). Well-designed, well-conducted, and well-reported RCTs can still paint an incomplete and
potentially biased picture because of their emphasis on efficacy results combined with inadequate
reporting of harms (2-8). A single trial is usually not powered for the assessment of adverse events (AEs);
unless this is explicitly acknowledged, a misconception may be perpetuated that a given intervention is
safe, when its safety is actually unknown (2-8). For example, most newly released drugs add little
benefit over current treatments, so a full investigation of adverse events can be the decisive factor
in choosing between treatment options (2).
Systematic reviews (SRs) compound poor reporting of harms in primary studies by failing to report
harms or doing so inadequately (9-17). Such reviews can be misleading as they do not represent the true
risk-to-benefit ratio of a given treatment (2, 6, 12-15). As many harms are rare and not the primary
outcome of included studies, the search strategy, eligible study designs, screening methods, data
collection, and statistical methods of a harms review may differ from those of an efficacy review.
Reviews exclusively designed to measure harms represent less than 10% of all published reviews (9-12).
In 1994, only five reviews retrieved from the Database of Abstracts of Reviews of Effects (DARE) and the
Cochrane Database of Systematic Reviews (CDSR) were specifically designed to address an unintended
effect of an intervention. This number has increased over the years: in 2010, 104 reviews addressed a
harm as a primary outcome (11), although this may simply reflect the growth in the number of SRs overall,
as the proportion of harms reviews relative to efficacy reviews remained stable at about 5% over this
period (11).
A recent review (10) of SRs from CDSR and DARE identified 296 DARE reviews and only 13 Cochrane
reviews whose sole primary intent was to measure harms of interventions. Even though systematic
reviews increasingly try to consider all outcomes, both beneficial and harmful, data on AEs are often
more fragmented and incomplete, and given more cursory treatment than efficacy data. Reporting
deficiencies identified included a lack of a clear definition of the event reviewed, no specification of
the study designs eligible for inclusion, and no report of the length of follow-up or measurement of any
associated risk factors (10).
The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement was
published in 2009 (1) to offer guidance to review authors, peer reviewers, and journal editors on
reporting standards when publishing a systematic review or meta-analysis of randomized clinical trials
(RCTs). PRISMA helps authors ensure they report in a complete and transparent fashion. Thus far,
PRISMA has focused mainly on effectiveness. We are developing the PRISMA Harms extension to
improve harms reporting in SRs and to encourage a more balanced assessment of benefits and harms of
interventions.
Contact information:
Dr Sunita Vohra, Professor, Department of Pediatrics, University of Alberta, Canada.
Email: [email protected]
References:
1. Moher D, Liberati A, Tetzlaff J, Altman D; The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 2009; 6(7). http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000097
2. Ioannidis JP. Adverse events in randomized trials: neglected, restricted, distorted, and silenced. Arch Intern Med 2009; 169: 1737-39.
3. Singh S, Loke Y. Drug safety assessment in clinical trials: methodological challenges and opportunities. Trials 2012; 13: 138. http://www.trialsjournal.com/content/13/1/138
4. Papanikolaou P, Ioannidis J. Availability of large-scale evidence on specific harms from systematic reviews of randomized trials. Am J Med 2004; 117: 582-589.
5. Pitrou I, Boutron I, Ahmad N, Ravaud P. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med 2009; 169: 1756-61.
6. Loke Y, Derry S. Reporting of adverse drug reactions in randomized controlled trials - a systematic survey. BMC Clin Pharmacol 2001; 1: 3. [PMID: 11591227]
7. Ioannidis J, Lau J. Completeness of safety reporting in randomized trials. JAMA 2001; 285(4): 437-443.
8. Yazici Y. Some concerns about adverse event reporting in randomized clinical trials. Bulletin of the NYU Hospital for Joint Diseases 2008; 66(2): 143-5.
9. Ernst E, Pittler M. Assessment of therapeutic safety in systematic reviews: literature review. BMJ 2001; 323: 546.
10. Zorzela L, Golder S, Liu Y, Pilkington K, Hartling L, et al. Quality of reporting in systematic reviews of adverse events: systematic review. BMJ 2014; 348: f7668.
11. Golder S, Loke Y, Zorzela L. Some improvements are apparent in identifying adverse effects in systematic reviews from 1994 to 2011.