UE11 – Parcours recherche clinique – cours n°4 – 04/11/2015
Agnès Dechartes – [email protected]
RTs: Maximilian Schwarz and Prune Sterckx

Reporting and other designs

Plan:
I. Reporting
   A. Published research articles
   B. Poor reporting
   C. The CONSORT statement
II. Other designs
   A. Cross-over trial
   B. Equivalence or non-inferiority trial
   C. Cluster trial

I. Reporting

A. Published research articles

It is very important to have all the elements well reported in the article or in the protocol. This is an essential part of the research, because it allows the reader to understand what was done and what was found. If the results are not well reported, we cannot assess the risk of bias, and so we cannot tell whether the results apply to our practice or not.

Another important aspect is reproducibility: another team should be able to replicate our results by using the same methods. To allow reproducibility, we need total transparency and a full report.

Nowadays there are often several studies on the same subject, so we need to summarize the information contained in all of them. This is the aim of systematic reviews and meta-analyses:
- Systematic review: a synthesis of all the available literature concerning a clinical question.
- Meta-analysis: a statistical method used to combine several results.
To include a study in a systematic review or a meta-analysis, its results need to be well reported, in a clear, detailed, complete and unselective way.

B. Poor reporting

There are several types of poor reporting:

Non-reporting of the trial (non-publication)
This concerns 1 out of 2 trials. Trials with statistically significant results are more likely to be published than trials with negative results, so a systematic review will tend to overestimate the global effect.

Non-reporting of important information
Sometimes important information is not reported, for example the method of randomization. If the method of randomization is not reported, we cannot assess the risk of bias.

Selective reporting
Within a trial, outcomes with positive results are more likely to be reported than outcomes with negative results.

Incomplete reporting
Results are incompletely reported in the article, so they cannot be included in a meta-analysis. For example, for a binary outcome such as mortality at one month, a meta-analysis needs the number of events and the number of patients analysed in each group. For a continuous outcome such as pain, it needs the mean, the standard deviation and the number of patients analysed in each group (see the sketch at the end of this section).

Poor reporting concerns all types of studies, not only randomized controlled trials (RCTs). There are many methodological reviews assessing the quality of reporting:
- in a particular disease
- for a particular treatment
- in a particular journal

Example of such a methodological review: a review published in The Lancet assessed all the RCTs indexed in PubMed in December 2000. It identified 519 RCTs; the method of randomization was not reported in 79% of the trials, the method of allocation concealment was not reported in 82%, and primary outcomes were defined in only 45% of the trials.
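To make this concrete, here is a minimal sketch (not part of the lecture; the function names and all numbers are invented for illustration) of the effect estimates a meta-analyst would typically compute from those reported summaries: a risk ratio with its 95% confidence interval for a binary outcome, and a difference in means with an approximate 95% confidence interval for a continuous outcome.

import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio (group A vs group B) with a 95% CI, from the per-group
    event counts and numbers analysed that an article must report for a
    binary outcome (e.g. death at one month)."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se_log_rr = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lo, hi)

def mean_difference_ci(mean_a, sd_a, n_a, mean_b, sd_b, n_b, z=1.96):
    """Difference in means with an approximate 95% CI, from the mean, SD
    and number analysed in each group that an article must report for a
    continuous outcome (e.g. a pain score)."""
    diff = mean_a - mean_b
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical numbers, for illustration only
print(risk_ratio_ci(events_a=12, n_a=100, events_b=20, n_b=100))
print(mean_difference_ci(mean_a=3.1, sd_a=1.2, n_a=50,
                         mean_b=4.0, sd_b=1.4, n_b=50))

If any of these per-group numbers is missing from the article, neither estimate can be computed and the trial cannot contribute to the pooled result.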
C. The CONSORT statement

There are now more and more initiatives to improve the quality of reporting. One of them is the CONSORT statement, which was created by methodologists, editors and investigators to improve reporting in randomized controlled trials, by helping authors, peer reviewers and editors to know which information has to be reported in the article. It consists of:
- a checklist of all the elements that should be reported in an article;
- a flow chart (the one we saw in the previous lecture is based on the CONSORT statement) that describes very clearly the flow of participants, from the assessment of eligibility to the statistical analysis;
- an explanation manuscript explaining why it is important to report each piece of information.

A template of the flow chart can be downloaded at www.consort-statement.org. It has the following structure (reconstructed here as a list):
Enrollment:
- Assessed for eligibility (n= )
- Excluded (n= ): not meeting inclusion criteria (n= ), declined to participate (n= ), other reasons (n= )
- Randomized (n= )
Allocation (for each arm):
- Allocated to intervention (n= )
- Received allocated intervention (n= )
- Did not receive allocated intervention (give reasons) (n= )
Follow-up (for each arm):
- Lost to follow-up (give reasons) (n= )
- Discontinued intervention (give reasons) (n= )
Analysis (for each arm):
- Analysed (n= )
- Excluded from analysis (give reasons) (n= )

The CONSORT statement was first published in 1996 and revised in 2001 and 2010. Many medical journals use it, and investigators are often asked to submit the CONSORT checklist with their manuscript in order to be published.

Thanks to the CONSORT statement, reporting has improved. A team measured this improvement by comparing reporting in PubMed in 2000 and 2006. There was indeed an improvement in almost every aspect, but it is still not perfect: in 2006 the method used to generate the allocation sequence was reported in only 34% of published articles, and allocation concealment in only 25%. They also showed that journals endorsing the CONSORT statement had better reporting than journals that did not.

The CONSORT statement was initially developed for published articles of trials assessing a therapeutic intervention, i.e. RCTs. There are now extensions for cluster trials, equivalence trials, non-pharmacological interventions and safety. In addition, each type of study has its own guideline to improve reporting in published articles:
- for diagnostic studies: the STARD statement
- for observational studies: the STROBE statement
- for systematic reviews and meta-analyses: the PRISMA statement

Good reporting does not mean that the study is of good quality; it only allows the reader to understand where the problems are.

II. Other designs

Until now we have seen the most common situation: the randomized controlled trial with individual randomization, parallel groups and a superiority hypothesis. Other designs exist to answer other types of questions:
- cross-over trial (≠ parallel groups)
- non-inferiority or equivalence trial (≠ superiority trial)
- cluster randomization (≠ individual randomization)

A. Cross-over trial

In a parallel group trial, after randomization, one group of patients receives treatment A and the other group receives treatment B. In a cross-over trial, every patient receives both treatments being compared. The major interest is that each patient is his own control. After randomization, patients from group 1 receive treatment A and patients from group 2 receive treatment B, and then they switch after a wash-out period. The wash-out period ensures that the effect observed during the second period is not due to the treatment received during the first period.

Necessary conditions:
- It has to be a chronic disease, with a condition that is stable over time, to distinguish the effect of the treatment from the natural history of the disease.
- Both treatments must have a temporary effect (not surgery, which has a permanent effect).
- The outcome must be measurable again in the second period even if it occurred in the first period (e.g. mortality cannot be measured twice).
- A wash-out period is necessary to avoid the "carry-over" effect; its duration depends on the treatments.

Advantages:
- Increased power: the number of patients needed is divided by two.
- Perfect comparability between groups.

Analysis:
1) We first need to check that there has been no carry-over effect; otherwise we can only analyse the first period, which means a very large loss of power.
2) The analysis is an analysis for paired data (see the sketch after this section).

Limits:
- The use of cross-over trials is limited to symptomatic treatments in chronic diseases.
- Carry-over effect.
- Missing data (patients come for the first period but not for the second, ...).
- Learning effect: the patient answers the same questionnaire twice; he knows the questions and may change his answers.
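As an illustration of point 2) above, here is a minimal sketch of a paired analysis on hypothetical cross-over data (the pain scores below are invented): each patient contributes one measurement under treatment A and one under treatment B, and the within-patient differences are tested with a paired t-test instead of comparing two independent groups.

from scipy import stats

# Hypothetical pain scores for the same 8 patients under each treatment
pain_under_A = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 5.2, 4.0]
pain_under_B = [3.5, 3.9, 4.2, 4.0, 3.1, 4.4, 4.6, 3.6]

# Paired analysis: test whether the mean within-patient difference is zero,
# rather than comparing two independent groups as in a parallel trial.
t_stat, p_value = stats.ttest_rel(pain_under_A, pain_under_B)

differences = [a - b for a, b in zip(pain_under_A, pain_under_B)]
print(f"mean within-patient difference: {sum(differences)/len(differences):.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

Because each patient is compared with himself, the between-patient variability is removed from the comparison, which is where the gain in power and the halving of the sample size come from.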
B. Non-inferiority and equivalence trials

Sometimes we need to determine whether two treatments are equivalent in terms of efficacy: for example when one treatment is less expensive, better tolerated, or easier to use than the other (pills vs injections). Because two treatments can never be exactly equal, we talk about "equivalence": the difference between them must not be wider than a predefined limit.

We estimate the difference between the two groups with a 95% confidence interval. In a superiority trial, we have to be sure that the 95% confidence interval of the difference between the two groups does not include 0. In an equivalence trial, we set a margin (a limit): as long as the confidence interval stays within this margin, we consider the treatments equivalent. In a non-inferiority trial there is only one limit, on the side of inferiority: the new treatment is allowed to be superior to the usual one, as long as it is not worse than the usual one by more than the margin (see the sketch after this section).

Setting the margin is very difficult:
- It is often driven by feasibility: the smaller the margin, the larger the sample size.
- Is a 10%, 5% or 2% difference acceptable as equivalent in the interest of patients? Is it ethical to consider that 1% more deaths is non-inferior?

The biocreep effect: the control treatment should be the one that has proven its efficacy compared with placebo. Otherwise there is a risk of biocreep: treatment A is the control treatment for treatment B, which is considered non-inferior even though there is a slight difference. If treatment B then becomes the reference for treatment C, and so on, treatment F will be considered non-inferior to treatment E, but compared with treatment A it is clearly inferior, and maybe not even superior to placebo anymore.
→ Treatment A should always remain the reference.
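To make the decision rule concrete, here is a minimal hypothetical sketch of a non-inferiority check on a harmful binary outcome: it estimates the 95% confidence interval of the risk difference (new minus reference) and compares its upper bound with a pre-specified margin. The 5-percentage-point margin and the counts are invented for illustration.

import math

def non_inferiority_check(events_new, n_new, events_ref, n_ref,
                          margin=0.05, z=1.96):
    """Non-inferiority check for a harmful binary outcome (e.g. death):
    the new treatment is declared non-inferior if the upper bound of the
    95% CI of (risk_new - risk_ref) stays below the pre-specified margin."""
    risk_new, risk_ref = events_new / n_new, events_ref / n_ref
    diff = risk_new - risk_ref
    se = math.sqrt(risk_new * (1 - risk_new) / n_new
                   + risk_ref * (1 - risk_ref) / n_ref)
    lower, upper = diff - z * se, diff + z * se
    return diff, (lower, upper), upper < margin

# Hypothetical counts, for illustration only
diff, ci, non_inferior = non_inferiority_check(events_new=42, n_new=500,
                                               events_ref=40, n_ref=500)
print(f"risk difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"non-inferior: {non_inferior}")

For an equivalence trial, the same interval would have to lie entirely within (-margin, +margin), so both bounds would be checked.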
C. Cluster trials

A cluster trial is a trial in which we do not randomize individual patients but groups of patients, called "clusters". We can for example randomize families (where the risk of contamination between members is high), hospitals, ...

Advantages:
- Useful when an intervention can only be administered to a whole group.
- Avoids contamination between groups.

Consequences from a statistical point of view: within a cluster, data are more correlated than with individual randomization. This has to be taken into account both when planning the trial (sample size calculation) and when analysing the data (see the sketch below).
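One standard way of taking this correlation into account at the planning stage (an illustrative addition, not detailed in the lecture) is to inflate the sample size computed for individual randomization by a design effect of 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal sketch with invented numbers:

import math

def cluster_sample_size(n_individual, cluster_size, icc):
    """Inflate a sample size computed for individual randomization by the
    design effect 1 + (m - 1) * ICC, where m is the average cluster size
    and ICC is the intracluster correlation coefficient."""
    design_effect = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * design_effect)
    n_clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return design_effect, n_total, n_clusters_per_arm

# Hypothetical planning numbers: 400 patients would be needed with
# individual randomization, clusters of 20 patients, ICC = 0.05
deff, n_total, k = cluster_sample_size(n_individual=400, cluster_size=20, icc=0.05)
print(f"design effect = {deff:.2f}, total patients = {n_total}, "
      f"clusters per arm = {k}")

The larger the clusters or the intracluster correlation, the larger the design effect, and the more patients are needed compared with individual randomization.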