Building Efficient Comparative Effectiveness Trials through Adaptive Designs, Utility Functions, and Accrual Rate Optimization: Finding the Sweet Spot
Byron J. Gajewski, PhD; Scott M. Berry, PhD; Mamatha Pasnoor, MD; Mazen Dimachkie, MD; Laura Herbelin, BS; Richard Barohn, MD

http://en.wikipedia.org/wiki/Bad_(album)

Bayesian Adaptive Designs (BAD)
• No longer "a dream for statisticians only"
• Published not only in biostatistical journals but also in clinical epidemiology and medical journals
• Save time and money and lean toward more ethical studies
• A scientific contribution to the design, implementation, and analysis of comparative effectiveness clinical trials
• PCORI advocates for their use

Comparative Effectiveness
• NASCAR
• Cryptogenic sensory polyneuropathy (CSPN), a non-diabetic neuropathy
  – What treatment for pain is the best? Off-label and approved drugs are used in practice
• We built a BAD with efficiency for finding the best treatment in mind and found three key trial aspects:
  – the Bayesian adaptive design parameters
  – the utility function for weighing endpoints
  – the patient accrual rate
• These three developmental parameters are vital for building adaptive, cost-effective comparative effectiveness designs
• The example trial: Performance Adaptive Investigation of Neuropathic Pain-Comparison of Treatments in Real-Life Situations (PAIN-CONTRoLS)
What has been done on BAD?
• Phase I-III clinical trials
  – dose-finding studies
  – assessment of safety and efficacy in the presence of historical prior information
• In many cases these studies have a functional form that is unique to classical pharmaceutical clinical trials (e.g., a control group or a dose structure)

What has not been done on BAD?
• Comparative effectiveness trials pose a different challenge:
  – there is typically no control group
  – the goal is investigating relative effectiveness
  – there is no dose structure to our problem
• We discuss the unique framework of BAD in this setting

What we address here
• Combine endpoints with a utility function
• Optimize accrual
• Example: PAIN-CONTRoLS
  – endpoints
  – models
  – simulation
• We find the "sweet spot" balancing
  – the average number of patients needed
  – the average length of time to finish the study

PAIN-CONTRoLS
• Five drugs (Lyrica, Cymbalta, Tramadol, Nortriptyline, Gabapentin)
• Multi-site trial (20 sites); accrual of about 4-8 patients/week
• Nmax = 600 (1.5-3.0 years)
• Endpoints:
  – Efficacy: a 50% or better drop in VAS score (baseline to 12 weeks)
  – Quit/Dropout: dropping the treatment after 12 weeks

Combining Endpoints
• Combining two endpoints (Berry et al., 2010)
  – We detail the building of a utility function here
• Scenario: Drug B is more efficacious than Drug A but has a higher quit rate
  – How high would that quit rate have to be for Drug B to be clinically the same as Drug A?

[Figure: utility as a function of the quit rate]

Combining Endpoints
• Utility for efficacy: 1 at 100% efficacy and 0 at 0% efficacy
• Utility for the quit/discontinue endpoint: 0.75 at 0% quit/discontinue, dropping to 0 at 100% quit/discontinue
• Combined utility: U(E, Q) = E + 0.75 - 0.75Q

Statistical Details

Basic analytic examples
• Example 1: one arm
• Example 2: two arms

Example 1: one arm
• Consider a tolerability endpoint for the PAIN-CONTRoLS study and suppose the endpoint is measured immediately after randomization: the subject quits (Qi = 1) or does not (Qi = 0)
  – n = 85 patients (fixed)
  – SQ = ΣQi
  – θ = quit rate (unobserved but random)
  – Δ = maximum tolerated quit rate (fixed and known)
• Stopping rule: stop if P(θ < Δ | SQ) > γ

Example 1: one arm (timeline)
• Period 1 (T1): enroll n1 patients; at the end of Period 1, stop if P(θ < Δ | SQ) > γ (Uniform prior, Binomial likelihood)
• Otherwise move on to Period 2 (T2) and enroll the remaining n2 patients

Example 1: operating characteristics
1. Sampling distribution: Qi | θ0 ~ Bern(θ0), where θ0 is the true quit/discontinue rate.
2. Probability of stopping the trial early at Period 1:
   P1 = Σ_{SQ=0}^{n1} I( ∫_0^Δ [θ^SQ (1-θ)^(n1-SQ) / B(SQ+1, n1-SQ+1)] dθ > γ ) × C(n1, SQ) θ0^SQ (1-θ0)^(n1-SQ),
   where I(x > y) is 1 if x > y and 0 otherwise, the integrand is the Beta(SQ+1, n1-SQ+1) posterior density under the uniform prior, and C(n1, SQ) is the binomial coefficient.
   a. Expected time of the trial: E(T) = P1·T1 + (1-P1)(T1+T2)
   b. Expected sample size of the trial: E(N) = P1·n1 + (1-P1)·85
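As a concrete check on these operating characteristics, the sketch below evaluates P1, E(N), and E(T) exactly for the one-arm example, using the Beta posterior implied by the Uniform-Binomial model above. It is a minimal illustration, assuming SciPy is available; the default parameter values mirror the size-and-cost slide that follows and are otherwise arbitrary.

```python
from scipy.stats import beta, binom

def one_arm_oc(n1, n_total=85, theta0=0.2, delta=0.3, gamma=0.8, T1=28, T2=28):
    """Exact operating characteristics for the one-arm tolerability example.

    Stop at the end of Period 1 if P(theta < delta | SQ) > gamma, where the
    posterior under the Uniform(0, 1) prior is Beta(SQ + 1, n1 - SQ + 1).
    """
    p1 = 0.0
    for sq in range(n1 + 1):
        post_prob = beta.cdf(delta, sq + 1, n1 - sq + 1)  # P(theta < delta | SQ = sq)
        if post_prob > gamma:                             # sq falls in the stopping region
            p1 += binom.pmf(sq, n1, theta0)               # chance of observing sq quits
    e_n = p1 * n1 + (1 - p1) * n_total                    # expected sample size
    e_t = p1 * T1 + (1 - p1) * (T1 + T2)                  # expected duration in days
    return p1, e_n, e_t

for n1 in range(30, 85, 5):                               # n1 = 30, 35, ..., 80
    p1, e_n, e_t = one_arm_oc(n1)
    print(f"n1={n1:2d}  P1={p1:.4f}  E(N)={e_n:5.1f}  E(T)={e_t:5.1f}")
```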
One arm: size and cost (n1 + n2 = 85)
• Virtual response: θ0 = 0.2, Δ = 0.3, and γ = 0.8; n1 = 30, 35, 40, …, 80; T1 = T2 = 28 days
• The probability of stopping early varies from 0.4275 for n1 = 30 up to 0.7621 for n1 = 80
• [Figure: E(N) and E(T) in days as functions of n1]

Example 2: two arms
• Similar notation, but stop if P(θ1 < θ2 | SQ1, SQ2) > γ or if P(θ1 > θ2 | SQ1, SQ2) > γ
• Operating characteristics:
  – a complicated closed form exists (Kawasaki & Miyaoka, 2012)
  – a double sum across SQ1 and SQ2 then allows calculations of E(T) and E(N) similar to Example 1

Gets more complicated fast
• Five arms and two endpoints
• Accrual patterns tend to be random and staggered, not fixed
• These features quickly rule out closed-form analytic solutions
• Therefore, as advocated by Berry et al. (2011), we use simulation

PAIN-CONTRoLS (simulation components)
• Virtual subject response for five arms
• Accrual patterns
• Design
• Adaptive randomization: allocation
• Simulation algorithm

Virtual subject response for five arms
• Null case: θ0e = (.3, .3, .3, .3, .3) and θ0q = (.2, .2, .2, .2, .2)
• Alternative case: θ0e = (.3, .3, .3, .4, .5) and θ0q = (.3, .3, .3, .25, .15)

Accrual patterns
1. ΛT is the mean number of accrued patients per week.
2. Weekly accrual increments N_T - N_{T-1} | Λ_T ~ Poisson(Λ_T), where T = 1, 2, 3, … and N0 = 0. The pattern of ΛT depends on two factors:
   a. the number of sites actively enrolling patients into the study, and
   b. how fast the sites can enroll, which we assume is a constant λ0/2 per site:
      ΛT = λ0 for 0 ≤ T < 2, 2λ0 for 2 ≤ T < 4, 3λ0 for 4 ≤ T < 6, …, 10λ0 for T ≥ 20.

Design
1. Likelihood: SEjT | njT ~ Binomial(njT, θje) and SQjT | njT ~ Binomial(njT, θjq), where θje and θjq are the efficacy and quit rates for arm j.
2. Priors: logit(θje) ~ N(0, 100²) and logit(θjq) ~ N(0, 100²).
3. Posterior distributions are computed by MCMC.
4. Stopping criteria:
   a. a minimum of 200 subjects allocated;
   b. stop the trial early if the posterior probability that some arm has the maximum utility exceeds 0.90;
   c. utility UjT = (θje | SEjT) + 0.75 - 0.75(θjq | SQjT), with maximum utility Umax,T = max(U1T, U2T, U3T, U4T, U5T);
   d. evaluation criterion: the trial is a success if the posterior probability that the best arm has the maximum utility exceeds 0.90.
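To make the utility-based stopping rule concrete, here is a minimal sketch of how the interim decision in step 4 might be computed. It substitutes conjugate Beta(1, 1) posteriors for the logit-normal priors fit by MCMC, purely to keep the example self-contained, and the interim counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2015)

def prob_each_arm_is_best(n, s_e, s_q, draws=10_000):
    """Monte Carlo estimate of Pr(U_j = U_max) for each arm, using
    Beta(1, 1) posteriors as a stand-in for the logit-normal/MCMC model."""
    n, s_e, s_q = (np.asarray(x) for x in (n, s_e, s_q))
    theta_e = rng.beta(s_e + 1, n - s_e + 1, size=(draws, len(n)))  # efficacy rates
    theta_q = rng.beta(s_q + 1, n - s_q + 1, size=(draws, len(n)))  # quit rates
    u = theta_e + 0.75 - 0.75 * theta_q      # utility U(E, Q) = E + 0.75 - 0.75*Q
    best = np.argmax(u, axis=1)              # index of the best arm in each draw
    return np.bincount(best, minlength=len(n)) / draws

# Hypothetical interim counts for the five arms
n   = np.array([52, 48, 50, 55, 45])   # subjects with 12-week data
s_e = np.array([15, 14, 16, 22, 23])   # responders (50% or better VAS drop)
s_q = np.array([10, 11, 10,  9,  6])   # quit/discontinued

p_best = prob_each_arm_is_best(n, s_e, s_q)
print("Pr(arm has max utility):", np.round(p_best, 3))
# Stopping rule 4a-4b (sketch): at least 200 subjects and one arm clearly best
print("Stop early?", n.sum() >= 200 and p_best.max() > 0.90)
```

The same posterior draws of UjT can also feed the adaptive allocation weights introduced on the next slide.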
Adaptive randomization: allocation
• Allocation weights: Vj* ∝ sqrt( Pr(UjT = Umax,T) × Var(UjT) / (njT + 1) )
• The Vj* are normalized across the five arms to give the randomization probabilities

Sweet Spot Algorithm (SSA)
• Step 0: Set b = 0.
• Step 1: Set b = b + 1.
• Step 2: Simulate the initial observed data.
• Step 3: Estimate the posterior parameters via simulation and calculate the stopping rule and the possible next allocation.
• Step 4: Repeat Steps 2 and 3 after collecting four more weeks of data.
• Step 5: Evaluate all of the data after collecting all of the endpoints.
• Step 6: Go to Step 1 unless b = 100, then stop.

Results

Pmax, N, and T predictive distributions, "Alternative Case" (Λ20 = 8)
[Figure: histograms of Pmax, N, and T across the simulated trials]

Alternative Case
• Success:
  – 95% of the trials had early success
  – 1% had late success (the trial goes to the maximum sample size of 600)
  – 4% were inconclusive
• Sample size:
  – E(N) = 302.2 subjects
  – 80% of the trials used 362 subjects or fewer
• Length:
  – E(T) = 61.4 weeks
  – the longest trial took 100 weeks

Expected size, time, and cost for five arms (effect scenario)
• E(N) = 7.4466·Λ20 + 241.53
• E(T) = 254.82·(Λ20)^(-0.694)
• E(Cost) = 7.4466·Λ20 + 241.53 + 1.25·[254.82·(Λ20)^(-0.694)]
• Taking the derivative with respect to Λ20 and solving gives the sweet spot:
  Λ20 = (1.25 × 254.82 × 0.694 / 7.4466)^(1/1.694) = 7.4
  (a numerical check of this calculation appears after the null-scenario results below)
• [Figure: E(N), E(T) in weeks, and E(Cost) as functions of Λ20]

If we accrue faster will we get less efficacy per unit?
• No!
• Across the accrual patterns, the proportion of successful trials is between 0.96 and 1.00
• The margin of error, if the true success rate is about 0.98, is ±1.96·sqrt(0.98 × 0.02/100) = ±0.0274

Expected size, time, and cost for five arms (null scenario)
• E(N) = 0.3863·Λ20 + 586.5
• E(T) = 584.27·(Λ20)^(-0.879)
• [Figure: E(N) and E(T) in weeks as functions of Λ20]
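The sweet-spot calculation above can be checked directly from the fitted curves. The sketch below uses the effect-scenario fits and the cost weight of 1.25 on E(T) exactly as given on the slide; SciPy and the 1-12 search range (the plotted Λ20 range) are my own choices for illustration.

```python
from scipy.optimize import minimize_scalar

# Fitted curves from the effect scenario (L = Lambda20, steady-state accrual/week)
a, b = 7.4466, 241.53     # E(N) = a*L + b
c, p = 254.82, 0.694      # E(T) = c * L**(-p)
w = 1.25                  # weight converting E(T) weeks into cost units, per the slide

def expected_cost(L):
    """E(Cost) = E(N) + w * E(T)."""
    return a * L + b + w * c * L ** (-p)

# Closed form: set dE(Cost)/dL = a - w*c*p*L**(-p-1) = 0 and solve for L
L_star = (w * c * p / a) ** (1.0 / (1.0 + p))
print(f"closed-form sweet spot: Lambda20 = {L_star:.1f}")   # about 7.4, as on the slide

# Numerical check over the plotted accrual range
res = minimize_scalar(expected_cost, bounds=(1, 12), method="bounded")
print(f"numerical sweet spot:   Lambda20 = {res.x:.1f}")
```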
Discussion: relative to a fixed trial
• Classical framework: a fixed sample size, with varying knowledge gained about endpoint efficacy
• We "flip" the approach to clinical trial design:
  – the effect we learn about is fixed
  – while the sample size varies depending on the data
  – the BAD approach is a proxy for the scientific knowledge

Discussion: various extensions of SSA
• Vary the number of arms (say 2, 6, or more)
• Use one endpoint instead of two
• Change the maximum sample size from 600 to something higher
• Change to a minimal-efficacy or a futility stopping rule
• The accrual pattern could change

Discussion: accrual
• Accrual starts off small and grows (Anisimov, 2011)
• Adaptive accrual => accrual prediction models (e.g., Anisimov & Fedorov, 2007; Gajewski, Simon, and Carlson, 2008; Zhang and Long, 2010; Anisimov, 2011) => accrual patterns are updated in real time
• For example, sites tend to overpromise and under-deliver (e.g., Breau, 2006)

Discussion: generalize
• SSA algorithm extensions:
  – time-to-event endpoints
  – ordinal or continuous endpoints, or a mix of the two
  – dynamic linear models for dose-finding studies
  – various types of hierarchical models

Discussion
• Is the sweet spot the same for all drugs?
  – Subjects may cost differently by drug
• Generalizability to other Bayesian adaptive clinical trials: the adaptation rule, the utility function, and the accrual rate should all be considered as parameters for optimizing the design of comparative effectiveness research

Benefits of BAD: "B.A. Baracus" trial design
• Hard work up-front but worth it later
• Fit
• Efficient
• Very good at getting answers
• Bad A**
http://www.a-team-inside.com/ba/bosco-b-a-baracus

Acknowledgements
• Frontiers: The Heartland Institute for Clinical and Translational Research, CTSA UL1TR000001 (Barohn & Aaronson)
• Department of Biostatistics (e.g., matching Frontiers effort) (Mayo)

QUESTIONS?