
Introduction to Adaptive Designs: Definitions and Classification



  1. Introduction to Adaptive Designs: Definitions and Classification (AD course for the Philadelphia ASA Chapter)

  2. Recent DIJ Publications by the PhRMA Working Group on Adaptive Designs (Drug Information Journal, Vol. 40, 2006)
  • P. Gallo, M. Krams. Introduction
  • V. Dragalin. Adaptive Designs: Terminology and Classification
  • J. Quinlan, M. Krams. Implementing Adaptive Designs: Logistical and Operational Considerations
  • P. Gallo. Confidentiality and Trial Integrity Issues for Adaptive Designs
  • B. Gaydos, M. Krams, I. Perevozskaya, F. Bretz; Q. Liu, P. Gallo, D. Berry; C. Chuang-Stein, J. Pinheiro, A. Bedding. Adaptive Dose Response Studies
  • J. Maca, S. Bhattacharya, V. Dragalin, P. Gallo, M. Krams. Adaptive Seamless Phase II/III Designs: Background, Operational Aspects, and Examples
  • C. Chuang-Stein, K. Anderson, P. Gallo, S. Collins. Sample Size Re-estimation: A Review and Recommendations

  3. Outline
  • Adaptive design: evolution of the term
  • Adaptive vs. static designs
  • Some adaptive designs were known under different names
  • Formal classification effort:
    • Structure and key elements
    • Classification by objective and phase or stage
  • Adaptive designs "ahead of others" (where effort should be focused):
    • Dose response
    • Seamless Phase II/III
    • Sample size re-estimation

  4. Adaptive vs. Traditional Designs
  • In traditional drug development, most designs used (especially in Phase II and III) are "static": the key elements driving the design are specified in advance:
    • Hypotheses to be tested
    • Population of interest
    • Maximum information to be collected (translated into power, sample size, and detectable treatment effect)
    • Randomization scheme
    • Early stopping rules

  5. Adaptive vs. Traditional Designs (cont.)
  • The "static" design framework:
    • Results observed during the trial are not used to guide its course
    • This setup provides solid inferential procedures
    • But it leaves room for improvement in terms of efficiency
  • Different ways to improve efficiency have been proposed over time, allowing dynamic modification of a trial's design during its course based on accumulating data
  • This led to the formation of a broad group of methods known today as "adaptive designs"

  6. Adaptive vs. Traditional Designs (cont.)
  Definition (from the Executive Summary of the PhRMA Working Group): Adaptive design refers to a clinical study design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.
  • Essential components:
    • Changes are made by design and not on an ad hoc basis
    • Adaptation is a design feature and not a remedy for poor planning

  7. Adaptive Designs: Evolution of the Term
  • Many of the designs we call "adaptive" today existed for quite some time as a "class of their own" (e.g., group-sequential designs, response-adaptive randomization, flexible designs, sample size re-estimation)
  • These designs:
    • Aim at improving some feature of a rigid traditional design (such as cost efficiency, or addressing an ethical dilemma)
    • Share the common feature of mid-course adaptation(s)
  • As the number of such designs grew, so did the confusion…
  • A strong need for a unified, structured approach to terminology emerged

  8. Key Reference: V. Dragalin, "Adaptive Designs: Terminology and Classification". Drug Information Journal (2006), Vol. 40, pp. 425-435
  • First attempt to develop a unified approach to adaptive designs (AD)
  • Reflects discussions within the PhRMA working group on adaptive designs
  • Major source of the AD review to follow
  • Provides:
    • A general definition of adaptive designs
    • Structure (key components)
    • Classification (by objective)
    • Mapping against the drug-development process

  9. Review of "AD: Terminology and Classification": Adaptive Design Definition
  Adaptive design refers to a multistage clinical study design that uses accumulating data to decide how to modify aspects of the study without undermining the validity and integrity of the trial.
  • Validity:
    • Correct statistical inference
    • Ensuring consistency across different parts
    • Minimizing operational bias
  • Integrity:
    • Providing results convincing to the scientific community
    • Adequate pre-planning and blinding procedures

  10. Key Elements of an Adaptive Design
  • Allocation Rule
  • Sampling Rule
  • Stopping Rule
  • Decision Rule
  One or more may be applied during interim looks. Examples:
  • Group-sequential designs (stopping)
  • Response-adaptive allocation (allocation)
  • Sample size reassessment (sampling)
  • Flexible designs (all)

  11. Key Elements of an Adaptive Design (cont.)
  1. Allocation Rules:
  • Determine how patients are assigned to the available treatments at each stage
  • Can be fixed (static) or adaptive (dynamic)
  • Fixed allocation examples:
    • Complete randomization
    • Stratified randomization
    • Restricted randomization
  • Adaptive allocation examples:
    • Covariate-adaptive randomization
    • Response-adaptive randomization
    • Bayesian response-adaptive randomization (Berry, 2001)
    • Drop-the-loser type (Sampson, 2005)
  Ref: Rosenberger and Lachin (2002)
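As a concrete illustration of response-adaptive allocation, here is a minimal Python sketch of the randomized play-the-winner rule RPW(1, 1), one simple urn-based scheme of this type; the treatment labels, response probabilities, and simulation setup are illustrative assumptions, not taken from the slides.

```python
# Minimal sketch of response-adaptive allocation via the RPW(1, 1) urn rule.
import random

def simulate_rpw(p_success, n_patients, seed=0):
    """Simulate an RPW(1, 1) urn for two treatments A and B.

    p_success: assumed true success probabilities, e.g. {"A": 0.7, "B": 0.4}.
    Returns the number of patients allocated to each arm.
    """
    rng = random.Random(seed)
    urn = ["A", "B"]                      # start with one ball per treatment
    allocated = {"A": 0, "B": 0}
    for _ in range(n_patients):
        arm = rng.choice(urn)             # allocation rule: draw a ball at random
        allocated[arm] += 1
        success = rng.random() < p_success[arm]
        if success:                       # success: add a ball of the same arm
            urn.append(arm)
        else:                             # failure: add a ball of the other arm
            urn.append("B" if arm == "A" else "A")
    return allocated

if __name__ == "__main__":
    # Allocation drifts toward the arm with the higher observed success rate.
    print(simulate_rpw({"A": 0.7, "B": 0.4}, n_patients=100))
```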

  12. Key Elements of an Adaptive Design (cont.)
  2. Sampling Rules:
  • How many subjects will be sampled at the next stage?
  • Examples of designs with sampling rules:
    • Blinded sample size re-estimation: adjustment of the sample size based on an estimate of a nuisance parameter
    • Unblinded sample size re-estimation: adjustment of the sample size based on information about the treatment effect
    • Traditional group-sequential design: fixed sampling rule
    • Flexible sample size re-estimation based on conditional power: the probability of rejecting the null hypothesis at the end of the study given first-stage data, calculated for the originally specified treatment effect
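To make the blinded re-estimation idea concrete, here is a minimal sketch that recomputes the per-group sample size of a two-arm trial with a continuous endpoint from an updated (blinded, pooled) estimate of the nuisance parameter sigma, while the assumed treatment effect stays at its design value; all numerical inputs are illustrative assumptions.

```python
# Minimal sketch of blinded sample-size re-estimation from a nuisance parameter.
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.90):
    """Standard two-sample normal approximation to the per-group sample size."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Design stage: planning assumptions (illustrative values)
delta_planned, sigma_planned = 5.0, 12.0
print("planned n/group:      ", n_per_group(delta_planned, sigma_planned))

# Interim stage: the blinded pooled SD turns out larger than planned,
# so the sample size is adjusted while delta is kept at its design value.
sigma_blinded = 15.0
print("re-estimated n/group: ", n_per_group(delta_planned, sigma_blinded))
```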

  13. Key Elements of an Adaptive Design (cont.)
  3. Stopping Rules:
  • Intended to protect patients from an unsafe drug or to expedite the approval of a beneficial treatment
  • Based on satisfying power requirements in a hypothesis-testing framework
  • "Crossing a boundary" methodology:
    • Superiority
    • Harm
    • Futility
  • Examples: classical group-sequential designs (Jennison & Turnbull, 2000)
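A minimal sketch of the "crossing a boundary" logic follows, assuming a two-look design with the well-known O'Brien-Fleming-type efficacy boundaries (overall two-sided alpha = 0.05) and a simple non-binding futility threshold at the interim; the futility value and the example z-statistics are illustrative assumptions.

```python
# Minimal sketch of a group-sequential stopping rule with efficacy and futility boundaries.
def check_stopping(z_stats, efficacy=(2.797, 1.977), futility=(0.0, None)):
    """Walk through the interim looks and report the first boundary crossed."""
    for k, z in enumerate(z_stats):
        if z >= efficacy[k]:
            return f"look {k + 1}: stop for superiority (z = {z:.2f})"
        if futility[k] is not None and z <= futility[k]:
            return f"look {k + 1}: stop for futility (z = {z:.2f})"
    return "no boundary crossed: trial fails to reject H0"

print(check_stopping([1.20, 2.10]))   # continues at look 1, rejects at the final look
print(check_stopping([-0.30]))        # stops early for futility at look 1
```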

  14. Key Elements of an Adaptive Design (cont.)
  4. Decision Rules:
  • Changing test statistics
  • Redesigning multiple endpoints
  • Selecting hypotheses to be tested, or their hierarchy
  • Changing the patient population
  • Choosing the number of interim analyses based on current information
  • For dose-response studies: selecting the next dose assignment

  15. Classification of Adaptive Designs (Ref: V. Dragalin, "Adaptive Designs: Terminology and Classification", DIJ 2006)
  • The key elements of an AD define its structure and describe its algorithm:
    • Allocation Rule
    • Sampling Rule
    • Stopping Rule
    • Decision Rule
  • Another way to classify ADs is by:
    • What their objectives are
    • Applicability to a particular stage of clinical development

  16. Classification of Adaptive Designs (cont.)
  • Single-arm trials
  • Comparing two treatments
  • Comparing more than two treatments
  • Model-based dose-response assessment
  • Seamless Phase II/III

  17. 1. Adaptive Designs for Single-Arm Trials
  • Applicability: Phase I / proof of concept / Phase II
  • Screening trials for a single treatment, used to screen candidate compounds based on a short-term response
  • Employ small sample sizes
  • Hypothesis testing: a minimum acceptable probability of response is pre-specified
  • Allow early stopping due to futility
  • Ex. 1: Two-stage designs (Gehan, 1961)
  • Ex. 2: Bayesian designs (Thall & Simon, 1994)
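To show how a two-stage single-arm screening design with early stopping for futility operates, the sketch below computes the operating characteristics of a generic two-stage rule from binomial probabilities; the specific design parameters (n1 = 10, r1 = 1, n = 29, r = 5) and response rates are illustrative assumptions, not taken from the cited references.

```python
# Minimal sketch of a two-stage single-arm design: enrol n1 patients, stop for
# futility if at most r1 respond, otherwise enrol up to n patients in total and
# declare the compound promising if more than r respond overall.
from scipy.stats import binom

def prob_promising(p, n1, r1, n, r):
    """P(declare the compound promising) when the true response rate is p."""
    n2 = n - n1
    return sum(
        binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)   # P(stage-2 responses > r - x1)
        for x1 in range(r1 + 1, n1 + 1)
    )

def prob_early_stop(p, n1, r1):
    """P(stopping for futility after stage 1)."""
    return binom.cdf(r1, n1, p)

# Illustrative design: uninteresting rate p0 = 0.10, target rate p1 = 0.30
print("type I error      :", round(prob_promising(0.10, 10, 1, 29, 5), 3))
print("power             :", round(prob_promising(0.30, 10, 1, 29, 5), 3))
print("P(early stop | p0):", round(prob_early_stop(0.10, 10, 1), 3))
```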

  18. 1. Adaptive Designs for Single-Arm Trials (cont.)
  • Designs for an entire screening program:
    • Minimize time to identify a promising compound
    • Control Type I and Type II risk for the entire program
  • Ref: Wang & Leung, 1998; Yao & Venkatraman, 1998; Hardwick & Stout, 2002

  19. 2. Adaptive Designs for Comparing Two Treatments
  • Applicability: predominantly Phase III, but some can be used in Phases I-II
  • Fully sequential designs: check boundary crossing after each patient
  • Group-sequential designs: check boundary crossing after a group of patients
  • Adaptive group-sequential designs:
    • Extend the GSD methodology: allow changes in sample size
    • Methodology based on p-value combination tests
  • Flexible designs:
    • A wide spectrum of decision rules can be applied after the 1st stage
    • Recursive application of two-stage combination tests
    • Allow many mid-trial adaptations; not all prespecified (in theory…)
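As an example of the p-value combination methodology used in flexible two-stage designs, here is a minimal sketch of Fisher's product combination test: H0 is rejected at the end if p1 * p2 <= c_alpha, where c_alpha is derived from the chi-square distribution with 4 degrees of freedom. Early stopping boundaries after stage 1 are omitted for brevity, and the stage-wise p-values are illustrative assumptions.

```python
# Minimal sketch of a two-stage Fisher product (p-value combination) test.
import math
from scipy.stats import chi2

def fisher_combination_reject(p1, p2, alpha=0.025):
    """Reject H0 if the product of independent stage-wise p-values crosses c_alpha."""
    # Under H0, -2 * ln(p1 * p2) follows a chi-square distribution with 4 df.
    c_alpha = math.exp(-0.5 * chi2.ppf(1 - alpha, df=4))
    return p1 * p2 <= c_alpha, c_alpha

reject, c = fisher_combination_reject(p1=0.08, p2=0.03)
print(f"c_alpha = {c:.5f}, reject H0: {reject}")
```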

  20. 3. Adaptive Designs for Comparing More Than Two Treatments
  • Applicability: dose-response assessment studies (mostly Phase II; full range I-III)
  • "Late-stage dose-response development":
    • Group-sequential designs (Stallard & Todd, 2003)
    • Flexible designs (Bauer & Kieser, 1999)
  • "Early exploratory development":
    • Dose-escalation studies (Phase I; e.g., CRM)
    • Model-based dose-response assessment:
      • D-optimal designs
      • Bivariate response
      • Penalized (constrained) designs
      • Bayesian dose-finding designs
  • Reviewed in depth in Gaydos et al., 2006

  21. 4. Seamless Phase II/III Designs
  • Combine traditional Phase IIb and Phase III: "learning and confirming" governed by one protocol
  • Can be:
    • Operationally seamless
    • Inferentially seamless
  • Explored in depth in Maca et al., 2006

  22. Dose-Finding AD Example: Continual Reassessment Method (Ex. 1)
  • Bayesian dose-escalation design, designed to converge to the MTD
  • For a predefined set of doses to be studied and a binary response, estimates the dose level (MTD) that yields a particular proportion of responses
  • Updates the MTD distribution after each patient's response
  • The next dose is selected as the one with predicted probability of response closest to the target level
  • The procedure stops after N patients are enrolled

  23. Continual Reassessment Method (cont.)
  [Flowchart: choose an initial estimate of the response distribution and an initial dose → obtain the next patient's observation → update the dose-response model and estimate Prob(response) at each dose → if the maximum N has not been reached, the next patient's dose is the dose with Prob(response) closest to the target level; otherwise stop, and the MTD is the dose with Prob(response) closest to the target level]
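Below is a minimal sketch of one CRM update step, assuming the common one-parameter power model p_i(a) = skeleton_i^exp(a) with a normal prior on a and a grid-based posterior (the nalmefene example on the next slide used a one-parameter logistic model instead); the skeleton, target level, and prior standard deviation are illustrative assumptions.

```python
# Minimal sketch of a CRM posterior update and next-dose selection.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35])   # prior guesses of P(response) at each dose
target = 0.20                                    # target probability of response
a_grid = np.linspace(-4.0, 4.0, 801)             # grid for the model parameter a
prior = np.exp(-0.5 * (a_grid / 1.34) ** 2)      # unnormalized N(0, 1.34^2) prior (assumed)

def next_dose(doses_given, responses):
    """Posterior update of a, then pick the dose with P(response) closest to target."""
    like = np.ones_like(a_grid)
    for d, y in zip(doses_given, responses):
        p = skeleton[d] ** np.exp(a_grid)        # dose-response model at each grid point
        like *= p if y else (1.0 - p)
    post = prior * like
    post /= post.sum()                           # discrete-grid approximation to the posterior
    p_hat = np.array([(post * skeleton[d] ** np.exp(a_grid)).sum()
                      for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target))), p_hat

# After three patients at the lowest dose with one response observed:
dose, p_hat = next_dose(doses_given=[0, 0, 0], responses=[0, 0, 1])
print("estimated P(response) by dose:", np.round(p_hat, 3), "-> next dose index:", dose)
```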

  24. CRM Design Example (1)
  • Post-anesthetic care patients received a single IV dose of 0.25, 0.50, 0.75, or 1.00 μg/kg nalmefene
  • Response was Reversal of Analgesia (ROA) = an increase in pain score of two or more integers above baseline on a 0-10 NRS after nalmefene
  • Patients entered sequentially, starting with the lowest dose
  • The maximum tolerated dose = the dose, among the four studied, with a final mean posterior probability of ROA closest to 0.20 (i.e., a 20% chance of causing reversal)
  • A modified continual reassessment method (iterative Bayesian procedure) selected the dose for each successive patient as that having a mean posterior probability of ROA closest to the preselected target of 0.20
  • A 1-parameter logistic function for the probability of ROA was used to fit the data at each stage
  Ref: Dougherty et al., Anesthesiology (2000)

  25. CRM Example (1) Results
  [Results table not reproduced; footnotes: * including the 1st patient treated; (MTD) = estimated mean posterior probability closest to the 0.20 target; ^ extrapolated]

  26. CRM Example (1) Results
  [Figure: posterior ROA probability by dose, with 95% probability intervals]

  27. Continual Reassessment Method (cont.)
  • Allocation rule: model-based
  • Sampling rule: cohort size
  • Stopping rule: maximum N, or no rule
  • Decision rule: posterior update; select the next dose

  28. Example 2: Comparing Two Treatments with an Adaptive Group-Sequential (Flexible) Design
  Redesigned trial example from Cui et al., 1999:
  • Actual design: group-sequential design
  • Proposed design: sample size re-estimation + combination test statistic
  • Phase III trial for prevention of MI in patients undergoing coronary artery bypass graft surgery
  • N = 600 per treatment group to detect a 50% reduction in incidence (predicted 22% for placebo vs. 11% for drug) with 95% power
  • Interim analysis at 50% of the data:
    • N = 300 per treatment group
    • Observed incidence was ~16.5% for placebo and ~11% for drug
    • Given the observed data, power is 40% to detect a 25% reduction

  29. Example 2 (cont.)
  • The sponsor wanted to increase the 2nd-stage sample size to detect a smaller effect
  • The Type I error rate would be inflated with the usual group-sequential test
  • The trial continued with the planned sample size and ended with a non-significant result
  • Instead, the authors proposed to increase the sample size and use a combination test
  • Simulations were performed:
    • Increase the total sample size to 1400 per treatment group
    • Maintain the Type I error rate; 93% power to detect a 25% reduction
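The sketch below illustrates the fixed-weight (inverse-normal) combination statistic behind this kind of sample size re-estimation: the stage-wise z-statistics are combined with weights determined by the originally planned sample sizes, so the second stage can be enlarged without inflating the Type I error. The numerical values are illustrative assumptions, not the actual trial data.

```python
# Minimal sketch of a fixed-weight combination of stage-wise z-statistics.
import math
from scipy.stats import norm

def combined_z(z1, z2, n1_planned, n_total_planned):
    """Combine independent stage-wise z-statistics with pre-fixed weights."""
    w1 = math.sqrt(n1_planned / n_total_planned)
    w2 = math.sqrt(1 - n1_planned / n_total_planned)
    return w1 * z1 + w2 * z2

# Originally planned: 600 per arm with the interim at 300 per arm -> weights sqrt(0.5).
# z2 is computed from the (possibly enlarged) second-stage data alone; enlarging that
# stage changes the power through z2 but not the null distribution of the combined statistic.
z_star = combined_z(z1=1.10, z2=1.80, n1_planned=300, n_total_planned=600)
print(f"combined z = {z_star:.3f}, one-sided p = {1 - norm.cdf(z_star):.4f}")
```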

  30. Example 2 (cont.)
  • Allocation rule: fixed randomization
  • Sampling rule: the sample size of the next stage depends on results from the previous stage
  • Stopping rule: p-value combination test
  • Decision rule: adapting the alternative hypothesis and the test statistic

  31. Summary: Adaptive Designs Where Attention Needs to Be Focused
  • Dose-ranging studies: B. Gaydos, M. Krams, I. Perevozskaya, F. Bretz; Q. Liu, P. Gallo, D. Berry; C. Chuang-Stein, J. Pinheiro, A. Bedding. Adaptive Dose Response Studies
  • Seamless Phase II/III: J. Maca, S. Bhattacharya, V. Dragalin, P. Gallo, M. Krams. Adaptive Seamless Phase II/III Designs: Background, Operational Aspects, and Examples
  • Sample size re-estimation: C. Chuang-Stein, K. Anderson, P. Gallo, S. Collins. Sample Size Re-estimation: A Review and Recommendations

  32. Conclusions
  • Adaptive designs provide an opportunity to redesign trials based on accumulating data
  • In some situations, they may be more efficient than traditional designs
  • There is no "one-size-fits-all" recommendation for the choice of an AD; in fact, an AD may not be the best solution at all
  • That decision will depend on:
    • Trial objectives
    • Regulatory guidelines
    • Logistical and practical considerations
  • These are determined collectively by clinicians, regulators, statisticians, and data management, which makes implementation a complicated process
  • As a result, implementation may be the biggest challenge
  • However, there are successful examples out there, and that should be encouraging!

  33. Additional References
  • Rosenberger WF, Lachin JM. Randomization in Clinical Trials: Theory and Practice. New York: Wiley; 2002.
  • Berry D. Adaptive trials and Bayesian statistics in drug development. Biopharm Rep. 2001;9:1-11.
  • Sampson AR, Sill MW. Drop-the-losers design: normal case. Biometrical J. 2005;47:257-268.
  • Cui L, Hung HMJ, Wang SJ. Modification of sample size in group sequential clinical trials. Biometrics. 1999;55:853-857.
  • Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton, FL: Chapman and Hall; 2000.
  • Gehan EA. The determination of number of patients in a follow-up trial of a new chemotherapeutic agent. J Chronic Dis. 1961;13:346-353.
  • Wang YG, Leung DHY. An optimal design for screening trials. Biometrics. 1998;54:243-250.
  • Yao TJ, Venkatraman E. Optimal two-stage design for a series of pilot trials of new agents. Biometrics. 1998.
