
Clinical Trials of Predictive Medicine New Challenges and Paradigms


Presentation Transcript


  1. Clinical Trials of Predictive Medicine: New Challenges and Paradigms Richard Simon, D.Sc., Chief, Biometric Research Branch, National Cancer Institute http://brb.nci.nih.gov

  2. Biometric Research Branch Website brb.nci.nih.gov • PowerPoint presentations • Reprints • BRB-ArrayTools software • Data archive • Web based sample size planning • Clinical trials • Development of gene expression based predictive classifiers

  3. Why We Need Prognostic & Predictive Biomarkers • Most cancer patients don’t benefit from the systemic treatments they receive • Being able to predict which patients are likely to benefit would • Benefit patients • Control medical costs • Improve the success rate of clinical drug development

  4. Predictive biomarkers • Measured before treatment to identify who will or will not benefit from a particular treatment • ER, HER2, KRAS • Prognostic biomarkers • Measured before treatment to indicate long-term outcome for patients untreated or receiving standard treatment • Used to identify who does not require more intensive treatment • OncotypeDx

  5. Prognostic and Predictive Biomarkers in Oncology • Single gene or protein measurement • ER protein expression • HER2 amplification • KRAS mutation • Index or classifier that summarizes expression levels of multiple genes • OncotypeDx recurrence score

  6. Most Prognostic Factors are not Used • They are developed in unfocused studies not designed to address an intended medical use • The studies are based on convenience samples of heterogeneous patients for whom tissue is available • Although they correlate with a clinical endpoint, they have no demonstrated medical utility • They are not “actionable”

  7. Types of Validation for Prognostic and Predictive Biomarkers • Analytical validation • Accuracy compared to gold-standard assay • Robust and reproducible if there is no gold-standard • Clinical validation • Does the biomarker predict what it’s supposed to predict for independent data • Clinical/Medical utility • Does use of the biomarker result in patient benefit • Is it actionable? • Generally by improving treatment decisions

  8. Clinical Trials Should Be Science Based • Cancers of a primary site are generally composed of a heterogeneous group of diverse molecular diseases • The molecular diseases vary fundamentally with regard to the oncogenic mutations that cause them, and in their responsiveness to specific drugs

  9. Standard Clinical Trial Approaches • Based on assumptions that • Qualitative treatment by subset interactions are unlikely • “Costs” of over-treatment are less than “costs” of under-treatment • Have led to widespread over-treatment of patients with drugs from which few benefit

  10. Predictive Biomarkers • In the past, often studied as exploratory post-hoc subset analyses of RCTs • Numerous subsets examined • No focused, pre-specified hypothesis • No control of type I error

  11. How Can We Develop New Drugs in a Manner More Consistent With Modern Tumor Biology and Obtain Reliable Information About What Regimens Work for What Kind of Tumors?

  12. Prospective Drug Development With a Companion Diagnostic • Develop a completely specified genomic classifier of the patients likely to benefit from a new drug • Establish analytical validity of the classifier • Use the completely specified classifier to design and analyze a new clinical trial to evaluate effectiveness of the new treatment and how it relates to the classifier

  13. Guiding Principle • The data used to develop the classifier must be distinct from the data used to test hypotheses about treatment effect in subsets determined by the classifier • Developmental studies are exploratory • Studies on which treatment effectiveness claims are to be based should be definitive studies that test a treatment hypothesis in a patient population completely pre-specified by the classifier

  14. Targeted Design • Restrict entry to the phase III trial based on the binary predictive classifier

  15. Targeted Design Schema • Using phase II data, develop a predictor of response to the new drug • Patients predicted responsive: randomize between new drug and control • Patients predicted non-responsive: off study

  16. Applicability of Targeted Design • Primarily for settings where the classifier is based on a single gene whose protein product is the target of the drug • e.g. trastuzumab • With a strong biological basis for the classifier, it may be unacceptable to expose classifier-negative patients to the new drug • Analytical validation, biological rationale and phase II data provide the basis for regulatory approval of the test

  17. Evaluating the Efficiency of Targeted Design • Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004; correction and supplement 12:3229, 2006 • Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005 • Reprints and interactive sample size calculations at http://linus.nci.nih.gov

  18. Relative efficiency of targeted design depends on • proportion of patients test positive • effectiveness of new drug (compared to control) for test negative patients • When less than half of patients are test positive and the drug has little or no benefit for test negative patients, the targeted design requires dramatically fewer randomized patients

  19. Trastuzumab (Herceptin) • Metastatic breast cancer • 234 randomized patients per arm • 90% power for a 13.5% improvement in 1-year survival over a 67% baseline at the 2-sided .05 level • If benefit were limited to the 25% assay-positive patients, the overall improvement in survival would have been 3.375% • 4025 patients/arm would have been required
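The dilution arithmetic on slide 19 can be checked with the standard normal-approximation sample-size formula for comparing two proportions. This is a sketch; the slide's figures of 234 and 4025 per arm were presumably computed with a continuity correction or a slightly different formula, so this approximation lands a bit lower, but the roughly 18-fold inflation from diluting the treatment effect is reproduced.

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.90):
    # normal-approximation sample size per arm for a two-proportion comparison
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return ((z_a + z_b) ** 2) * var / (p_control - p_treat) ** 2

n_targeted = n_per_arm(0.67, 0.67 + 0.135)   # full 13.5% effect in assay+ patients
diluted = 0.135 * 0.25                        # effect diluted: only 25% assay-positive
n_untargeted = n_per_arm(0.67, 0.67 + diluted)
```

Restricting enrollment to assay-positive patients keeps the full 13.5% effect and a trial of a few hundred per arm; enrolling all comers shrinks the detectable effect to 3.375% and inflates the requirement into the thousands.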

  20. Web Based Software for Designing RCT of Drug and Predictive Biomarker • http://brb.nci.nih.gov

  21. Biomarker Stratified Design • Develop predictor of response to new Rx • Patients predicted responsive to new Rx: randomize between new Rx and control • Patients predicted non-responsive to new Rx: randomize between new Rx and control

  22. Do not use the diagnostic to restrict eligibility, but to structure a prospective analysis plan • Having a prospective analysis plan is essential • “Stratifying” (balancing) the randomization is useful to ensure that all randomized patients have tissue available but is not a substitute for a prospective analysis plan • The purpose of the study is to evaluate the new treatment overall and for the pre-defined subsets; not to modify or refine the classifier • The purpose is not to demonstrate that repeating the classifier development process on independent data results in the same classifier

  23. R Simon. Using genomics in clinical trial design, Clinical Cancer Research 14:5984-93, 2008

  24. Analysis Plan B (Limited confidence in test) • Compare the new drug to the control overall for all patients, ignoring the classifier • If p_overall ≤ 0.03, claim effectiveness for the eligible population as a whole • Otherwise perform a single subset analysis evaluating the new drug in the classifier + patients • If p_subset ≤ 0.02, claim effectiveness for the classifier + patients
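The alpha split in Analysis Plan B is a fixed-sequence rule whose two significance levels sum to 0.05. A minimal sketch of the decision logic (the function name and return strings are illustrative):

```python
def analysis_plan_b(p_overall, p_subset):
    """Fixed-sequence test: overall comparison at 0.03, then the
    classifier-positive subset at 0.02 (total type I error bounded by 0.05)."""
    if p_overall <= 0.03:
        return "effective overall"
    if p_subset <= 0.02:
        return "effective in classifier-positive patients"
    return "no claim"
```

The subset test is reached only when the overall test fails, so each false-positive path is controlled at its own level and the two levels add up to at most 0.05.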

  25. Analysis Plan C • Test for difference (interaction) between the treatment effect in test positive patients and the treatment effect in test negative patients • If the interaction is significant at level α_int, then compare treatments separately for test positive patients and test negative patients • Otherwise, compare treatments overall

  26. Sample Size Planning for Analysis Plan C • 88 events in test + patients needed to detect 50% reduction in hazard at 5% two-sided significance level with 90% power • If 25% of patients are positive, when there are 88 events in positive patients there will be about 264 events in negative patients • 264 events provides 90% power for detecting 33% reduction in hazard at 5% two-sided significance level
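The event counts on slide 26 can be reproduced, approximately, with Schoenfeld's formula for the number of events required by a log-rank test under 1:1 randomization; this sketch gives about 87 and 262 events, and the small gap from the slide's 88 and 264 is just rounding in the original calculation.

```python
from math import log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.90):
    # Schoenfeld: D = 4 * (z_{1-alpha/2} + z_power)^2 / (log HR)^2 for 1:1 randomization
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha / 2) + z(power)) ** 2 / log(hazard_ratio) ** 2

events_pos = schoenfeld_events(0.50)   # 50% hazard reduction in test+ patients
events_neg = schoenfeld_events(0.67)   # 33% hazard reduction in test- patients
```

With 25% of patients test positive, 88 events among positives implies roughly 88 × 3 = 264 events among negatives, matching the slide's back-of-envelope accrual argument.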

  27. Simulation Results for Analysis Plan C • Using α_int = 0.10, the interaction test has power 93.7% when there is a 50% reduction in hazard in test positive patients and no treatment effect in test negative patients • A significant interaction and significant treatment effect in test positive patients is obtained in 88% of cases under the above conditions • If the treatment reduces hazard by 33% uniformly, the interaction test is negative and the overall test is significant in 87% of cases

  28. Does the RCT Need to Be Significant Overall for the T vs C Treatment Comparison? • No • It is incorrect to require that the overall T vs C comparison be significant to claim that T is better than C for test + patients but not for test – patients • That requirement has been traditionally used to protect against data dredging. It is inappropriate for focused trials of a treatment with a companion test.

  29. Biomarker Adaptive Threshold Design Wenyu Jiang, Boris Freidlin & Richard Simon JNCI 99:1036-43, 2007

  30. Biomarker Adaptive Threshold Design • Randomized trial of T vs C • Previously identified a biomarker score B thought to be predictive of patients likely to benefit from T relative to C • Eligibility not restricted by biomarker • No threshold for biomarker determined • Time-to-event data

  31. Procedure A • Compare T vs C for all patients • If results are significant at level .04 claim broad effectiveness of T • Otherwise proceed as follows

  32. Procedure A • Test T vs C restricted to patients with biomarker B > b • Let S(b) be log likelihood ratio statistic • Repeat for all values of b • Let S* = max{S(b)} • Compute null distribution of S* by permuting treatment labels • If the data value of S* is significant at 0.01 level, then claim effectiveness of T for a patient subset • Compute point and bootstrap interval estimates of the threshold b
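The permutation step of Procedure A can be sketched on toy data. This uses a binary response and the difference in response proportions as a simplified stand-in for the log likelihood ratio statistic S(b); the data, candidate thresholds, and effect sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_stat(y, t, keep):
    # difference in response proportions (T - C) within the subset B > b;
    # a simplified stand-in for the log likelihood ratio statistic S(b)
    y_k, t_k = y[keep], t[keep]
    if t_k.sum() == 0 or t_k.sum() == len(t_k):
        return -np.inf  # one arm empty in this subset: statistic undefined
    return y_k[t_k == 1].mean() - y_k[t_k == 0].mean()

def s_star(y, t, b, thresholds):
    # S* = max over candidate thresholds b of the subset statistic S(b)
    return max(subset_stat(y, t, b > c) for c in thresholds)

def permutation_p(y, t, b, thresholds, n_perm=500):
    # null distribution of S* obtained by permuting treatment labels
    obs = s_star(y, t, b, thresholds)
    null = [s_star(y, rng.permutation(t), b, thresholds) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (1 + n_perm)

# simulated trial: treatment benefit only when biomarker score b > 0.5
n = 200
b = rng.uniform(0, 1, n)
t = rng.integers(0, 2, n)                              # 1 = T, 0 = C
y = rng.binomial(1, 0.25 + 0.40 * ((b > 0.5) & (t == 1)))
p_value = permutation_p(y, t, b, thresholds=[0.0, 0.25, 0.5, 0.75])
```

Because the null distribution is built for the maximized statistic itself, the search over thresholds costs nothing in type I error beyond the 0.01 allocated to this step.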

  33. [Table] Estimated power of the broad eligibility design (n=386 events) vs adaptive design A (n=412 events), each sized for 80% power to detect a 30% hazard reduction

  34. Generalization of Biomarker Adaptive Threshold Design • Have identified K candidate predictive binary classifiers B1 , …, BK thought to be predictive of patients likely to benefit from T relative to C • Eligibility not restricted by candidate classifiers

  35. Compare T vs C for all patients • If results are significant at level .04 claim broad effectiveness of T • Otherwise proceed as follows

  36. Test T vs C restricted to patients positive for Bk for k=1,…,K • Let S(Bk) be log likelihood ratio statistic for treatment effect in patients positive for Bk (k=1,…,K) • Let S* = max{S(Bk)} , k* = argmax{S(Bk)} • Compute null distribution of S* by permuting treatment labels • If the data value of S* is significant at 0.01 level, then claim effectiveness of T for patients positive for Bk*

  37. Adaptive Signature Design Boris Freidlin and Richard Simon Clinical Cancer Research 11:7872-8, 2005

  38. Adaptive Signature Design: End of Trial Analysis • Compare T to C for all patients at significance level 0.04 • If the overall H0 is rejected, then claim effectiveness of T for eligible patients • Otherwise

  39. Otherwise: • Using only the first half of patients accrued during the trial, develop a binary classifier that predicts the subset of patients most likely to benefit from the new treatment T compared to control C • Compare T to C for patients accrued in second stage who are predicted responsive to T based on classifier • Perform test at significance level 0.01 • If H0 is rejected, claim effectiveness of T for subset defined by classifier

  40. [Figure] Treatment effect restricted to a subset: 10% of patients sensitive, 10 sensitivity genes, 10,000 genes, 400 patients

  41. Cross-Validated Adaptive Signature Design (to be submitted for publication) Wenyu Jiang, Boris Freidlin, Richard Simon

  42. Cross-Validated Adaptive Signature Design: End of Trial Analysis • Compare T to C for all patients at significance level α_overall • If the overall H0 is rejected, then claim effectiveness of T for eligible patients • Otherwise

  43. Otherwise • Partition the full data set into K parts • Form a training set by omitting one of the K parts. The omitted part is the test set • Using the training set, develop a predictive classifier of the subset of patients who benefit preferentially from the new treatment T compared to control C using the methods developed for the ASD • Classify the patients in the test set as sensitive (classifier +) or insensitive (classifier -) • Repeat this procedure K times, leaving out a different part each time • After this is completed, all patients in the full dataset are classified as sensitive or insensitive

  44. Compare T to C for sensitive patients by computing a test statistic S, e.g. the difference in response proportions or the log-rank statistic (for survival) • Generate the null distribution of S by permuting the treatment labels and repeating the entire K-fold cross-validation procedure • Perform the test at significance level 0.05 − α_overall • If H0 is rejected, claim effectiveness of T for the subset defined by the classifier • The sensitive subset is determined by developing a classifier using the full dataset
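The out-of-fold labeling described on slides 43–44 can be sketched as below. Here `develop_classifier` is a placeholder for the ASD classifier-development procedure, and the toy development rule used in the demo (a threshold on the first feature) is invented purely to exercise the cross-validation machinery.

```python
import numpy as np

def cross_validated_labels(X, y, t, K, develop_classifier):
    """Each patient is labeled sensitive/insensitive by a classifier
    developed WITHOUT that patient's fold (the CVASD labeling step)."""
    n = len(y)
    folds = np.arange(n) % K                  # simple deterministic K-way partition
    sensitive = np.zeros(n, dtype=bool)
    for k in range(K):
        test = folds == k
        clf = develop_classifier(X[~test], y[~test], t[~test])
        sensitive[test] = clf(X[test])        # classify only the held-out fold
    return sensitive

# toy stand-in for the ASD development step: call a patient 'sensitive'
# when the first feature exceeds the training median (illustrative only)
def toy_developer(X_train, y_train, t_train):
    cut = np.median(X_train[:, 0])
    return lambda X_new: X_new[:, 0] > cut

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, 100)                   # binary response
t = rng.integers(0, 2, 100)                   # 1 = T, 0 = C
labels = cross_validated_labels(X, y, t, K=10, develop_classifier=toy_developer)
```

A T-vs-C test restricted to patients with `labels == True` would then be referred to a null distribution obtained by permuting `t` and rerunning this entire K-fold loop, as slide 44 prescribes, so the classifier development is honestly accounted for in the test.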

  45. 70% response to T in sensitive patients • 25% response to T otherwise • 25% response to C • 20% of patients sensitive

  46. Does It Matter If the Randomization in the RCT Was Not “Stratified” By the Test? • No • Stratification improves balance of stratification factors in overall comparisons • Stratification does not improve comparability of treatment (T) and control (C) groups within test positive patients or within test negative patients • In a fully prospective trial, stratifying the randomization by the test is only useful for ensuring that the test is adequately performed for all patients
