
On the Road to Predictive Oncology: Challenges for Statistics and for Clinical Investigation



Presentation Transcript


  1. On the Road to Predictive Oncology: Challenges for Statistics and for Clinical Investigation • Richard Simon, D.Sc. • Chief, Biometric Research Branch, National Cancer Institute • http://brb.nci.nih.gov

  2. Biometric Research Branch Website: http://brb.nci.nih.gov • PowerPoint presentations • Reprints • BRB-ArrayTools software • Web-based tools for clinical trial design with predictive biomarkers

  3. Prediction Tools for Informing Treatment Selection • Most cancer treatments benefit only a minority of patients to whom they are administered • Being able to predict which patients are likely or unlikely to benefit from a treatment might • Save patients from unnecessary complications and enhance their chance of receiving a more appropriate treatment • Help control medical costs • Improve the success rate of clinical drug development

  4. Types of Biomarkers • Predictive biomarkers • Measured before treatment to identify who is likely or unlikely to benefit from a particular treatment • Prognostic biomarkers • Measured before treatment to indicate long-term outcome for patients untreated or receiving standard treatment

  5. Surrogate Endpoints • Measured longitudinally to track the pace of disease and how it is affected by treatment, for use as an early indication of the clinical effectiveness of treatment

  6. Prognostic & Predictive Biomarkers • Single gene or protein measurement • ER protein expression • HER2 amplification • EGFR mutation • KRAS mutation • Index or classifier that summarizes expression levels of multiple genes • Oncotype DX recurrence score

  7. Validation = Fit for Intended Use • Analytical validation • Accuracy, reproducibility, robustness • Clinical validation • Does the biomarker predict a clinical endpoint or phenotype? • Clinical utility • Does use of the biomarker result in patient benefit by informing treatment decisions? • Is it actionable?

  8. Pusztai et al., The Oncologist 8:252-8, 2003 • 939 articles on “prognostic markers” or “prognostic factors” in breast cancer in the past 20 years • ASCO guidelines recommended routine testing only for ER, PR and HER2 in breast cancer

  9. Most prognostic markers and prognostic models are not used because, although they correlate with a clinical endpoint, they do not facilitate therapeutic decision making • Most prognostic marker studies are based on a “convenience sample” of heterogeneous patients, often not limited by stage or treatment • Such studies are not planned or analyzed with a clear focus on an intended use of the marker • Retrospective studies of prognostic markers should be planned and analyzed with a specific focus on the intended use of the marker • Prospective studies should address medical utility for a specific intended use of the biomarker, taking into account • Treatment options and practice guidelines • Other prognostic factors

  10. Potential Uses of Prognostic Biomarkers • Identify patients who have very good prognosis on standard treatment and do not require more intensive regimens • Identify patients who have poor prognosis on standard chemotherapy who are good candidates for experimental regimens

  11. Predictive Biomarkers

  12. Major Changes in Oncology • Recognition of the heterogeneity of tumors of the same primary site with regard to molecular oncogenesis • Availability of the tools of genomics for characterizing tumors • Focus on molecularly targeted drugs • Have resulted in • Increased interest in prediction problems • Need for new clinical trial designs • Increased pace of innovation

  13. p > n Prediction Problems • Prediction problems in which the number of variables (p) is much greater than the number of cases (n) • Many of the methods of statistics were developed for inference problems, not prediction • Standard model building and evaluation strategies are not effective for p > n prediction problems

  14. Model Evaluation for p > n Prediction Problems • Goodness of fit is not a proper measure of predictive accuracy • It is essential to separate training data from testing data in p > n prediction problems

  15. Separating Training Data from Testing Data • Split-sample method • Re-sampling methods • Leave-one-out cross-validation • K-fold cross-validation • Replicated split-sample • Bootstrap re-sampling
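As a concrete illustration of these schemes, here is a minimal Python sketch using numpy and scikit-learn (my choice of tools; the talk names no software, and the data sizes are purely illustrative):

import numpy as np
from sklearn.model_selection import (KFold, LeaveOneOut,
                                     StratifiedShuffleSplit, train_test_split)
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))   # 40 cases, 100 features (toy sizes)
y = np.repeat([0, 1], 20)        # two classes of 20 cases

# Split-sample: one fixed training/testing partition
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

# Leave-one-out cross-validation: n folds, each holding out one case
loo = list(LeaveOneOut().split(X))

# K-fold cross-validation (here K = 5)
kfold = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))

# Replicated split-sample: many random 50/50 splits
splits = list(StratifiedShuffleSplit(n_splits=25, test_size=0.5,
                                     random_state=0).split(X, y))

# Bootstrap re-sampling: train on a bootstrap sample, test on the rest
boot = resample(np.arange(len(y)), random_state=0)   # sampled with replacement
out_of_bag = np.setdiff1d(np.arange(len(y)), boot)

print(len(loo), len(kfold), len(splits), len(out_of_bag))

Each scheme yields one or more training/testing partitions; the model is always built on the training part and evaluated on the held-out part.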

  16. “Prediction is very difficult; especially about the future.” (often attributed to Niels Bohr)

  17. Prediction on Simulated Null Data (Simon et al., J Natl Cancer Inst 95:14, 2003) • Generation of gene expression profiles • 20 specimens (P_i is the expression profile for specimen i) • Log-ratio measurements on 6000 genes • P_i ~ MVN(0, I_6000) • Can we distinguish between the first 10 specimens (class 1) and the last 10 (class 2)? • Prediction method • Compound covariate predictor built from the log-ratios of the 10 most differentially expressed genes
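The following is a minimal Python reconstruction of this simulation (numpy/scipy assumed; the compound covariate rule below is a simplified stand-in for the one in the paper). On pure noise, selecting genes on the full dataset and then evaluating on that same dataset looks deceptively perfect:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 20, 6000
X = rng.normal(size=(n, p))     # P_i ~ MVN(0, I_6000): pure noise
y = np.repeat([0, 1], 10)       # first 10 = class 1, last 10 = class 2

# Feature selection on the FULL data: 10 most differentially expressed genes
t, _ = stats.ttest_ind(X[y == 0], X[y == 1])
top = np.argsort(-np.abs(t))[:10]

# Compound covariate: t-statistic-weighted sum of the selected genes
cc = X[:, top] @ t[top]
m0, m1 = cc[y == 0].mean(), cc[y == 1].mean()
pred = (np.abs(cc - m1) < np.abs(cc - m0)).astype(int)

# Evaluating on the same data used for selection looks perfect -- on noise
print("resubstitution error:", np.mean(pred != y))   # typically 0.0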

  18. Cross Validation • With proper cross-validation, the model must be developed from scratch for each leave-one-out training set. This means that feature selection must be repeated for each leave-one-out training set. • The cross-validated estimate of misclassification error is an estimate of the prediction error for the model developed by applying the specified algorithm to the full dataset
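Continuing the null-data example from slide 17, here is a sketch of proper leave-one-out cross-validation, with feature selection repeated inside every leave-one-out training set (same assumed tools and simplified compound covariate rule):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 20, 6000
X = rng.normal(size=(n, p))     # same null-data setup as slide 17
y = np.repeat([0, 1], 10)

def cc_predict(X_tr, y_tr, x_new, k=10):
    # Feature selection uses ONLY the training set of this fold
    t, _ = stats.ttest_ind(X_tr[y_tr == 0], X_tr[y_tr == 1])
    top = np.argsort(-np.abs(t))[:k]
    cc_tr = X_tr[:, top] @ t[top]
    m0, m1 = cc_tr[y_tr == 0].mean(), cc_tr[y_tr == 1].mean()
    cc_new = x_new[top] @ t[top]
    return int(abs(cc_new - m1) < abs(cc_new - m0))

errors = 0
for i in range(n):              # proper LOOCV: rebuild the model n times
    mask = np.arange(n) != i
    errors += cc_predict(X[mask], y[mask], X[i]) != y[i]
print("cross-validated error:", errors / n)   # near 0.5, as it should be

The contrast with the resubstitution error on slide 17 is the whole point: the cross-validated error hovers around chance, which is the truth for null data.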

  19. Permutation Distribution of the Cross-validated Misclassification Rate of a Multivariate Classifier (Radmacher, McShane & Simon, J Comput Biol 9:505, 2002) • Randomly permute the class labels and repeat the entire cross-validation • Re-do for all (or 1000) random permutations of the class labels • The permutation p value is the fraction of random permutations that give as few cross-validated misclassifications as the real data
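A sketch of this permutation test (assumed tools as before; fewer genes and permutations than the paper, purely for speed — the structure is what matters: the entire cross-validation is re-run for every permutation of the labels):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 1000))   # fewer genes than the paper, for speed
y = np.repeat([0, 1], 10)

def loocv_error(X, y, k=10):
    n, err = len(y), 0
    for i in range(n):
        m = np.arange(n) != i
        t, _ = stats.ttest_ind(X[m][y[m] == 0], X[m][y[m] == 1])
        top = np.argsort(-np.abs(t))[:k]
        cc = X[m][:, top] @ t[top]
        m0, m1 = cc[y[m] == 0].mean(), cc[y[m] == 1].mean()
        c = X[i, top] @ t[top]
        err += int(abs(c - m1) < abs(c - m0)) != y[i]
    return err / n

observed = loocv_error(X, y)

# Re-run the ENTIRE cross-validation for each random permutation of labels
B = 200    # the paper uses all, or 1000, permutations; 200 here for speed
perm_errors = [loocv_error(X, rng.permutation(y)) for _ in range(B)]
p_value = np.mean([e <= observed for e in perm_errors])
print(f"observed error {observed:.2f}, permutation p = {p_value:.3f}")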

  20. Model Evaluation for p > n Prediction Problems • Odds ratios and hazard ratios are not proper measures of prediction accuracy • The statistical significance of regression coefficients is not a proper measure of predictive accuracy

  21. Evaluation of Prediction Accuracy • For binary outcome • Cross-validated prediction error • Cross-validated sensitivity & specificity • Cross-validated ROC curve • For survival outcome • Cross-validated Kaplan-Meier curves for predicted high and low risk groups • Cross-validated K-M curves within levels of standard prognostic staging system • Cross-validated time-dependent ROC curves
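For the binary-outcome measures, a scikit-learn sketch (an assumed toolchain; the logistic model and simulated data are placeholders). cross_val_predict ensures every predicted probability comes from a fold in which that case was held out, so the error, sensitivity, specificity, and ROC curve are all cross-validated:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=100, n_features=30, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Each probability comes from a fold in which the case was held out
prob = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, method="predict_proba")[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("cross-validated error:", (fp + fn) / len(y))
print("cross-validated sensitivity:", tp / (tp + fn))
print("cross-validated specificity:", tn / (tn + fp))

fpr, tpr, _ = roc_curve(y, prob)              # cross-validated ROC curve
print("cross-validated AUC:", roc_auc_score(y, prob))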

  22. LOOCV Error Estimates for Linear Classifiers

  23. Cross-validated Kaplan-Meier Curves for Predicted High and Low Risk Groups

  24. Cross-Validated Time Dependent ROC Curve

  25. Is Accurate Prediction Possible For p > n? • Yes, in many cases, but standard statistical methods for model building and evaluation are often not effective • Standard methods may over-fit the data and lead to poor predictions • With p > n, unless the data are inconsistent, a linear model can always be found that classifies the training data perfectly (see the sketch below)
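The last point is easy to demonstrate (a minimal numpy sketch): with more features than cases and generic data, a minimum-norm linear fit interpolates arbitrary labels, so perfect training-set classification carries no evidence of predictive value:

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 100                       # more features than cases
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)       # ARBITRARY 0/1 labels

# Minimum-norm least-squares fit: with p > n and generic X it interpolates
w, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)
pred = (X @ w > 0).astype(int)
print("training misclassifications:", int(np.sum(pred != y)))   # 0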

  26. Is Accurate Prediction Possible For p>>n? • Some problems are easy; real problems are often difficult • Simple methods like DLDA, nearest neighbor classifiers and shrunken centroid classifiers are at least as effective as more complex methods for many datasets • Because of correlated variables, there are often many very distinct models that predict about equally well
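A sketch of the simple classifiers mentioned above, via scikit-learn (an assumed toolchain): NearestCentroid with a shrink_threshold gives a nearest shrunken centroid classifier, and GaussianNB is a close diagonal-covariance relative of DLDA (it estimates class-specific rather than pooled variances):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)

models = {
    "nearest neighbor": KNeighborsClassifier(n_neighbors=3),
    "shrunken centroids": NearestCentroid(shrink_threshold=0.5),
    "diagonal Gaussian (DLDA relative)": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: cross-validated accuracy {acc:.2f}")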

  27. p > n prediction problems are not multiple testing problems • The objective of prediction problems is accurate prediction, not controlling the false discovery rate • Parameters that control feature selection in prediction problems are tuning parameters, to be optimized for prediction accuracy • Optimization by cross-validation, nested within the cross-validation used for evaluating prediction accuracy • Biological understanding is often a career objective; accurate prediction can sometimes be achieved in less time
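A sketch of the nested cross-validation described above (scikit-learn assumed): the inner loop tunes the number of selected features for prediction accuracy, and the outer loop estimates the accuracy of the entire tuned procedure:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=80, n_features=1000, n_informative=15,
                           random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("clf", LogisticRegression(max_iter=1000))])

# Inner CV tunes k (a tuning parameter, not a multiple-testing threshold)
inner = GridSearchCV(pipe, {"select__k": [5, 10, 50, 100]}, cv=5)

# Outer CV estimates the accuracy of the whole tuned procedure
scores = cross_val_score(inner, X, y, cv=5)
print("nested cross-validated accuracy:", scores.mean())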

  28. Model Instability Does Not Mean Prediction Inaccuracy • Validation of a predictive model means that the model predicts accurately for independent data • Validation does not mean that the model is stable or that using the same algorithm on independent data will give a similar model • With p>n and many genes with correlated expression, the classifier will not be stable.

  29. Traditional Approach to Oncology Clinical Drug Development • Phase III trials with broad eligibility to test the null hypothesis that a regimen containing the new drug is on average not better than the control treatment for all patients who might be treated by the new regimen • Perform exploratory subset analyses but regard results as hypotheses to be tested on independent data

  30. Traditional Clinical Trial Approaches • Have protected us from false claims resulting from post-hoc data dredging not based on pre-defined biologically based hypotheses • Have led to widespread over-treatment of patients with drugs from which many don’t benefit • Are less suitable for evaluation of new molecularly targeted drugs which are expected to benefit only the patients whose tumors are driven by de-regulation of the target of the drug

  31. Molecular Heterogeneity of Human Cancer • Cancers of a primary site in many cases appear to represent a heterogeneous group of diverse molecular diseases which vary fundamentally with regard to • their oncogenesis and pathogenesis • their responsiveness to specific drugs • The established molecular heterogeneity of human cancer requires the use of new approaches to the development and evaluation of therapeutics

  32. How Can We Develop New Drugs in a Manner More Consistent With Modern Tumor Biology and Obtain Reliable Information About What Regimens Work for What Kinds of Patients?

  33. Develop Predictor of Response to New Drug • Using phase II data, develop a predictor of response to the new drug • Patients predicted responsive: randomize between new drug and control • Patients predicted non-responsive: off study

  34. Evaluating the Efficiency of Enrichment and Stratification Clinical Trial Designs With Predictive Biomarkers • Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004; correction and supplement 12:3229, 2006 • Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005

  35. Model for Two Treatments With Binary Response • New treatment T • Control treatment C • 1 − λ = proportion marker + • p_c = control response probability • Response probability for T: • Marker +: p_c + δ1 • Marker −: p_c + δ0

  36. Randomized Ratio (normal approximation) • RandRat = n_untargeted / n_targeted • δ1 = treatment effect in marker + patients • δ0 = treatment effect in marker − patients • λ = proportion of marker − patients • If δ0 = 0, RandRat = 1/(1 − λ)² • If δ0 = δ1/2, RandRat = 1/(1 − λ/2)²
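These expressions follow because the required sample size scales as 1/(treatment effect)² under the normal approximation, and the untargeted trial dilutes the effect to (1 − λ)δ1 + λδ0. A small Python helper (my own, not from the papers) makes the formulas concrete:

def rand_ratio(lam, delta1, delta0):
    """n_untargeted / n_targeted (normal approximation, equal variances).

    lam    -- proportion of marker-negative patients
    delta1 -- treatment effect in marker-positive patients
    delta0 -- treatment effect in marker-negative patients
    """
    diluted = (1 - lam) * delta1 + lam * delta0   # average effect, all comers
    return (delta1 / diluted) ** 2

print(rand_ratio(0.5, 0.15, 0.0))     # delta0 = 0:        1/(1-0.5)^2  = 4.0
print(rand_ratio(0.5, 0.15, 0.075))   # delta0 = delta1/2: 1/(1-0.25)^2 ~ 1.78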

  37. Randomized Ratio, n_untargeted / n_targeted (figure)

  38. Relative efficiency of the targeted design depends on • the proportion of patients who test positive • the effectiveness of the new drug (compared to control) for test-negative patients • When fewer than half of patients test positive and the drug has little or no benefit for test-negative patients, the targeted design requires dramatically fewer randomized patients

  39. Trastuzumab (Herceptin) • Metastatic breast cancer • 234 randomized patients per arm • 90% power for a 13.5% improvement in 1-year survival over a 67% baseline at the 2-sided .05 level • If the benefit were limited to the 25% of patients who are assay +, the overall improvement in survival would have been 3.375% • 4025 patients/arm would have been required
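A normal-approximation check of these numbers (a sketch; the trial's actual calculation may have used a different method, so the figures agree only approximately):

from scipy.stats import norm

def n_per_arm(p_control, p_new, alpha=0.05, power=0.90):
    # Standard two-proportion sample size (unpooled variance under H1)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p_control + p_new) / 2
    term = (za * (2 * pbar * (1 - pbar)) ** 0.5 +
            zb * (p_control * (1 - p_control) + p_new * (1 - p_new)) ** 0.5)
    return term ** 2 / (p_new - p_control) ** 2

print(n_per_arm(0.67, 0.67 + 0.135))     # ~221/arm (slide: 234)
# Effect diluted to the 25% assay-positive subset: 0.25 * 13.5% = 3.375%
print(n_per_arm(0.67, 0.67 + 0.03375))   # ~3966/arm (slide: 4025)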

  40. Developmental Strategy (II) • Develop predictor of response to new Rx • Patients predicted responsive to new Rx: randomize new Rx vs control • Patients predicted non-responsive to new Rx: randomize new Rx vs control

  41. Developmental Strategy (II) • Do not use the diagnostic to restrict eligibility, but to structure a prospective analysis plan • Having a prospective analysis plan is essential • “Stratifying” (balancing) the randomization is useful to ensure that all randomized patients have tissue available but is not a substitute for a prospective analysis plan • The purpose of the study is to evaluate the new treatment overall and for the pre-defined subsets; not to modify or refine the classifier

  42. R Simon. Using genomics in clinical trial design. Clinical Cancer Research 14:5984-93, 2008 • R Simon. Designs and adaptive analysis plans for pivotal clinical trials of therapeutics and companion diagnostics. Expert Opinion on Medical Diagnostics 2:721-29, 2008

  43. Analysis Plan B (Fall-back Plan) • Compare the new drug to the control overall for all patients, ignoring the classifier • If p_overall ≤ 0.03, claim effectiveness for the eligible population as a whole • Otherwise, perform a single subset analysis evaluating the new drug in the classifier + patients • If p_subset ≤ 0.02, claim effectiveness for the classifier + patients • (See the decision-logic sketch following slide 44)

  44. Analysis Plan C (Interaction Plan) • Test for a difference (interaction) between the treatment effect in test + patients and the treatment effect in test − patients • If the interaction is significant at level α_int, compare treatments separately for test + and test − patients • Otherwise, compare treatments overall
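The two analysis plans above, written as executable decision logic (a sketch; the plan B thresholds are those on slide 43, which split the overall 0.05 alpha into 0.03 and 0.02, while the plan C value α_int = 0.10 is an assumed placeholder, since the slide leaves it unspecified):

def fallback_plan(p_overall, p_subset):
    # Plan B: 0.05 alpha split as 0.03 overall + 0.02 subset (from slide 43)
    if p_overall <= 0.03:
        return "effective for the eligible population as a whole"
    if p_subset <= 0.02:
        return "effective for the classifier-positive patients"
    return "no claim of effectiveness"

def interaction_plan(p_interaction, alpha_int=0.10):
    # Plan C: alpha_int must be pre-specified; 0.10 is an assumed value
    if p_interaction <= alpha_int:
        return "compare treatments separately in test + and test - patients"
    return "compare treatments overall"

print(fallback_plan(0.041, 0.004))   # falls back to the positive subset
print(interaction_plan(0.21))        # interaction not significant: overall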

  45. Sample Size Planning for Analysis Plan C • 88 events in test + patients are needed to detect a 50% reduction in hazard at the 5% two-sided significance level with 90% power • If 25% of patients are test +, then when there are 88 events in positive patients there will be about 264 events in negative patients • 264 events provide 90% power for detecting a 33% reduction in hazard at the 5% two-sided significance level
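These event counts are approximately reproduced by Schoenfeld's formula for the number of events in a proportional-hazards comparison (a sketch; a 50% hazard reduction is a hazard ratio of 0.5, a 33% reduction roughly 0.67):

from math import ceil, log
from scipy.stats import norm

def events_needed(hazard_ratio, alpha=0.05, power=0.90):
    # Schoenfeld approximation: D = 4 (z_{alpha/2} + z_beta)^2 / (log HR)^2
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 4 * (za + zb) ** 2 / log(hazard_ratio) ** 2

print(ceil(events_needed(0.50)))   # 88 events for a 50% hazard reduction
print(ceil(events_needed(0.67)))   # ~263 events for a 33% reduction (slide: 264)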
