
WHY BAYES? INNOVATIONS IN CLINICAL TRIAL DESIGN & ANALYSIS


Presentation Transcript


  1. WHY BAYES? INNOVATIONS IN CLINICAL TRIAL DESIGN & ANALYSIS Donald A. Berry dberry@mdanderson.org

  2. Conclusion These data add to the growing evidence that supports the regular use of aspirin and other NSAIDs … as effective chemopreventive agents for breast cancer.

  3. Results Ever use of aspirin or other NSAIDs … was reported in 301 cases (20.9%) and 345 controls (24.3%) (odds ratio 0.80, 95% CI 0.66-0.97).
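The odds ratio in this slide can be checked against the raw counts. A minimal sketch in Python; note the crude (unadjusted) OR computed here will not exactly match the paper's 0.80, which is presumably covariate-adjusted:

```python
# Crude odds ratio from the reported case-control counts.
# Cases: 301 of 1442 ever used aspirin/NSAIDs; controls: 345 of 1420.
import math

a = 301          # exposed cases
b = 1442 - 301   # unexposed cases
c = 345          # exposed controls
d = 1420 - 345   # unexposed controls

crude_or = (a * d) / (b * c)
# Woolf (log) 95% confidence interval for the crude OR
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(crude_or) - 1.96 * se_log)
hi = math.exp(math.log(crude_or) + 1.96 * se_log)
print(f"crude OR = {crude_or:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The crude OR comes out near 0.82 with a CI close to the paper's interval, consistent with the reported 0.80 being a mildly adjusted estimate.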

  4. Bayesian analysis? • Naïve Bayesian analysis of “Results” is wrong • Gives Bayesians a bad name • Any naïve frequentist analysis is also wrong

  5. What is Bayesian analysis? Bayes' theorem: π(θ|X) ∝ π(θ)·f(X|θ) • Assess prior π (subjective, include available evidence) • Construct model f for data

  6. Implication: The Likelihood Principle Where X is observed data, the likelihood function L_X(θ) = f(X|θ) contains all the information in an experiment relevant for inferences about θ

  7. Short version of LP: Take data at face value • Data: • Among cases: 301/1442 • Among controls: 345/1420 • But “Data” is deceptive • These are not the full data

  8. The data • Methods: • “Population-based case-control study of breast cancer” • “Study design published previously” • Aspirin/NSAIDs? (2.25-hr questionnaire) • Includes superficial data: • Among cases: 301/1442 • Among controls: 345/1420 • Other studies (& the fact it was published!!)

  9. Silent multiplicities • Are the most difficult problems in statistical inference • Can render what we do irrelevant—and wrong!

  10. Which city is furthest north? • Portland, OR • Portland, ME • Milan, Italy • Vladivostok, Russia

  11. Beating a dead horse . . . • Piattelli-Palmarini (inevitable illusions) asks: “I have just tossed a coin 7 times.” Which did I get? 1: THHTHTT 2: TTTTTTT • Most people say 1. But “the probabilities are totally even” • Most people are right; he’s totally wrong! • Data: He presented us with 1 & 2!

  12. THHTHTT or TTTTTTT? • LR = Bayes factor of 1 over 2 = P(Wrote 1&2 | Got 1) / P(Wrote 1&2 | Got 2) • LR > 1 ⇒ P(Got 1 | Wrote 1&2) > 1/2 • Eg: LR = (1/2)/(1/42) = 21 ⇒ P(Got 1 | Wrote 1&2) = 21/22 ≈ 95% • [Probs “totally even” if a coin was used to generate the alternative sequence]
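The arithmetic on this slide can be verified directly; this sketch just converts the stated likelihood ratio into a posterior probability under 50:50 prior odds:

```python
# Posterior probability that the mixed sequence (1) was the one obtained,
# given the slide's likelihood ratio and equal prior odds on the two
# sequences.  The slide's worked example uses LR = (1/2)/(1/42) = 21.
lr = (1/2) / (1/42)        # likelihood ratio in favour of sequence 1
posterior = lr / (lr + 1)  # posterior odds lr:1  ->  probability lr/(lr+1)
print(f"LR = {lr:.0f}, P(Got 1 | Wrote 1&2) = {posterior:.3f}")
```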

  13. Marker/dose interaction [figure: marker-negative vs marker-positive panels]

  14. Proportional hazards model

      Variable     Comp      RelRisk   P
      #PosNodes    10/1      2.7       <0.001
      MenoStatus   pre/post  1.5       0.05
      TumorSize    T2/T1     2.6       <0.001
      Dose         ––        ––        NS
      Marker       50/0      4.0       <0.001
      MarkerxDose  ––        ––        <0.001

  This analysis is wrong!

  15. Data at face value? • How identified? • Why am I showing you these results? • What am I not showing you? • What related studies show?

  16. Solutions? • Short answer: I don’t know! • A solution: • Supervise experiment yourself • Become an expert on substance • Partial solution: • Supervise supervisors • Learn as much substance as you can • Danger: You risk projecting yourself as uniquely scientific

  17. A consequence • Statisticians come to believe NOTHING!!

  18. OUTLINE • Silent multiplicities • Bayes and predictive probabilities • Bayes as a frequentist tool • Adaptive designs: • Adaptive randomization • Investigating many phase II drugs • Seamless Phase II/III trial • Adaptive dose-response • Extraim analysis • Trial design as decision analysis

  19. Bayes in pharma and FDA …

  20. http://www.cfsan.fda.gov/~frf/bayesdl.html http://www.prous.com/bayesian2004/

  21. BAYES AND PREDICTIVE PROBABILITY • Critical component of experimental design • In monitoring trials

  22. Example calculation • Data: 13 A's and 4 B's • Likelihood ∝ p^13 (1–p)^4

  23. Posterior density of p for uniform prior: Beta(14,5)

  24. Laplace’s rule of succession P(A wins next pair | data) = E[P(A wins next pair | data, p)] = E(p | data) = mean of Beta(14, 5) = 14/19

  25. Updating w/next observation
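The conjugate updating behind these slides can be sketched in a few lines; the uniform Beta(1,1) prior is the one stated above:

```python
# With a uniform Beta(1,1) prior, 13 A's and 4 B's give p | data ~ Beta(14, 5).
# The posterior mean 14/19 is Laplace's rule-of-succession probability that
# A wins the next pair, and the next observation just increments one
# Beta parameter (conjugate updating).
alpha, beta = 14, 5                  # posterior Beta(14, 5) parameters

p_next_a = alpha / (alpha + beta)    # P(A wins next pair | data) = 14/19
print(f"P(A wins next pair | data) = {p_next_a:.4f}")

after_a = (alpha + 1, beta)          # if A wins next -> Beta(15, 5)
after_b = (alpha, beta + 1)          # if B wins next -> Beta(14, 6)
print("updated posterior if A wins:", after_a, "; if B wins:", after_b)
```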

  26. Suppose 17 more observations P(A wins x of 17 | data) = E[P(A wins x | data, p)] = (17 choose x) E[p^x (1–p)^(17–x) | data]

  27. Best-fitting binomial vs. predictive probabilities [figure: binomial with p = 14/19 vs predictive with p ~ Beta(14, 5)]
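The predictive probabilities in these two slides have a closed form: averaging the binomial over the Beta(14,5) posterior gives a beta-binomial distribution. A stdlib-only sketch comparing it with the plug-in binomial at p = 14/19:

```python
# Predictive distribution for the number x of A-wins in the next 17
# observations under the Beta(14,5) posterior (beta-binomial), contrasted
# with the plain binomial at the point estimate p = 14/19.  The predictive
# distribution is more spread out because it carries uncertainty about p.
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(x, n, a, b):
    return math.comb(n, x) * math.exp(log_beta(a + x, b + n - x) - log_beta(a, b))

def binomial_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, a, b = 17, 14, 5
predictive = [beta_binomial_pmf(x, n, a, b) for x in range(n + 1)]
plug_in = [binomial_pmf(x, n, a / (a + b)) for x in range(n + 1)]

for x in range(n + 1):
    print(f"x={x:2d}  predictive={predictive[x]:.4f}  binomial={plug_in[x]:.4f}")
```

Both columns sum to 1, but the predictive column puts noticeably more mass in the tails, which is the point of the slide's comparison.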

  28. Comparison of predictive with posterior

  29. Example: Baxter’s DCLHb & predictive probabilities • Diaspirin Cross-Linked Hemoglobin • Blood substitute; emergency trauma • Randomized controlled trial (1996+) • Treatment: DCLHb • Control: saline • N = 850 (= 2x425) • Endpoint: death

  30. Waiver of informed consent • Data Monitoring Committee • First DMC meeting:

               DCLHb     Saline
      Dead     21 (43%)  8 (20%)
      Alive    28        33
      Total    49        41

  • P-value? No formal interim analysis
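Since the slide pointedly asks about a p-value, here is an informal two-proportion z-test on the DMC table; the choice of test is an assumption, as no formal interim analysis was prespecified:

```python
# Two-proportion z-test on the first DMC table: 21/49 deaths (DCLHb)
# vs 8/41 (saline), using only the standard library.
import math

d1, n1 = 21, 49   # DCLHb deaths / total
d2, n2 = 8, 41    # saline deaths / total

p1, p2 = d1 / n1, d2 / n2
p_pool = (d1 + d2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
print(f"z = {z:.2f}, two-sided p ≈ {p_two_sided:.3f}")
```

The z-statistic is about 2.4, i.e. nominally "significant" harm, which is why the DMC's reaction (pause, check covariates, stop) follows on the next slide.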

  31. Predictive probability of future results (after n = 850) • Probability of significant survival benefit for DCLHb after 850 patients: 0.00045 • DMC paused trial: Covariates? • No imbalance • DMC stopped trial
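The predictive probability on this slide can be approximated by Monte Carlo. The sketch below assumes uniform Beta priors and a final two-proportion z-test at n = 850; these are illustrative choices, not the trial's actual analysis model:

```python
# Monte Carlo sketch of the predictive probability that DCLHb shows a
# significant survival benefit at n = 850 (425 per arm), given the interim
# deaths 21/49 (DCLHb) vs 8/41 (saline).  Beta(1,1) priors and the final
# z-test are assumptions for illustration.
import math, random

random.seed(1)

def predictive_probability(d1, n1, d2, n2, n_per_arm=425, reps=5000):
    z_crit = 1.96          # two-sided 5% significance at the final analysis
    hits = 0
    for _ in range(reps):
        # draw each arm's death rate from its Beta posterior
        p1 = random.betavariate(1 + d1, 1 + n1 - d1)   # DCLHb
        p2 = random.betavariate(1 + d2, 1 + n2 - d2)   # saline
        # simulate the remaining patients and pool with the interim data
        f1 = d1 + sum(random.random() < p1 for _ in range(n_per_arm - n1))
        f2 = d2 + sum(random.random() < p2 for _ in range(n_per_arm - n2))
        q1, q2 = f1 / n_per_arm, f2 / n_per_arm
        qp = (f1 + f2) / (2 * n_per_arm)
        se = math.sqrt(qp * (1 - qp) * 2 / n_per_arm)
        # "benefit" = significantly FEWER deaths on DCLHb
        if se > 0 and (q2 - q1) / se > z_crit:
            hits += 1
    return hits / reps

pp = predictive_probability(21, 49, 8, 41)
print(f"predictive probability of a significant DCLHb benefit ≈ {pp:.5f}")
```

With the interim data running so strongly against DCLHb, the simulated probability is essentially zero, in the same spirit as the 0.00045 quoted on the slide.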

  32. OUTLINE • Silent multiplicities • Bayes and predictive probabilities • Bayes as a frequentist tool • Adaptive designs: • Adaptive randomization • Investigating many phase II drugs • Seamless Phase II/III trial • Adaptive dose-response • Extraim analysis • Trial design as decision analysis

  33. BAYES AS A FREQUENTIST TOOL • Design a Bayesian trial • Check operating characteristics • Adjust design to get α = 0.05 • ⇒ frequentist design • That’s fine! • We have 50+ such trials at MDACC
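The calibration loop described here (design a Bayesian trial, check operating characteristics, adjust to α = 0.05) can be sketched on a toy single-arm trial; the stopping rule, looks, and sample sizes below are all assumptions for illustration:

```python
# Toy single-arm trial that stops for efficacy when P(p > 0.20 | data)
# exceeds a cutoff, with interim looks at 20, 30 and 40 patients.
# Simulating under the null gives the frequentist type I error, and the
# cutoff can be adjusted until that error is near 0.05.
import math, random

random.seed(2)

def post_prob_gt(p0, successes, n):
    """P(p > p0 | data) under a uniform prior, i.e. P(Beta(1+s, 1+n-s) > p0),
    computed exactly via P(Beta(a, b) > x) = P(Binomial(a+b-1, x) <= a-1)."""
    a, m = 1 + successes, n + 1
    return sum(math.comb(m, k) * p0**k * (1 - p0)**(m - k) for k in range(a))

def type_one_error(cutoff, p0=0.20, looks=(20, 30, 40), reps=5000):
    rejections = 0
    for _ in range(reps):
        outcomes = [random.random() < p0 for _ in range(max(looks))]  # null true
        if any(post_prob_gt(p0, sum(outcomes[:n]), n) > cutoff for n in looks):
            rejections += 1
    return rejections / reps

errors = {c: type_one_error(c) for c in (0.95, 0.975, 0.99)}
for c, e in errors.items():
    print(f"cutoff {c}: type I error ≈ {e:.3f}")
```

Raising the posterior-probability cutoff lowers the type I error; iterating this simulation is exactly how a Bayesian rule gets tuned to a frequentist α.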

  34. OUTLINE • Silent multiplicities • Bayes and predictive probabilities • Bayes as a frequentist tool • Adaptive designs: • Adaptive randomization • Investigating many phase II drugs • Seamless Phase II/III trial • Adaptive dose-response • Extraim analysis • Trial design as decision analysis

  35. ADAPTIVE DESIGN • Look at accumulating data … without blushing • Update probabilities • Find predictive probabilities • Modify future course of trial • Give details in protocol • Simulate to find operating characteristics

  36. OUTLINE • Silent multiplicities • Bayes and predictive probabilities • Bayes as a frequentist tool • Adaptive designs: • Adaptive randomization • Investigating many phase II drugs • Seamless Phase II/III trial • Adaptive dose-response • Extraim analysis • Trial design as decision analysis

  37. Giles, et al JCO (2003) • Troxacitabine (T) in acute myeloid leukemia (AML) when combined with cytarabine (A) or idarubicin (I) • Adaptive randomization to: IA vs TA vs TI • Max n = 75 • End point: CR (time to CR < 50 days)

  38. Randomization • Adaptive • Assign 1/3 to IA (standard) throughout (unless only 2 arms) • Adaptive to TA and TI based on current results • Final results ⇒

  39. [diagram: Drop TI → Compare → n = 75]

  40. Summary of results CR rates: • IA: 10/18 = 56% • TA: 3/11 = 27% • TI: 0/5 = 0% Criticisms . . .

  41. OUTLINE • Silent multiplicities • Bayes and predictive probabilities • Bayes as a frequentist tool • Adaptive designs: • Adaptive randomization • Investigating many phase II drugs • Seamless Phase II/III trial • Adaptive dose-response • Extraim analysis • Trial design as decision analysis

  42. Example: Adaptive allocation of therapies • Design for phase II: Many drugs • Advanced breast cancer (MDA); endpoint is tumor response • Goals: • Treat effectively • Learn quickly

  43. Comparison: Standard designs • One drug (or dose) at a time; no drug/dose comparisons • Typical comparison by null hypothesis: response rate = 20% • Progress is slow!

  44. Standard designs • One stage, 14 patients: • If 0 responses then stop • If ≥ 1 response then phase III • Two stages, first stage 20 patients: • If ≤ 4 or ≥ 9 responses then stop • Else second set of 20 patients
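The operating characteristics of these two standard designs follow directly from binomial tail probabilities; a sketch evaluating them at a few true response rates:

```python
# Stopping probabilities of the two standard phase II designs on the
# slide, computed exactly from binomial tail probabilities.
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

for p in (0.05, 0.20, 0.40):
    # one-stage design: stop iff 0 responses among 14 patients
    p_stop_1 = binom_pmf(0, 14, p)
    # two-stage design, first stage: stop iff <=4 or >=9 responses among 20
    p_stop_2 = binom_cdf(4, 20, p) + (1 - binom_cdf(8, 20, p))
    print(f"p={p:.2f}: P(stop after 14) = {p_stop_1:.3f}, "
          f"P(stop after first 20) = {p_stop_2:.3f}")
```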

  45. An adaptive allocation • When assigning next patient, find r = P(rate ≥ 20% | data) for each drug [Or, r = P(drug is best | data)] • Assign drugs in proportion to r • Add drugs as they become available • Drop drugs that have small r • Drugs with large r ⇒ phase III
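The allocation quantity r = P(rate ≥ 20% | data) has a closed form under a Beta posterior. A sketch with made-up interim counts; the uniform prior and the drug names are assumptions:

```python
# Compute r = P(response rate >= 0.20 | data) per drug from Beta posteriors
# (uniform Beta(1,1) prior assumed), then allocate in proportion to r.
# The interim counts below are hypothetical illustration data.
import math

def post_prob_ge(p0, successes, n):
    """P(p >= p0 | s successes in n) under a Beta(1+s, 1+n-s) posterior,
    via the identity P(Beta(a, b) > x) = P(Binomial(a+b-1, x) <= a-1)."""
    a, m = 1 + successes, n + 1
    return sum(math.comb(m, k) * p0**k * (1 - p0)**(m - k) for k in range(a))

# hypothetical interim data: (responses, patients) per drug
drugs = {"drug A": (1, 12), "drug B": (4, 10), "drug C": (0, 8)}

r = {name: post_prob_ge(0.20, s, n) for name, (s, n) in drugs.items()}
total = sum(r.values())
for name in drugs:
    print(f"{name}: r = {r[name]:.3f}, allocation prob = {r[name]/total:.3f}")
```

Drugs with promising interim data (drug B here) automatically draw more patients, while a drug like C drifts toward the "small r, drop it" rule on the slide.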

  46. Suppose 10 drugs, 200 patients • 9 drugs have mix of response rates 20% & 40%, 1 (“nugget”) has 60% • Standard 2-stage design finds nugget with probability < 70% (After 110 patients on average) • Adaptive design finds nugget with probability > 99% (After about 50 patients on average) • Adaptive also better at finding 40%

  47. Suppose 100 drugs, 2000 patients • 99 drugs have mix of response rates 20% & 40%, 1 (“nugget”) has 60% • Standard 2-stage design finds nugget with probability < 70% (After 1100 patients on average) • Adaptive design finds nugget with probability > 99% (After about 500 patients on average) • Adaptive also better at finding 40%
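A miniature version of the "nugget" scenario can be simulated directly. The sketch below simplifies the slide's design (no dropping rule, uniform priors, winner chosen by posterior mean), so its numbers are illustrative and will not reproduce the >99% figure:

```python
# Miniature "nugget" simulation: 10 drugs, five at 20%, four at 40%, one
# ("the nugget") at 60% response; 200 patients allocated in proportion to
# r = P(rate >= 0.20 | data).  Simplified relative to the slide's design.
import math, random

random.seed(3)

def post_prob_ge(p0, s, n):
    # P(Beta(1+s, 1+n-s) >= p0) via the binomial-tail identity
    return sum(math.comb(n + 1, k) * p0**k * (1 - p0)**(n + 1 - k)
               for k in range(s + 1))

TRUE_RATES = [0.2, 0.4] * 4 + [0.2, 0.6]   # index 9 is the nugget

def run_trial(n_patients=200):
    stats = [[0, 0] for _ in TRUE_RATES]    # [responses, patients] per drug
    for _ in range(n_patients):
        weights = [post_prob_ge(0.20, s, n) for s, n in stats]
        i = random.choices(range(len(TRUE_RATES)), weights=weights)[0]
        stats[i][1] += 1
        stats[i][0] += random.random() < TRUE_RATES[i]
    # apparent winner = highest posterior mean (s+1)/(n+2)
    return max(range(len(stats)),
               key=lambda j: (stats[j][0] + 1) / (stats[j][1] + 2))

found = sum(run_trial() == 9 for _ in range(200))
print(f"nugget picked as the top drug in {found}/200 simulated trials")
```

Even this stripped-down rule finds the 60% drug most of the time, which is the qualitative point of the slide's comparison with fixed two-stage designs.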
