
Decision-Theoretic Views on Switching Between Superiority and Non-Inferiority Testing.



Presentation Transcript


  1. Decision-Theoretic Views on Switching Between Superiority and Non-Inferiority Testing. Peter Westfall Director, Center for Advanced Analytics and Business Intelligence Texas Tech University

  2. Background • MCP2002 Conference in Bethesda, MD, August 2002; J. Biopharm. Stat. special issue, to appear 2003. • Articles: • Ng, T.-H., “Issues of simultaneous tests for non-inferiority and superiority” • Comment by G. Pennello • Comment by W. Maurer • Rejoinder by T.-H. Ng

  3. Ng’s Arguments • No problem with control of Type I errors in switching from N.I. to Sup. Tests • However, it seems “sloppy”: • Loss of power in replication when there are two options • It will allow “too many” drugs to be called “superior” that are not really superior.

  4. Westfall interjects for the next few slides • Why does switching allow control of Type I errors? Three views: • Closed Testing • Partitioning Principle • Confidence Intervals

  5. Closed Testing Method(s) • Form the closure of the family by including all intersection hypotheses. • Test every member of the closed family by a (suitable) α-level test. (Here, α refers to the comparison-wise error rate.) • A hypothesis can be rejected provided that • its corresponding test is significant at level α, and • every other hypothesis in the family that implies it is rejected by its α-level test. • Note: Closed testing is more powerful than (e.g.) Bonferroni.

  6. Control of FWE with Closed Tests Suppose H0j1, …, H0jm are all true (unknown to you which ones). You can reject one or more of these only when you reject the intersection H0j1 ∩ … ∩ H0jm. Thus, P(reject at least one of H0j1, …, H0jm | H0j1, …, H0jm all true) ≤ P(reject H0j1 ∩ … ∩ H0jm | H0j1, …, H0jm all true) = α.

  7. Closed Testing – Multiple Endpoints H0: δ1=δ2=δ3=δ4=0; H0: δ1=δ2=δ3=0; H0: δ1=δ2=δ4=0; H0: δ1=δ3=δ4=0; H0: δ2=δ3=δ4=0; H0: δ1=δ2=0; H0: δ1=δ3=0; H0: δ1=δ4=0; H0: δ2=δ3=0; H0: δ2=δ4=0; H0: δ3=δ4=0; H0: δ1=0 (p = 0.0121); H0: δ2=0 (p = 0.0142); H0: δ3=0 (p = 0.1986); H0: δ4=0 (p = 0.0191), where δj = mean difference (treatment − control) for endpoint j.
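The closure on this slide can be sketched in a few lines of Python: test every intersection hypothesis and reject an endpoint only when every intersection containing it is rejected. The slide leaves the intersection test open; the Bonferroni (min-p) test used below is one common choice, and the p-values are the four endpoint p-values from the slide.

```python
from itertools import combinations

# Per-endpoint p-values from the slide (endpoint 3 is the weak one).
pvals = {1: 0.0121, 2: 0.0142, 3: 0.1986, 4: 0.0191}

def closed_test(pvals, alpha=0.05):
    """Closed testing with Bonferroni (min-p) tests of each intersection.

    Endpoint j is rejected iff every subset S containing j satisfies
    min_{i in S} p_i <= alpha / |S|.
    """
    idx = list(pvals)
    return {
        j: all(
            min(pvals[i] for i in S) <= alpha / len(S)
            for r in range(1, len(idx) + 1)
            for S in combinations(idx, r)
            if j in S
        )
        for j in idx
    }

print(closed_test(pvals))  # endpoints 1, 2, 4 rejected; endpoint 3 is not
```

With Bonferroni intersection tests, this full enumeration collapses to Holm's step-down shortcut, which reaches the same conclusions without visiting all 15 subsets.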

  8. Closed Testing – Superiority and Non-Inferiority H0: δ ≤ −δ0 (null: inferior; alt: non-inferior) and H0: δ ≤ 0 (null: not superior; alt: superior). Intersection of the two nulls: H0: δ ≤ −δ0. Note: The intersection of the non-inferiority null and the superiority null is equal to the non-inferiority null.

  9. Why there is no penalty from the closed testing standpoint • Reject H0: δ ≤ −δ0 only if • H0: δ ≤ −δ0 is rejected, and • the intersection null, H0: δ ≤ −δ0 itself, is rejected. (no additional penalty) • Reject H0: δ ≤ 0 only if • H0: δ ≤ 0 is rejected, and • H0: δ ≤ −δ0 is rejected. (no additional penalty, since rejecting H0: δ ≤ 0 at level α already implies rejecting the less stringent H0: δ ≤ −δ0) So both can be tested at 0.05; the sequence is irrelevant.
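A minimal sketch of the two one-sided z-tests at the deck's 1.96 cutoff (δ0 expressed in standard-error units; the function name is illustrative):

```python
def switching_claims(z, delta0=3.24):
    """Both one-sided tests at the deck's 1.96 cutoff; delta0 in SE units.

    Non-inferiority null H0: delta <= -delta0 rejected when Z > 1.96 - delta0;
    superiority null    H0: delta <= 0       rejected when Z > 1.96.
    Closed testing adds no penalty: the intersection of the two nulls is the
    non-inferiority null, and rejecting superiority already implies rejecting it.
    """
    non_inferior = z > 1.96 - delta0
    superior = z > 1.96 and non_inferior   # 'and non_inferior' holds automatically
    return {"non_inferior": non_inferior, "superior": superior}
```

Note that no cutoff is adjusted: each test runs at its own level, in either order, exactly as the slide argues.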

  10. Why there is no need for multiplicity adjustment: The Partitioning View • Partitioning principle: • Partition the parameter space into disjoint subsets of interest. • Test each subset using an α-level test. • Since the parameter may lie in only one subset, no multiplicity adjustment is needed. • Benefits • Can (rarely) be more powerful than closure • Confidence set equivalence (invert the tests)

  11. Partitioning Null Sets • H01: δ ≤ −δ0 • H02: −δ0 < δ ≤ 0 You may test both without multiplicity adjustment, since only one can be true. The least favorable configuration (LFC) for H01 is δ = −δ0; the LFC for H02 is δ = 0. Exactly equivalent to closed testing.

  12. Confidence Interval Viewpoint • Construct a 1 − α lower confidence bound on δ, call it δL. • If δL > 0, conclude superiority. If δL > −δ0, conclude non-inferiority. The testing and interval approaches are essentially equivalent, with possible minor differences where tests and intervals do not coincide (e.g., binomial tests).
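The confidence-bound version of the same rule can be sketched as follows (one-sided 97.5% lower bound, matching the 1.96 cutoff used elsewhere in the deck; the function name and return labels are illustrative):

```python
from statistics import NormalDist

def conclude_from_bound(est, se, delta0, alpha=0.025):
    """Lower 1 - alpha confidence bound on delta, then read off the claim."""
    z = NormalDist().inv_cdf(1 - alpha)   # ~1.96 for alpha = 0.025
    d_lower = est - z * se                # one-sided lower confidence bound
    if d_lower > 0:
        return "superior"
    if d_lower > -delta0:
        return "non-inferior"
    return "no claim"
```

Because a single interval yields both conclusions, the "switching" question disappears entirely from this viewpoint.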

  13. Back to Ng: Ng’s Loss Function Approach • Ng does not disagree with the Type I error control; however, he is concerned from a decision-theoretic standpoint. • So he compares the “loss” when allowing testing of: • Only one, pre-defined hypothesis • Both hypotheses

  14. Ng’s Example • Situation 1: Company tests only one hypothesis, based on their preliminary assessment. • Situation 2: Company tests both hypotheses, regardless of preliminary assessment.

  15. Further Development of Ng • Out of the “next 2000” products, • 1000 are truly as efficacious as the active control (A.C.) • 1000 are truly superior to the A.C. • Suppose further that the company either • Makes perfect preliminary assessments, or • Makes correct assessments 80% of the time

  16. Perfect Classification; One Test Only

  17. 80% Correct Classification; One Test Only

  18. No Classification; Both Tests Performed Ng’s concern: “Too many” Type I errors.

  19. Westfall’s generalization of Ng • Three-decision problem: • Superiority • Non-Inferiority • NS (“Inferiority”) • Usual “test both” strategy: • Claim Sup if 1.96 < Z • Claim NonInf if 1.96 − δ0 < Z < 1.96 • Claim NS if Z < 1.96 − δ0
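The three-decision "test both" rule above, as a sketch (δ0 in standard-error units):

```python
def usual_rule(z, delta0=3.24):
    """'Test both' three-decision rule from the slide."""
    if z > 1.96:
        return "Sup"
    if z > 1.96 - delta0:   # lower cutoff is -1.28 when delta0 = 3.24
        return "NonInf"
    return "NS"
```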

  20. Further Development • Assume δ0 = 3.24 (⇒ 90% power to detect non-inferiority). • True states of nature • Inferiority: δ < −3.24 • Non-Inf: −3.24 < δ < 0 • Sup: 0 < δ
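The 90% power claim for δ0 = 3.24 checks out: with a one-sided cutoff of 1.96, the non-inferiority test rejects when Z > 1.96 − δ0, so at the least favorable δ = 0 the power is Φ(δ0 − 1.96). A quick check:

```python
from statistics import NormalDist

nd = NormalDist()
delta0 = nd.inv_cdf(0.975) + nd.inv_cdf(0.90)   # 1.96 + 1.28 = 3.24 (SE units)
power = 1 - nd.cdf(1.96 - delta0)               # power at delta = 0
print(round(delta0, 2), round(power, 2))        # 3.24 0.9
```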

  21. Loss Function [Loss matrix table (Claim × Nature); entries not preserved in the transcript.]

  22. Prior Distribution –Normal + Equivalence Spike

  23. Westfall’s Extension • Compare • Ng’s recommendation to “preclassify” drugs as Non-Inf or Sup, and • The “test both” recommendation. • Use % increase over minimum loss as a criterion. • The comparison will depend on the prior and the loss!

  24. Probability of Selecting the “NonInf” Test Probit function; the anchors are P(NonInf | δ=0) = p_s and P(NonInf | δ=3.24) = 1 − p_s. Ng suggests p_s = 0.80.
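The probit coefficients follow from the two anchors: a = Φ⁻¹(p_s) and b = (Φ⁻¹(1 − p_s) − a)/3.24. With p_s = 0.8 this reproduces the Φ(0.84 − 0.52δ) used in the baseline model on slide 26:

```python
from statistics import NormalDist

def probit_selection_coeffs(ps, delta_anchor=3.24):
    """Solve P(NonInf | delta=0) = ps and P(NonInf | delta=delta_anchor) = 1 - ps
    for the probit model P(NonInf | delta) = Phi(a + b * delta)."""
    inv = NormalDist().inv_cdf
    a = inv(ps)
    b = (inv(1 - ps) - a) / delta_anchor
    return a, b

a, b = probit_selection_coeffs(0.8)
print(round(a, 2), round(b, 2))  # 0.84 -0.52
```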

  25. Summary of Priors and Losses • δ ~ π·I(δ=0) + (1−π)·N(δ; μδ, σ²) (3 parameters) • P(Select NonInf | δ) = Φ(a + bδ), where a, b are determined by p_s (1 parameter; only for Ng) • Loss matrix (5 parameters) • Total: 3 or 4 prior parameters and 5 loss parameters. Not too bad!

  26. Baseline Model • δ ~ 0.2·I(δ=0) + 0.8·N(δ; 1, 4²) • P(Select NonInf | δ) = Φ(0.84 − 0.52δ) (p_s = 0.8) • Loss matrix: an attempt to quantify “loss to the patient population” [Loss matrix table (Claim × Nature); entries not preserved in the transcript.]

  27. Consequence of the Baseline Model • Optimal decisions (standard decision theory; see, e.g., Berger’s book): • Classify to NS when z < −1.47 • Classify to NonInf when −1.47 < z < 2.20 • Classify to Sup when 2.20 < z • Ordinary rule: cutpoints are −1.28 and 1.96
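The optimal rule comes from minimizing posterior expected loss. The deck's actual loss-matrix entries did not survive the transcript, so the sketch below plugs in hypothetical placeholder losses; it will not reproduce the −1.47/2.20 cutpoints exactly, but it shows the standard calculation: weight each state's loss by its posterior probability under the spike-plus-normal prior and pick the claim with the smallest expected loss.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def state(d, delta0=3.24):
    """True state of nature as on slide 20 (delta in SE units)."""
    if d < -delta0:
        return "Inf"
    if d <= 0:
        return "NonInf"
    return "Sup"

# Hypothetical loss matrix LOSS[claim][state] -- placeholder values only,
# NOT the deck's matrix (which was lost in transcription).
LOSS = {"NS":     {"Inf": 0, "NonInf": 1, "Sup": 2},
        "NonInf": {"Inf": 3, "NonInf": 0, "Sup": 1},
        "Sup":    {"Inf": 5, "NonInf": 3, "Sup": 0}}

def bayes_decision(z, p=0.2, mu=1.0, sd=4.0):
    """Minimize posterior expected loss under the baseline prior
    delta ~ p*I(delta=0) + (1-p)*N(mu, sd^2), with Z | delta ~ N(delta, 1)."""
    w = {"Inf": 0.0, "NonInf": 0.0, "Sup": 0.0}
    w["NonInf"] += p * phi(z)              # spike at delta = 0 (equivalence)
    lo, hi, n = -20.0, 20.0, 4001          # grid for the continuous part
    step = (hi - lo) / (n - 1)
    for i in range(n):
        d = lo + i * step
        w[state(d)] += (1 - p) * phi((d - mu) / sd) / sd * phi(z - d) * step
    return min(LOSS, key=lambda a: sum(LOSS[a][s] * w[s] for s in w))
```

Even with placeholder losses the Bayes rule remains a two-cutpoint rule in z (NS for small z, NonInf in the middle, Sup for large z), mirroring the structure of the slide's −1.47/2.20 solution.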

  28. Loss Matrix – Select and test only the NonInf hypothesis [Loss matrix table (Outcome × Nature); entries not preserved in the transcript.]

  29. Loss Matrix – Select and test only the Sup hypothesis [Loss matrix table (Outcome × Nature); entries not preserved in the transcript.]

  30. Deviation from Baseline: Effect of π

  31. Deviation from Baseline: Effect of μδ

  32. Deviation from Baseline: Effect of σ

  33. Deviation from Baseline: Effect of Correct Selection, p_s

  34. Changing the Loss Function Multiply the lower-left entry by c, c > 0. [Loss matrix table (Claim × Nature); entries not preserved in the transcript.]

  35. Deviation from Baseline: Effect of c

  36. Conclusions • The simultaneous testing procedure is generally more efficient (less loss) than Ng’s method, except: • When Type II errors are not costly • When a large % of products are equivalent • A sidelight: the optimal rule itself is worth considering: • Thresholds for Non-Inf are more liberal, which allows a more stringent definition of the non-inferiority margin.
