Decision-Theoretic Views on Switching Between Superiority and Non-Inferiority Testing.

Peter Westfall

Director, Center for Advanced Analytics and Business Intelligence

Texas Tech University

Background
  • MCP2002 Conference in Bethesda, MD, August 2002; J. Biopharm. Stat. special issue, to appear 2003.
  • Articles:
    • Ng,T.-H. “Issues of simultaneous tests for non-inferiority and superiority”
    • Comment by G. Pennello
    • Comment by W. Maurer
    • Rejoinder by T.-H. Ng
Ng’s Arguments
  • No problem with control of Type I errors in switching from N.I. to Sup. tests.
  • However, it seems “sloppy”:
    • Loss of power in replication when there are two options
    • It will allow “too many” drugs to be called “superior” that are not really superior.
Westfall interjects for the next few slides
  • Why does switching allow control of Type I errors? Three views:
    • Closed Testing
    • Partitioning Principle
    • Confidence Intervals
Closed Testing Method(s)
  • Form the closure of the family by including all intersection hypotheses.
  • Test every member of the closed family by a (suitable) α-level test. (Here, α refers to the comparison-wise error rate.)
  • A hypothesis can be rejected provided that
    • its corresponding test is significant at level α, and
    • every other hypothesis in the family that implies it is rejected by its α-level test.
  • Note: Closed testing is more powerful than (e.g.) Bonferroni.
Control of FWE with Closed Tests

Suppose H0j1, ..., H0jm are all true (unknown to you which ones).

You can reject one or more of these only when you reject the intersection H0j1 ∩ ... ∩ H0jm.

Thus,

P(reject at least one of H0j1, ..., H0jm | H0j1, ..., H0jm all true)
  ≤ P(reject H0j1 ∩ ... ∩ H0jm | H0j1, ..., H0jm all true) = α.


Closed Testing – Multiple Endpoints

The closure for four endpoints, where dj = mean difference, treatment − control, endpoint j:

  • H0: d1=d2=d3=d4=0
  • H0: d1=d2=d3=0, H0: d1=d2=d4=0, H0: d1=d3=d4=0, H0: d2=d3=d4=0
  • H0: d1=d2=0, H0: d1=d3=0, H0: d1=d4=0, H0: d2=d3=0, H0: d2=d4=0, H0: d3=d4=0
  • H0: d1=0 (p = 0.0121), H0: d2=0 (p = 0.0142), H0: d3=0 (p = 0.1986), H0: d4=0 (p = 0.0191)
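As a sketch, the closure above can be evaluated programmatically. Here each intersection hypothesis is tested with a Bonferroni (min-p) test — an illustrative assumption, since the slide does not specify which α-level intersection tests are used:

```python
from itertools import combinations

# Raw p-values for the four endpoint hypotheses H0: dj = 0 (from the slide)
p = {1: 0.0121, 2: 0.0142, 3: 0.1986, 4: 0.0191}
alpha = 0.05

def intersection_rejected(S):
    # Bonferroni (min-p) test of the intersection hypothesis over S:
    # reject if min p_j <= alpha / |S|  (an assumed choice of alpha-level test)
    return min(p[j] for j in S) <= alpha / len(S)

# Closed testing: reject H0: dj = 0 iff every intersection containing j is rejected
rejected = [
    j for j in p
    if all(intersection_rejected(S)
           for r in range(1, len(p) + 1)
           for S in combinations(p, r) if j in S)
]
print(rejected)  # [1, 2, 4]: endpoints 1, 2, and 4 rejected with FWE <= 0.05
```

With these p-values, endpoint 3 fails its own elementary test (0.1986 > 0.05), so only endpoints 1, 2, and 4 are rejected.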

Closed Testing – Superiority and Non-Inferiority

The two elementary hypotheses and their intersection:

  • H0: d ≤ -d0 (null: inferiority; alternative: non-inferiority)
  • H0: d ≤ 0 (null: not superior; alternative: superiority)
  • Intersection of the two nulls: H0: d ≤ -d0

Note: The intersection of the non-inferiority hypothesis and the superiority hypothesis is equal to the non-inferiority hypothesis, since d ≤ -d0 implies d ≤ 0.

Why there is no penalty from the closed testing standpoint
  • Reject H0: d ≤ -d0 only if
    • H0: d ≤ -d0 is rejected, and
    • H0: d ≤ -d0 (the intersection) is rejected. (no additional penalty)
  • Reject H0: d ≤ 0 only if
    • H0: d ≤ 0 is rejected, and
    • H0: d ≤ -d0 (the intersection) is rejected. (no additional penalty)

So both can be tested at 0.05; sequence is irrelevant.

Why there is no need for multiplicity adjustment: The Partitioning View
  • Partitioning principle:
    • Partition the parameter space into disjoint subsets of interest
    • Test each subset using an α-level test.
    • Since the parameter may lie in only one subset, no multiplicity adjustment is needed.
  • Benefits
    • Can (rarely) be more powerful than closure
    • Confidence set equivalence (invert the tests)
Partitioning Null Sets
  • H01: d ≤ -d0
  • H02: -d0 < d ≤ 0

You may test both without multiplicity adjustment, since only one can be true.

The least favorable configuration (LFC) for H01 is d = -d0; the LFC for H02 is d = 0.

Exactly equivalent to closed testing.

Confidence Interval Viewpoint
  • Construct a 1−α lower confidence bound on d; call it dL.
  • If dL > 0, conclude superiority. If dL > -d0, conclude non-inferiority.

The testing and interval approaches are essentially equivalent, with possible minor differences where tests and intervals do not coincide (e.g., binomial tests).
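A minimal sketch of the interval viewpoint, assuming the standardized estimate z has unit variance so the lower bound is just z minus a normal quantile (the function name and the 97.5% one-sided confidence level are illustrative assumptions):

```python
from statistics import NormalDist

def conclusions(z, d0=3.24, conf=0.975):
    # Lower confidence bound dL on d, in Z units (assumes unit variance)
    dL = z - NormalDist().inv_cdf(conf)  # inv_cdf(0.975) ~= 1.96
    out = []
    if dL > -d0:
        out.append("non-inferior")
    if dL > 0:
        out.append("superior")
    return out

print(conclusions(2.5))  # dL ~= 0.54 > 0: both non-inferior and superior
print(conclusions(1.0))  # dL ~= -0.96 > -3.24: non-inferior only
```

Note that superiority implies non-inferiority here, mirroring the nesting of the hypotheses.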

Back to Ng: Ng’s Loss Function Approach
  • Ng does not disagree with the Type I error control. However, he is concerned from a decision-theoretic standpoint.
  • So he compares the “Loss” when allowing testing of:
    • Only one, pre-defined hypothesis
    • Both hypotheses
Ng’s Example
  • Situation 1: The company tests only one hypothesis, based on its preliminary assessment.
  • Situation 2: The company tests both hypotheses, regardless of preliminary assessment.
Further Development of Ng
  • Out of the “next 2000” products,
    • 1000 are truly as efficacious as the active control (A.C.), and
    • 1000 are truly superior to the A.C.
  • Suppose further that the company either
    • Makes perfect preliminary assessments, or
    • Makes correct assessments 80% of the time
No Classification; Both Tests Performed

Ng’s concern: “Too many” Type I errors.

Westfall’s generalization of Ng
  • Three-decision problem:
    • Superiority
    • Non-Inferiority
    • NS (“Inferiority”)
  • Usual “test both” strategy:
    • Claim Sup if 1.96 < Z
    • Claim NonInf if 1.96 − d0 < Z < 1.96
    • Claim NS if Z < 1.96 − d0
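The “test both” strategy above can be written as a small decision rule (the function name is illustrative):

```python
def classify(z, d0=3.24, zcrit=1.96):
    # Usual "test both" strategy from the slide
    if z > zcrit:
        return "Sup"
    if z > zcrit - d0:          # zcrit - d0 = -1.28 when d0 = 3.24
        return "NonInf"
    return "NS"

print(classify(2.5), classify(0.0), classify(-2.0))  # Sup NonInf NS
```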
Further Development
  • Assume d0 = 3.24 (⇒ 90% power to detect non-inferiority).
  • True states of Nature:
    • Inferiority: d < −3.24
    • Non-Inf: −3.24 < d < 0
    • Sup: 0 < d
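The 3.24 value follows from the standard power calculation: the margin (in Z units) giving 90% power at one-sided level 0.025 is z_{0.975} + z_{0.90}. A quick check with the standard library:

```python
from statistics import NormalDist

nd = NormalDist()
# 90% power for the non-inferiority test at one-sided level 0.025:
# margin (in Z units) = z_{0.975} + z_{0.90} = 1.96 + 1.28
d0 = nd.inv_cdf(0.975) + nd.inv_cdf(0.90)
print(round(d0, 2))  # 3.24
```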
Loss Function

[Loss matrix shown on slide: rows indexed by the true state of Nature, columns by the Claim.]

Westfall’s Extension
  • Compare
    • Ng’s recommendation to “preclassify” drugs according to Non-Inf or Sup, and
    • The “test both” recommendation
  • Use % increase over minimum loss as the criterion.
  • The comparison will depend on prior and loss!
Probability of Selecting “NonInf” Test

Probit function; the anchors are

P(NonInf | d = 0) = ps;  P(NonInf | d = 3.24) = 1 − ps.

Ng suggests ps = .80.
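The two anchors pin down the probit’s intercept and slope. A quick check (variable names are illustrative) reproduces the coefficients used in the baseline model below:

```python
from statistics import NormalDist

nd = NormalDist()
ps, d_sup = 0.80, 3.24
a = nd.inv_cdf(ps)                    # solve Phi(a) = ps at d = 0
b = (nd.inv_cdf(1 - ps) - a) / d_sup  # solve Phi(a + b*3.24) = 1 - ps
print(round(a, 2), round(b, 2))       # 0.84 -0.52
```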

Summary of Priors and Losses
  • d ~ p·{I(d=0)} + (1−p)·N(d; μd, σ²) (3 parms)
  • P(Select NonInf | d) = Φ(a + b·d), where a, b are determined by ps (1 parm) (only for Ng)
  • Loss matrix (5 parms)
  • Total: 3 or 4 prior parameters and 5 loss parameters. Not too bad!!!
Baseline Model
  • d ~ (.2)·{I(d=0)} + (.8)·N(d; 1, 4²)
  • P(Select NonInf | d) = Φ(.84 − .52·d) (ps = .8)
  • Loss matrix: (an attempt to quantify “loss to the patient population”)

[Baseline loss matrix shown on slide: rows indexed by the true state of Nature, columns by the Claim.]
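A sketch of drawing from the baseline prior and evaluating the selection model (function names are illustrative; the loss-matrix values themselves are not recoverable from the transcript):

```python
import random
from statistics import NormalDist

def draw_d(rng):
    # Baseline prior: point mass 0.2 at d = 0, else N(1, 4^2)
    return 0.0 if rng.random() < 0.2 else rng.gauss(1.0, 4.0)

def p_select_noninf(d):
    # Ng's selection model at ps = .8: Phi(.84 - .52 d)
    return NormalDist().cdf(0.84 - 0.52 * d)

rng = random.Random(0)
sample = [draw_d(rng) for _ in range(10000)]
share_at_zero = sum(d == 0.0 for d in sample) / len(sample)

print(share_at_zero)            # ~ 0.2 (point-mass share of the prior)
print(p_select_noninf(0.0))     # ~ 0.80, matching the anchor at d = 0
print(p_select_noninf(3.24))    # ~ 0.20, matching the anchor at d = 3.24
```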

Consequence of Baseline Model
  • Optimal decisions (standard decision theory; see, e.g., Berger’s book):
    • Classify as NS when z < −1.47
    • Classify as NonInf when −1.47 < z < 2.20
    • Classify as Sup when 2.20 < z
  • Ordinary rule: cutpoints are −1.28 and 1.96
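The two rules can be compared directly; using the cutpoints quoted above, they disagree only on two narrow z-ranges (a sketch, with illustrative function names):

```python
def ordinary(z, d0=3.24):
    # Ordinary rule: cutpoints 1.96 - d0 = -1.28 and 1.96
    return "Sup" if z > 1.96 else ("NonInf" if z > 1.96 - d0 else "NS")

def optimal(z):
    # Baseline-model optimal rule: cutpoints -1.47 and 2.20
    return "Sup" if z > 2.20 else ("NonInf" if z > -1.47 else "NS")

# Disagreement regions under the baseline model:
#   -1.47 < z < -1.28: optimal -> NonInf, ordinary -> NS
#    1.96 < z <  2.20: optimal -> NonInf, ordinary -> Sup
print(optimal(2.0), ordinary(2.0))    # NonInf Sup
print(optimal(-1.4), ordinary(-1.4))  # NonInf NS
```

In both disagreement regions the optimal rule is more willing to claim non-inferiority, which is the "more liberal" behavior noted in the conclusions.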
Changing the Loss Function

Multiply the lower-left entry of the loss matrix by c, c > 0.

[Modified loss matrix shown on slide: rows indexed by the true state of Nature, columns by the Claim.]

Conclusions
  • The simultaneous testing procedure is generally more efficient (less loss) than Ng’s method, except:
    • when Type II errors are not costly, or
    • when a large % of products are equivalent.
  • A sidelight: the optimal rule itself is worth considering:
    • Thresholds for Non-Inf are more liberal, which allows a more stringent definition of the non-inferiority margin.