
OPC



Presentation Transcript


  1. OPC Koustenis, Breiter

  2. General Comments • Surrogate for Control Group • Benchmark for Minimally Acceptable Values • Not a Control Group • Driven by Historical Data • Requires Pooling of Different Investigations

  3. (continued) • Periodic Re-Evaluation and Updating of the OPCs • Policy not yet formalized • Specific Guidance on Methodology to Derive an OPC • Is urgently needed

  4. Bayesian Issues in Developing OPC • Objective means? • Derived from (conditionally?) exchangeable studies • Non-informative hyper-prior • For new Bayesian trials, should the OPC be expressed as a (presumably tight) posterior distribution rather than a fixed number? • E.g., logit(opc) ~ normal(?, ?), etc. (see the sketch below)
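A minimal sketch of what a distributional OPC might look like, assuming the historical studies supply only point-estimate rates (the rates below are invented for illustration); a real derivation would weight studies by size and model within-study sampling error:

```python
import numpy as np

# Hypothetical complication rates from historical, assumed-exchangeable studies
rates = np.array([0.08, 0.11, 0.09, 0.12, 0.10, 0.07])

logits = np.log(rates / (1 - rates))        # logit transform of each study rate
mu, sd = logits.mean(), logits.std(ddof=1)  # crude moment estimates

# logit(opc) ~ normal(mu, sd): a (presumably tight) distribution
# in place of a single fixed number
print(f"logit(opc) ~ N({mu:.3f}, {sd:.3f}^2)")
print(f"center on the probability scale: {1 / (1 + np.exp(-mu)):.3f}")
```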

  5. Does OPC Preempt an Informative Prior? • An “objective” informative prior would be derived from some of the same trials used to set the OPC. • This could be dealt with by computing the joint posterior distribution of opc and pnew. But this would be extremely burdensome to implement for anything but an “in-house” OPC (Breiter). • A non-informative prior might be “least burdensome.”

  6. Bayesian Endpoints • Superiority: • P(pnew < opc | New Data) • Non-inferiority: • P(pnew < opc + D | New Data) • or P(pnew < k·opc | New Data) (sketch below)
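With a fixed OPC and a conjugate Beta model for the new trial, each of these posterior probabilities is a one-line computation. A sketch, with invented counts and margins:

```python
from scipy.stats import beta

# Hypothetical new-device trial: x complications among n subjects
x, n = 9, 120
opc, D, k = 0.10, 0.03, 1.5  # fixed OPC, additive margin, multiplicative margin

# Non-informative Beta(1, 1) prior -> Beta(1 + x, 1 + n - x) posterior for pnew
post = beta(1 + x, 1 + n - x)

print("P(pnew < opc     | New Data) =", post.cdf(opc))      # superiority
print("P(pnew < opc + D | New Data) =", post.cdf(opc + D))  # non-inferiority, additive
print("P(pnew < k*opc   | New Data) =", post.cdf(k * opc))  # non-inferiority, multiplicative
```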

  7. OPC as an Agreed-upon Standard • Historical Data + ??? • Are evaluated to produce an agreed-upon OPC as a fixed number with no uncertainty. • Can I use some of these same data to develop an informative prior? • Probably yes, but it needs work. The issue is what claim will be made for a successful device trial.

  8. The prior depends on the Claim • Claim: “The complication rate (say) of the new device is not larger than (say) the median of comparable devices + D.” • If the new device is exchangeable with a subset of comparable devices then the “correct” prior for the new device is the joint distribution of (pnew, opc) prior to the new data. • If the new device is not exchangeable with any comparable devices, then a non-informative prior should be used.

  9. (continued) • Claim: “The complication rate of the new device is not greater than a given number (opc + D).” • The prior can be based on device trials that are considered exchangeable with the planned trial (e.g., “in-house”).

  10. Logic Chopping? • Not necessarily. Consider: • “The average male U of IA professor is taller than the average male professor.” vs. • “The average male U of IA professor is taller than 5’11”.” • How you or I arrived at the 5’11” is not relevant to the posterior probability.

  11. But perhaps that’s a bit disingenuous • The regulatory goal is clearly to set an OPC that will not permit the reduction of “average” safety or efficacy of a class of devices. • Of necessity, it has to be related to an estimate of some sort of “average.” • So a claim of superiority or non-inferiority to an opc is clearly made, at least indirectly, with reference to a “control.”

  12. Would It Make Sense to Express the OPC as a Predictive Distribution? • If the OPC is derived from a hierarchical analysis of exchangeable device trials, it would be possible to compute the predictive distribution of xnew. • Could inferiority (superiority) be defined as the observed xnew being below the 5th (above the 95th) percentile of the predictive distribution? (see the sketch below)
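A Monte Carlo sketch of that percentile rule. The hyperparameter draws below are stand-ins; in practice they would come from the posterior of the hierarchical model fit to the historical trials:

```python
import numpy as np

rng = np.random.default_rng(0)
n_new, n_draws = 150, 100_000  # planned trial size; Monte Carlo draws

# Stand-ins for posterior draws of the hyperparameters on the logit scale
# (a real analysis would take these from the hierarchical model)
mu = rng.normal(-2.2, 0.05, n_draws)             # mean logit event rate
sigma = np.abs(rng.normal(0.30, 0.05, n_draws))  # between-trial sd

# Predictive distribution of xnew: draw a trial-level rate, then a count
p_new = 1 / (1 + np.exp(-rng.normal(mu, sigma)))
x_new = rng.binomial(n_new, p_new)

lo, hi = np.percentile(x_new, [5, 95])
print(f"5th / 95th percentiles of predictive xnew: {lo:.0f} / {hi:.0f}")
# Slide's rule: observed xnew below `lo` -> inferior, above `hi` -> superior
# (directions as stated suit an efficacy count; they flip for complications)
```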

  13. Poolability Roseann White

  14. Binary Response Setup • i = arm (T or C), j = center, k = subject • Response variable: yijk ~ Bernoulli(pij) • logit(pCj) = gj, logit(pTj) = gj + t + dj • Primary: t > -D • Secondary: the dj’s are within clinical tolerance (simulation sketch below)
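To make the notation concrete, a sketch that simulates data from exactly this model (all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(1)
J, n_per_arm = 8, 40          # centers; subjects per arm per center
tau, D = -0.1, 0.5            # overall treatment effect t; NI margin D

g = rng.normal(-2.0, 0.4, J)  # gj: center effects (control-arm logits)
d = rng.normal(0.0, 0.2, J)   # dj: center-by-treatment interactions

inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
pC = inv_logit(g)             # logit(pCj) = gj
pT = inv_logit(g + tau + d)   # logit(pTj) = gj + t + dj

yC = rng.binomial(1, pC, size=(n_per_arm, J))  # yCjk ~ Bernoulli(pCj)
yT = rng.binomial(1, pT, size=(n_per_arm, J))  # yTjk ~ Bernoulli(pTj)
```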

  15. Specify Secondary Goal? • “If the difference between the treatment group varies more than twice the non-inferiority margin [D]” • Possible interpretations: • Random C×T interaction: sd of the dj’s < 2D • Multiple comparisons: max |dj – dk| < 2D

  16. (continued) • “Modify ... Liu et al. ...” • Center j is non-inferior: t + dj > -k·t • All centers must be non-inferior? • ID the inferior centers?

  17. Why Bootstrap Resample? • To increase the n of subjects in clusters? --- Probably invalid • To generate a better approximation of the null sampling distribution? --- OK, but what are the details? Do you combine the two arms and resample? • Why not use random-effects GLIMMIX if you want to stick to frequentist methods? (one reading of the resampling question is sketched below)
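One possible reading of “combine the two arms and resample”, sketched against the simulated yC, yT arrays above: pool the arms within each center (the null of no treatment effect) and rebuild the sampling distribution of the overall T-minus-C difference. Whether this matches the actual proposal is exactly the detail the slide asks for:

```python
import numpy as np

def null_bootstrap(yC, yT, n_boot=2000, seed=2):
    """Pooled within-center bootstrap: under H0 both arms share one
    outcome distribution, so each resampled arm is drawn from the pool."""
    rng = np.random.default_rng(seed)
    n, J = yC.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        diffs = np.empty(J)
        for j in range(J):
            pool = np.concatenate([yC[:, j], yT[:, j]])
            t_star = rng.choice(pool, n, replace=True)
            c_star = rng.choice(pool, n, replace=True)
            diffs[j] = t_star.mean() - c_star.mean()
        stats[b] = diffs.mean()  # overall T-minus-C difference under H0
    return stats

# Compare the observed statistic to the null distribution's percentiles
null = null_bootstrap(yC, yT)
obs = (yT.mean(axis=0) - yC.mean(axis=0)).mean()
print("observed:", obs, " null 5th percentile:", np.percentile(null, 5))
```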

  18. Bayesian Analysis • Ad hoc pooling is not necessary • Can produce the posterior distribution of any function of the parameters. • Can use non-informative hyper-priors, so is “objective” = data-driven. • Will have the best frequentist operating characteristics (which could be calculated by simulation).

  19. Bayesian Setup • Define tj = t + dj, so that logit(pTj) = gj + tj • (gj, tj) ~ iid N((mg, mt), S) • (m, S) have near-non-informative priors • Primary goal: P(mt > -D | Data) (or t-bar) • Secondary goal(s): ?? • P(st < 2D | Data) (or st) • For each (j, j’): P(|tj – tj’| < 2D | Data) • For each j: P(|tj – mt| < 2D | Data) • For each j: P(tj > -k·mt | Data) (model sketch below)
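A sketch of this model in PyMC, fit to per-center counts aggregated from the simulation above. One simplification is assumed here: independent normal priors for the gj and tj in place of the bivariate N((mg, mt), S); an LKJ prior on S could restore the correlation:

```python
import numpy as np
import pymc as pm

# Aggregate the simulated subject-level arrays to per-center counts
J = yC.shape[1]
nC = nT = np.full(J, yC.shape[0])
sC, sT = yC.sum(axis=0), yT.sum(axis=0)
D = 0.5  # non-inferiority margin on the logit scale

with pm.Model():
    mu_g = pm.Normal("mu_g", 0.0, 10.0)      # mg, near-non-informative
    mu_t = pm.Normal("mu_t", 0.0, 10.0)      # mt
    sd_g = pm.HalfNormal("sd_g", 5.0)
    sd_t = pm.HalfNormal("sd_t", 5.0)        # st

    g = pm.Normal("g", mu_g, sd_g, shape=J)  # gj
    t = pm.Normal("t", mu_t, sd_t, shape=J)  # tj = t + dj

    pm.Binomial("obsC", n=nC, p=pm.math.invlogit(g), observed=sC)
    pm.Binomial("obsT", n=nT, p=pm.math.invlogit(g + t), observed=sT)

    idata = pm.sample(2000, tune=1000, chains=4, random_seed=3)

# Primary goal: P(mt > -D | Data)
mt_draws = idata.posterior["mu_t"].values.ravel()
print("P(mt > -D | Data) =", (mt_draws > -D).mean())

# One secondary goal: P(st < 2D | Data)
st_draws = idata.posterior["sd_t"].values.ravel()
print("P(st < 2D | Data) =", (st_draws < 2 * D).mean())
```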

  20. Bayes Could Use the Original Metric • pCj = 1/(1+exp(-gj)), pTj = 1/(1+exp(-gj-tj)) • pC = 1/(1+exp(-mg)), pT = 1/(1+exp(-mg-mt)) • Primary: P(pT – pC > D | Data) • Secondary: • e.g., P(pTj – pCj > k·(pT – pC) | Data) (continuation of the sketch above)
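Continuing the PyMC sketch, the back-transformation to the probability scale is just arithmetic on the posterior draws (the margin value here is hypothetical):

```python
import numpy as np

post = idata.posterior  # from the PyMC sketch above
mg = post["mu_g"].values.ravel()
mt = post["mu_t"].values.ravel()

pC = 1.0 / (1.0 + np.exp(-mg))       # population-level control rate
pT = 1.0 / (1.0 + np.exp(-mg - mt))  # population-level treatment rate

D_prob = 0.05  # hypothetical margin on the probability scale
print("P(pT - pC > D | Data) =", (pT - pC > D_prob).mean())
```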
