
Establishment of acceptance criteria for comparability studies.






Presentation Transcript


  1. Establishment of acceptance criteria for comparability studies. Richard K Burdick Elion Labs, a division of KBI Biopharma, Inc. Midwest Biopharmaceutical Statistics Workshop May 14 – 16, 2018

  2. Objective • Demonstrate an approach for selecting comparability strategies and acceptance criteria. • Employ the approach in the context of a site transfer: old (O) site versus new (N) site. • The objective is to compare O and N in some definable manner.

  3. Criteria for Selecting a Comparability Strategy • Protect patients from the consequences of concluding comparability when products are not comparable. • Protect sponsors from the consequences of concluding lack of comparability when products are in fact comparable (these consequences include lack of patient access to treatments). • Incentivize sponsors to acquire process knowledge concerning N.

  4. Notation • μO is the mean of the old process • σO is the standard deviation of the old process • μN is the mean of the new process • σN is the standard deviation of the new process

  5. Example • Because we have a long history with O, assume μO and σO are known. • In particular, μO = 100 and σO = 10. • The process has specifications of LSL = 70 and USL = 130, providing an out-of-specification (OOS) rate of 0.0027. • Assume that analytical method error is the same for both sites, so any difference between σO and σN is due to process variation.
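The stated OOS rate of 0.0027 follows directly from a normal model with the given parameters. A minimal check (not part of the slides), using only the Python standard library:

```python
# Verify the old process's out-of-specification (OOS) rate,
# assuming individual values are normally distributed.
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_O, sigma_O = 100.0, 10.0   # known old-process parameters from the slide
LSL, USL = 70.0, 130.0        # specification limits

oos = norm_cdf((LSL - mu_O) / sigma_O) + (1.0 - norm_cdf((USL - mu_O) / sigma_O))
print(round(oos, 4))  # 0.0027, the familiar two-sided 3-sigma rate
```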

  6. Four Proposed Comparability Strategies • Equivalence test of means with equivalence acceptance criterion equal to 1.5σO. • Noninferiority of process capability, with the alternative stating that the out-of-specification rate is less than 0.0668 (aligning with a 1.5σO shift in O).

  7. Four Proposed Comparability Strategies (continued) • Heuristic rule: a 90% two-sided prediction interval (PI) computed with N data must fall within a 2.5σO range around μO. • Heuristic rule: all individual N values must fall in a 2.15σO quality range around μO (QR). • All strategies provide a 0.05 type 1 error rate with nN = 10 runs.
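The 0.0668 noninferiority margin on slide 6 can be reproduced from the same normal model by shifting the mean 1.5σO and recomputing the OOS rate. A sketch (the upward shift is assumed; a downward shift gives the same value by symmetry):

```python
# Recompute the OOS rate after a 1.5*sigma_O mean shift in O.
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_O, sigma_O = 100.0, 10.0
LSL, USL = 70.0, 130.0

mu_shifted = mu_O + 1.5 * sigma_O   # 115
oos_shifted = (norm_cdf((LSL - mu_shifted) / sigma_O)
               + (1.0 - norm_cdf((USL - mu_shifted) / sigma_O)))
print(round(oos_shifted, 4))  # 0.0668, matching the stated margin
```

Nearly all of this rate comes from the upper tail, P(Z > 1.5) ≈ 0.0668; the lower-tail contribution at 4.5σ is negligible.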

  8. Populations of N (μO = 100, σO = 10) [Slide figure: candidate populations for N.] Patients are at risk if Designs 5-6 "Pass"; the sponsor is at risk if Designs 1-3 "Fail".

  9. Protect patients from consequences of concluding comparability when products are not comparable. • This goal requires an ability to ensure a small probability of demonstrating comparability when product differences are of practical importance. • The two statistical tests (Equiv, OOS) control this probability by defining the type 1 error to be 0.05. • The two heuristic tests (PI, QR) are calibrated to achieve the same type 1 error rate under Design 4.
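The QR calibration can be checked in closed form, under two assumptions the slides do not spell out: Design 4 is taken to be the boundary case of a 1.5σO mean shift with σN = σO, and the "2.15σO quality range around μO" is read as μO ± 2.15σO. Under those assumptions the pass probability is the per-value in-range probability raised to the power nN:

```python
# Closed-form check of the QR rule's calibration at an assumed Design 4
# (1.5*sigma_O mean shift, sigma_N = sigma_O; range read as mu_O +/- 2.15*sigma_O).
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_O, sigma_O, n_N = 100.0, 10.0, 10
lo, hi = mu_O - 2.15 * sigma_O, mu_O + 2.15 * sigma_O  # quality range
mu_N = mu_O + 1.5 * sigma_O                            # assumed Design 4 mean

# Probability one N value lands in the range, then that all n_N of them do.
p_one = norm_cdf((hi - mu_N) / sigma_O) - norm_cdf((lo - mu_N) / sigma_O)
p_pass = p_one ** n_N
print(round(p_pass, 3))  # close to the 0.05 target type 1 error
```

That the result lands at roughly 0.05 is consistent with the slide's claim that the 2.15σO factor was chosen to match the statistical tests' type 1 error.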

  10. Populations of N The probability of passing in Designs 4-6 should be less than or equal to 0.05 to satisfy Criterion 1.

  11. Control of Patient Risk All methods are calibrated at this point. The equivalence test of means does not satisfy Criterion 1.

  12. Protect sponsors from consequences of concluding lack of comparability when products are in fact comparable. • This criterion requires an ability to ensure a large probability of demonstrating comparability when differences in products are of no practical importance.

  13. Populations of N The greater the probability of passing in Designs 1-3, the better the procedure relative to Criterion 2.

  14. Control of Sponsor Risk • Only OOS uniformly increases the probability of passing as OOSN decreases, and thus satisfies Criterion 2. • There are large differences among all strategies but OOS when N is most capable.

  15. 3. Incentivize sponsors to acquire process knowledge concerning N. • Increase the probability of passing for a given type 1 error and acceptance criterion by increasing the sample size of N. • To demonstrate, the N sample size is increased to 15. • The PI is recalibrated from 90% to 88% to maintain a 0.05 risk to the patient. • The QR is recalibrated from a range of 2.15σO around μO to a range of 2.4σO around μO to maintain a 0.05 risk to the patient.
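The QR recalibration can be sanity-checked in closed form, again under two assumptions not stated on the slide: the calibration point (Design 4) is a 1.5σO mean shift with σN = σO, and a "kσO range around μO" means μO ± kσO. Widening the range from 2.15σO to 2.4σO holds the pass probability near 0.05 as nN grows from 10 to 15:

```python
# Check that the widened QR range keeps ~0.05 pass probability at the
# assumed calibration point (1.5*sigma_O mean shift, sigma_N = sigma_O).
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_O, sigma_O = 100.0, 10.0
mu_N = mu_O + 1.5 * sigma_O   # assumed Design 4 mean

def qr_pass_prob(k, n):
    """P(all n values fall in mu_O +/- k*sigma_O) when N sits at mu_N."""
    p_one = (norm_cdf((mu_O + k * sigma_O - mu_N) / sigma_O)
             - norm_cdf((mu_O - k * sigma_O - mu_N) / sigma_O))
    return p_one ** n

print(round(qr_pass_prob(2.15, 10), 3))  # ~0.05 with the original rule
print(round(qr_pass_prob(2.40, 15), 3))  # ~0.05 after recalibration
```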

  16. Populations of N To satisfy Criterion 3, the probability of passing in Designs 1-3 should increase as nN increases (with the probability of passing Design 4 held at 0.05).

  17. Incentivize Sponsors All strategies satisfy Criterion 3.

  18. Summary of Criteria

  19. Elephant in the Room • The practicality of sample sizes needs to be considered relative to the criticality of the attribute. • Scientific relevance of the acceptance criterion (where possible) is always desired. • If power is too low for practical sample sizes, either the acceptance criterion must be loosened or the type 1 error increased.

  20. Increasing Type 1 Error It would seem a better statement of risk to keep the definition of criticality fixed and modify the type 1 error rate.

  21. Need for Calibration of Heuristic Rules • Often heuristic rules are not properly calibrated. • For example, suppose nN = 3 individual values are required to fall in the 2.15σO range (the same rule as for nN = 10, but not recalibrated).
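The cost of skipping recalibration is easy to quantify in closed form, under the same assumptions used earlier (a calibration point of a 1.5σO mean shift with σN = σO, and the quality range read as μO ± 2.15σO): dropping from 10 values to 3 inflates the probability of wrongly passing roughly eightfold.

```python
# Pass probability of the uncalibrated QR rule at the assumed calibration
# point (1.5*sigma_O mean shift, sigma_N = sigma_O, range mu_O +/- 2.15*sigma_O).
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_O, sigma_O = 100.0, 10.0
mu_N = mu_O + 1.5 * sigma_O
lo, hi = mu_O - 2.15 * sigma_O, mu_O + 2.15 * sigma_O

p_one = norm_cdf((hi - mu_N) / sigma_O) - norm_cdf((lo - mu_N) / sigma_O)
for n in (10, 3):
    print(n, round(p_one ** n, 3))
# n = 10 gives ~0.05; n = 3 inflates the type 1 error to ~0.41
```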

  22. Conclusions • The proposed criteria can be used for evaluation of both statistical tests and heuristic rules. • If process capability is important, approaches must consider both location and spread; equivalence of means does not appear to be sufficient. • Calibration of heuristic rules is needed in order to protect patients. • Bayesian intervals and other procedures that incorporate both location and spread of the distributions can be considered. • E.g., distribution overlap as discussed in Inman and Bradley (1989) or the proportion of similar response as discussed in Giacoletti and Heyse (2011) could be used to form a statistical test.
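As one illustration of the overlap idea: for two normal densities with a common standard deviation σ and mean difference δ, the overlapping coefficient has the closed form OVL = 2Φ(−|δ|/(2σ)). A sketch applying it to this talk's example at the 1.5σO equivalence boundary (an illustration, not a method from the slides):

```python
# Overlapping coefficient of N(mu, sigma) vs N(mu + delta, sigma),
# evaluated at the example's 1.5*sigma_O equivalence boundary.
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma = 10.0
delta = 1.5 * sigma   # mean difference at the equivalence boundary

ovl = 2.0 * norm_cdf(-abs(delta) / (2.0 * sigma))
print(round(ovl, 3))  # the two densities share ~45% of their area
```

A comparability test built on OVL would then need an acceptance threshold for this coefficient, calibrated against patient risk just as the heuristic rules above were.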

  23. References • Giacoletti KED, Heyse J (2011) Using proportion of similar response to evaluate correlates of protection for vaccine efficacy. Statistical Methods in Medical Research, DOI: 10.1177/0962280211416299, published online August 2011. • Inman HF, Bradley EL Jr (1989) The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Communications in Statistics - Theory and Methods, 18(10):3851-3874.
