
Statistical Tools, Performance Verification

Presented by: Karen S. Ginsbury. For: IFF, February 2011.


Presentation Transcript


  1. Statistical Tools, Performance Verification Presented by: Karen S. Ginsbury For: IFF February 2011

  2. Tools Statistics is a science pertaining to the collection, analysis, interpretation or explanation, and presentation of valuable / useful data, where the decision regarding what is collected is made up front. Process Validation uses statistics, sampling, and testing to predict process variability / uncertainty

  3. Definitions Statistics is a mathematical science pertaining to the collection, analysis, interpretation or explanation, and presentation of data Statisticians improve the quality of data with the design of experiments and survey sampling Statistics provides tools for prediction and forecasting using data and models

  4. What is a Confidence Level? Confidence level is the likelihood, expressed as a percentage, that the results of a test are real and repeatable, and not just random. The idea is based on the concept of the "normal distribution curve," which shows that variation in almost any data (such as the heights of all fourth-graders, or the amount of rainfall in January) tends to cluster around an average value, with relatively few individual measurements at the extremes. A confidence level of 50% means there is a 50:50 chance that your result is WRONG; 75% means that one in four results will be WRONG. In the pharma industry we usually want a minimum confidence level of 95%, and that helps in selecting a sampling plan
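To make the idea concrete, the sketch below computes a two-sided confidence interval for a sample mean using the normal approximation. The data, the function name, and the z-values are illustrative assumptions, not figures from the slides:

```python
import math

def confidence_interval(data, z=1.96):
    """Two-sided confidence interval for the mean (normal approximation).
    z = 1.96 gives ~95% confidence; a smaller z (e.g. 0.674 for ~50%)
    gives a narrower interval you can trust much less."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Ten hypothetical assay results clustered around 100
data = [99.1, 100.4, 98.7, 101.2, 100.0, 99.5, 100.8, 99.9, 100.3, 99.6]
low, high = confidence_interval(data)
print(f"95% CI for the mean: {low:.2f} .. {high:.2f}")
```

Raising the confidence level widens the interval; that trade-off between confidence and precision is what ultimately drives the size of a sampling plan.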

  5. Probability Probability, or chance, is a way of expressing knowledge or belief that an event will occur or has occurred Statistics is a means of assessing or predicting probability At the process validation stage of product development we have a lot of uncertainty and wish to increase the probability of success through process understanding

  6. Statistical Based Sampling Plan: From the 2008 Guide Protocol should address the sampling plan including sampling points, number of samples, and the frequency of sampling for each unit operation and attribute The number of samples should be adequate to provide sufficient statistical confidence of quality both within a batch and between batches The confidence level selected can be based on risk analysis as it relates to the particular attribute under examination Sampling during this stage should be more extensive than is typical during routine production

  7. Acceptance Criteria Criteria that provide for a rational conclusion of whether the process consistently produces quality products. The criteria should include: A description of the statistical methods to be used in analyzing all collected data (e.g., statistical metrics defining both intra-batch and inter-batch variability)

  8. Acceptance Criteria - Variability • Critical Quality Attributes (Product Specification) • Critical Process Parameters • Trends • Inter- and Intra-batch variability: • Paired/unpaired t-test • Shewhart control charts • Upper and Lower control limits • Process capability

  9. Process Performance Qualification Typically will include: • Commercial batches manufactured with the qualified utilities, facilities, production equipment, approved components, master production and control record, and trained production personnel in place. • Usually run at target/nominal operating parameters within proven acceptable range or design space. • Extensively tested, i.e., combination of samples analytically tested and increased process control monitoring beyond typical routine QC levels.

  10. Process Performance qualification (PPQ) • A series of tests which confirm that the system or process does perform consistently and predictably and results meet predetermined specifications • PQ documents that: • processes operate as required at the normal operating limits of critical parameters • systems operate consistently and reliably • appropriate challenges are employed

  11. The Guide: Continued Process Verification An ongoing program to collect and analyze product and process data that relate to product quality must be established (§ 211.180(e)). Data collected should include relevant process trends and quality of incoming materials or components, in-process material, and finished products. The data should be statistically trended and reviewed by trained personnel. The information collected should verify that the critical quality attributes are being controlled throughout the process

  12. The Guide: Continued Process Verification We recommend that a statistician or person with adequate training in statistical process control techniques develop the data collection plan and statistical methods and procedures used in measuring and evaluating process stability and process capability

  13. The Guide: Continued Process Verification Procedures should describe how trending and calculations are to be performed. Procedures should guard against overreaction to individual events as well as against failure to detect process drift. Production data should be collected to evaluate process stability and capability. The quality unit should review this information. If done properly, these efforts can identify variability in the process and/or product; this information can be used to alert the manufacturer that the process should be improved

  14. The Guide: Continued Process Verification Good process design and development should anticipate significant sources of variability and establish appropriate detection, control, and/or mitigation strategies, alert and action limits. However, a process is likely to encounter sources of variation that were not previously detected or to which the process was not previously exposed. Many tools and techniques, some statistical and others more qualitative, can be used to detect variation, characterize it, and determine the root cause. We recommend that the manufacturer use quantitative, statistical methods whenever feasible

  15. The Guide: Continued Process Verification We recommend that the manufacturer scrutinize intra-batch as well as inter-batch variation as part of a comprehensive continued process verification program. We recommend continued monitoring and/or sampling at the level established during the process qualification stage until sufficient data are available to generate significant variability estimates. Sampling and/or monitoring should then be adjusted to a statistically appropriate and representative level, with process variability periodically assessed

  16. Practical Implications and Applications in Process Validation • Process average and process variability estimates used for determination of appropriate specifications • Average: • how many batches? • Moving average, or determined once and that's it?

  17. Practical Implications and Applications in Process Validation • Suitable Statistical Methods: • Sample size (how many units from a total population) needs to be tied in with confidence level • Representative sample: what do we mean? • beginning / middle / end? • n + 1 • MIL STD
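One common way to tie sample size to confidence level for attribute (pass/fail) sampling is the zero-failure "success-run" formula n = ln(1 − C) / ln(R): the number of consecutive conforming units needed to claim reliability R with confidence C. The function below is an illustrative sketch of that formula, not a plan from the slides or from the MIL-STD tables:

```python
import math

def success_run_sample_size(confidence, reliability):
    """Zero-failure (success-run) sample size: how many units must all
    pass to claim `reliability` with `confidence`.
    n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# 95% confidence that at least 95% of units conform
print(success_run_sample_size(0.95, 0.95))  # -> 59
```

Note how quickly n grows as the reliability claim tightens: demonstrating 99% reliability at the same 95% confidence requires 299 zero-failure samples.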

  18. Preparing for PPQ • Activities and studies resulting in product understanding should be documented • Documentation should reflect the basis for decisions made about the process • e.g. manufacturers should document the variables studied for a unit operation and the rationale for (the controls exercised over) those variables identified as significant • This information can be used during PQ

  19. Process Qualification Questions Does running three batches of a product or three process runs mean that the process is valid? …Does it mean the process is effective? Can you explain why what you do provides assurance that the process will produce the same result each time it is run? …or that the process is under control?

  20. Process Qualification Questions What are the process variables? …the things that will cause the process outcome to vary. Are these variables understood and adequately controlled?

  21. Transition: PQ to Ongoing Verification Prepare a summary report for PQ. The report is the basis for the ongoing protocol. Risk assessment focuses on "uncertainty" from stages 1 and 2

  22. Transition: PQ to Ongoing Verification • Select CQAs and CPPs for increased scrutiny: • CQAs = tests • CPPs = data analysis • Statistician: • # of runs • # of samples • Confidence level

  23. Why is Continued Process Verification Needed? How much do we know after we have completed the Performance Qualification lots? • Answer: Only a fraction of what we will know over the course of time.

  24. Continued Process Verification(CPV) • On-going monitoring of the commercial process to demonstrate that it remains in a state of control • Systems for detecting unplanned departures from the process are essential to accomplish this goal

  25. CPV approach Develop a rationalized continued process verification strategy The extent of verification and the extent of documentation should be based on risk to product quality and patient safety, as well as the complexity and novelty of the manufacturing system

  26. Variation Detection: Sources of Feedback Goal: Improve and Optimize the Process • Complaints • Out-of-specification reports • Process deviation reports • Process trending

  27. Variation Detection: Sources of Feedback • Batch records • Incoming raw material records • Equipment and utility monitoring • Production line operators/Quality staff interviews • Operator error trending

  28. FDA Guidance on Sampling Continued monitoring and/or sampling at the level established during the process qualification stage until sufficient data is available to generate significant variability estimates. Once the variability is known, sampling and/or monitoring should be adjusted to a statistically appropriate and representative level. Process variability should be periodically assessed and sampling and/or monitoring adjusted accordingly.

  29. Maintaining Equipment Qualification Once established, equipment qualification status must be maintained through routine monitoring, maintenance, and calibration procedures and schedules (21 CFR part 211, subparts C and D). The data should be assessed periodically to determine whether re-qualification should be performed and the extent of that re-qualification.

  30. Process Changes • Data gathered during continued process verification might suggest ways to improve and/or optimize the process by altering some aspect of the process or product such as: • the operating conditions (ranges and set-points) • process controls • manufacturing instructions • component, or in-process material characteristics

  31. Process Changes • If so, document: • A description of the planned change, • a well-justified rationale for the change, • an implementation plan, and • quality unit approval before implementation • Depending on the significance to product quality, modifications may warrant performing additional process design and process qualification activities.

  32. CPPs and CQAs and Statistics Analyze the data for CPPs for (representative) batches and tie in with data for CQAs. It is about converting data into knowledge, i.e. how do CPPs affect CQAs (if at all)?

  33. How it works Very basic statistics: Average (mean): x̄ = (x₁ + x₂ + … + xₙ) / n Standard deviation: s = √( Σ(xᵢ − x̄)² / (n − 1) )

  34. How it works Very basic statistics: Specification vs Control Upper Specification Limit (USL) Lower Specification Limit (LSL) Upper Control Limit (UCL) Lower Control Limit (LCL)
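To make the specification vs. control distinction concrete: specification limits (USL/LSL) come from the registered product requirements, while control limits (UCL/LCL) come from the observed process data, conventionally set at the mean ± 3 standard deviations. A minimal sketch, in which the tablet-weight numbers and the simple ±3s shortcut are illustrative assumptions (production charts usually estimate sigma from moving ranges or rational subgroups):

```python
import math

def control_limits(values, k=3):
    """Shewhart-style control limits: mean +/- k sample standard deviations.
    A textbook simplification for illustration only."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return mean - k * s, mean + k * s

# Hypothetical tablet weights (mg); the spec limits might be, say, 95-105 mg
weights = [100.2, 99.8, 100.5, 99.6, 100.1, 100.4, 99.9, 100.0]
lcl, ucl = control_limits(weights)
print(f"LCL = {lcl:.2f} mg, UCL = {ucl:.2f} mg")
```

A process can sit comfortably inside its control limits yet still violate specification limits (or vice versa); comparing the two widths is exactly what the capability indices on the next slides do.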

  35. Process Capability Process capability compares the output of an in-control process to the specification limits by using capability indices. The comparison is made by forming the ratio of the spread between the process specifications (the specification "width") to the spread of the process values, as measured by 6 process standard deviation units (the process "width") • Can it work? • Will it work? • Will it always work?


  37. Process Capability

  38. Process Capability Most capability index estimates are valid only if the sample size used is "large enough"; large enough is generally thought to be about 50 independent data values
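The ratio described above, spec width over six process standard deviations, is the Cp index; Cpk is its off-center variant, which penalizes a mean that drifts toward one specification limit. This small sketch computes both (the numbers are invented for illustration):

```python
def cp_cpk(mean, std, lsl, usl):
    """Cp: specification width over process width (6 sigma).
    Cpk: the same ratio, penalized when the mean is off-center."""
    cp = (usl - lsl) / (6 * std)
    cpk = min(usl - mean, mean - lsl) / (3 * std)
    return cp, cpk

# Centered process: Cp and Cpk agree
cp, cpk = cp_cpk(mean=100.0, std=1.0, lsl=95.0, usl=105.0)

# Off-center process: same spread, lower Cpk
cp2, cpk2 = cp_cpk(mean=102.0, std=1.0, lsl=95.0, usl=105.0)
```

A rule-of-thumb reading: Cp answers "can it work?" (is the spread narrow enough?), while Cpk answers "will it work?" (is the process also centered?).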

  39. Process Capability

  40. Process Capability The idea is to pull your process values in tighter around the mean and to have the mean centered between the USL and LSL, i.e. REDUCE VARIABILITY

  41. Example – Comments ??

  42. Selection of Methods • A description of the statistical methods to be used in analyzing all collected data (e.g., statistical metrics defining both intra-batch and inter-batch variability) • Various choices: • t-test (paired or unpaired) • Shewhart control charts Now it is worth speaking to a statistician. Select the method upfront and include it in the protocol
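For the unpaired t-test option, a bare-bones pooled-variance t statistic for comparing two batch means can be sketched as below. The batch data are invented for illustration, and a real analysis would also compute a p-value (e.g. with scipy.stats.ttest_ind) rather than eyeballing a critical value:

```python
import math

def two_sample_t(a, b):
    """Unpaired (pooled-variance) t statistic for two sample means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical assay values (% label claim) from two batches
batch1 = [99.8, 100.1, 99.9, 100.2, 100.0]
batch2 = [100.6, 100.9, 100.7, 101.0, 100.8]
t = two_sample_t(batch1, batch2)
# |t| far above ~2.31 (the 5% two-sided critical value for 8 df)
# would indicate a real inter-batch difference in means
```

Whether a paired or unpaired form applies, and what difference is practically meaningful, is exactly the kind of upfront decision worth taking to a statistician before the protocol is written.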

  43. Using Statisticians • FDA says there has always been a requirement for the use of statistics in pharmaceutical manufacturing and control, and especially in process validation • We don't necessarily have to go to levels where we need statisticians • Probably every company should have a consultant statistician on hand for tricky questions

  44. How Much Sampling and Statistics We recommend continued monitoring and/or sampling at the level established during the process qualification stage until sufficient data are available to generate significant variability estimates. The Product Control Strategy should establish appropriate sampling levels, and process validation should demonstrate that it works. How much sampling is going to be expected?

  45. What About R&D Data? Will data from pilot runs be acceptable as constituting some of the process validation data? Would that mean that in some cases, if adequate scientific evidence is available, fewer than three commercial batches, or concurrently released batches, might be acceptable?

  46. Continuous Improvement: Implementing Change to Minimize Unintended Consequences • Every change has the potential to invalidate your validation • Every change has the potential to result in non-conforming product • Therefore: sufficient initial validation to fully understand the particular equipment item, its strengths and weaknesses, and those areas where particular care is needed before, during, and after making a change

  47. New Paradigm Do, and only do, what is necessary… …to assure that the process is under control and will produce quality product each time. …Validation is not an event, but a continuous process

  48. And in Conclusion • The Process Validation Guide makes it clear that industry can no longer sit back with "3 batches and I'm done" • The guidance has far-reaching implications for industry, particularly upstream (product development) • Drug product manufacturers would do well to familiarize themselves with 21 CFR 820 (the QSR for medical devices), sections on design controls
