Ten Deadly Statistical Traps in Pharmaceutical Quality Control
Presentation Transcript

  1. Ten Deadly Statistical Traps in Pharmaceutical Quality Control Lynn Torbeck Pharmaceutical Technology 29 March 2007

  2. Your Morning Mantra “In theory there is no difference between theory and practice, but in practice there is.” Yogi Berra

  3. The Ten Deadly Sins • Graphs • Normal Distribution • Statistical Significance • Xbar 3S • %RSD

  4. The Ten Deadly Sins • Control Charts • Setting Specifications • Cause and Effect • Variability • Sampling Plans

  5. Graph? What &%$# Graph? • Q#1 “Have you graphed the data?” • I have solved many statistical problems by simply graphing the data. • Always, always, always plot your data. • No ink on the page that isn’t needed. • Cause and effect on the same page. • Make the answer appear obvious. • Read Edward Tufte’s books

  6. Anscombe’s Astounding Graphs

  7. Anscombe’s Astounding Graphs • N=11 • Average of X’s = 9.0 • Average of the Y’s = 7.5 • Regression Line Y=3+0.5X • R2 = 0.67 • Std Error of the Slope = 0.118 • Residual Sums of Squares = 13.75
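Anscombe's point is easy to verify in a few lines. The sketch below recomputes the summary statistics for two of the four datasets (values as published in Anscombe's 1973 paper); both give essentially the same mean, slope, intercept, and R² even though their scatter plots look nothing alike.

```python
from statistics import mean

# Anscombe's quartet, datasets I and II (Anscombe, 1973)
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def fit(x, y):
    """Least-squares slope, intercept, and R^2."""
    xb, yb = mean(x), mean(y)
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    slope = sxy / sxx
    intercept = yb - slope * xb
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - yb) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

for y in (y1, y2):
    slope, intercept, r2 = fit(x, y)
    print(round(mean(y), 2), round(slope, 2), round(intercept, 2), round(r2, 2))
```

The summaries are indistinguishable; only a graph reveals that one dataset is linear with noise and the other is a smooth curve.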

  8. Prolonged Acting Pro-Stuff • An ulcer drug from the late 1960’s. • In 1980 a change in a raw material resulted in more rejects. • In-process control using a UV assay • Composite of 5 tablets assayed

  9. Prolonged Acting Pro-Stuff • Sample from the top of each can • Specs were 95% to 105% • If value in spec, accept the can • If value out of spec, reject the can • Accepting and rejecting specific cans • About 50% of the cans were rejected

  10. Prolonged Acting Pro-Stuff • No good cans or bad cans. • Some “good” cans when retested are now out of specifications. • The cans accepted are just as bad or good as the cans rejected. • 45% of the values are OOS • The product was taken off the market. • A personal story

  11. Shipping Decision

  12. A Little Normal History • The concept of the Normal is basic. • Also called Gaussian or the Bell Curve. • First published on November 12, 1733. • First set of tables in 1799! • Used by the astronomer Laplace for errors. • First called the Normal in 1893 by the statistician Karl Pearson.

  13. They Were Blown Away • “I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the ‘Law of Frequency of Error.’” Francis Galton in Natural Inheritance, 1889

  14. Hunting the Elusive Normal • I have never met a real Normal distribution, though I’ve gotten close a couple of times. • There are no real Normal distributions. • It’s a theoretical fiction that is useful some of the time. • We must separate reality from theory.

  15. “Normal Distribution” [Figure: bell curve centered at the mean, with ±3σ and ±6σ marked on the axis]

  16. Normal Facts • In theory, the tails of the distribution stretch from minus infinity to plus infinity, but there are real physical limits. • It is unique in that it is fully described by just its mean (mu, μ) and its standard deviation (sigma, σ), which are almost never actually known for certain. • Probabilities are represented by areas.

  17. What’s Normally Normal? • Tablet and capsule weights • Most manufactured parts • Student test scores, the ‘bell curve’ again • Things that grow in nature: • Apples • Bird eggs • Flowers • People’s heights

  18. Ain’t Never Gonna be Normal • Particle sizes • LAL, EU/mL • Bioburden, cfu/mL • Failures of most anything • Telephone calls per unit of time • Church contributions • Floods

  19. Watch Out! • The tails are the most volatile and unstable • But, that is often the area of most interest! • Difficult to tell if data are normally distributed by looking at a small sample. • Crude rule is that we need at least 100 representative data values to determine if it is even approximately normal.
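The "at least 100 values" rule of thumb is easy to illustrate with a simulation. The sketch below (hypothetical simulated data) computes sample skewness for a small exponential sample, a large exponential sample, and a large Normal sample: the small sample gives little reliable indication either way, while the large samples make the difference obvious.

```python
import math
import random

def skewness(xs):
    """Sample skewness: mean cubed standardized deviation."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

random.seed(1)
small_exp = [random.expovariate(1.0) for _ in range(10)]    # too small to judge
big_exp   = [random.expovariate(1.0) for _ in range(5000)]  # clearly skewed (theory: 2.0)
big_norm  = [random.gauss(0, 1) for _ in range(5000)]       # skewness near 0

print(round(skewness(small_exp), 2))
print(round(skewness(big_exp), 2))
print(round(skewness(big_norm), 2))
```

With only ten points, a skewed distribution can easily masquerade as roughly Normal; with thousands, the skew is unmistakable.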

  20. Statistical Significance: Who Cares? • Statistical analysis is an additional tool to assist the scientist in making scientific interpretations and conclusions; it is not an end in itself.

  21. Differences • A scientific analysis often takes the form of looking for significant differences. • Is drug A different from drug B? • Is the increase in yield significantly better with the new centrifuge? • A difference can be significant in two ways, practical and statistical.

  22. Practical Significance • Practical significance comes from comparing a difference to an absolute reference or absolute truth. • How big a difference can you accept for: • Number of seconds of tooth pain? • Number of phone rings before hanging up? • How long will you wait for a bus? • How big your next raise is?

  23. Statistical Significance • Statistical significance testing is one of the great tools of statistics and science. • Statistical significance comes from comparing a difference, a signal, to a relative reference of random variability or the best estimate of noise in the data.

  24. Practical vs. Statistical • Practical significance always wins and takes precedence over statistical significance! • In most applications, statistical significance should not be tested until practical significance is found.

  25. Are the Analysts Different? • Sam: 98.2, 99.3, 99.7; Xbar = 99.1 • Barb: 100.2, 100.5, 100.8; Xbar = 100.5 • Spec = 90.0 to 110.0 • Two-sided t, P = 0.04
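The P = 0.04 on the slide can be reproduced with a pooled two-sample t statistic; a minimal sketch:

```python
import math

sam  = [98.2, 99.3, 99.7]
barb = [100.2, 100.5, 100.8]

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)         # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))   # standard error of the difference
    return (mb - ma) / se

t = pooled_t(sam, barb)
print(round(t, 2))  # ≈ 2.98
```

With 4 degrees of freedom, t ≈ 2.98 gives a two-sided P of about 0.04: statistically significant. Yet both analysts sit comfortably inside the 90.0–110.0 spec, so the difference has no practical significance.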

  26. Signal to Noise • All statistical significance testing is only a comparison of the signal to the noise. • If the signal can be shown to be larger than we would expect from chance variation alone, we say it is significant. • A bigger signal is more significant. • Smaller noise is more significant.

  27. Significance?

  28. Why Do It To It? • The primary purpose of statistical tests of significance is to prevent us from accepting an apparent result as real when it could be due to random chance alone. • Statistical significance without practical significance could, in some circumstances, lead to finding new relationships. • What if the spec was changed to 98.0 to 102.0? • We may want to find out why the analysts differ.

  29. The Biggest Lie in Statistics? • Your statistics professor misled you or lied. • Is Xbar ± 3S ever correct? • For every complex problem there is a solution that is quick, simple, understandable, and absolutely wrong! • More grief has been caused by this formula than by any other in statistics.

  30. The Biggest Lie in Statistics? • What is true is that μ ± 3σ will bracket 99.73% of the area under the Normal curve. • Note that this assumes we know the true values of the mean (mu, μ) and standard deviation (sigma, σ), which we never do, of course. We have to estimate them with the small samples we take. • Thus, there is uncertainty in the estimates.
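The 99.73% figure follows directly from the Normal cumulative distribution; a quick check using the error function:

```python
from math import erf, sqrt

def within(k):
    """Fraction of a Normal population within mu ± k*sigma."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(k, round(within(k), 4))  # 0.6827, 0.9545, 0.9973
```

These are the familiar 68/95/99.73 coverage figures, but they hold only for the true μ and σ, not for Xbar and S computed from a handful of observations.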

  31. Side Line • Did you hear about the statistician’s wife who said her husband was just average? • She was being mean.

  32. So, What Do I Do Now? • Don’t use Xbar ± 3S as a generalized monkey wrench applied to all of your statistical questions. Use the right tool for the job. • Use Confidence Intervals to bracket the unknown mean. • Use Tolerance Intervals to bracket a given percentage of the individual data values.
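The difference between the two intervals is easy to see numerically. In the sketch below (hypothetical assay data; the t and tolerance factors are standard table values for n = 10), the tolerance interval comes out several times wider than the confidence interval for the mean, because it must bracket individual values rather than just the average.

```python
import math

# Hypothetical assay results, % of label claim
data = [99.1, 100.4, 98.7, 101.2, 99.9, 100.8, 99.5, 100.1, 98.9, 100.6]
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

t_crit = 2.262  # t(0.975, 9 df), standard table value
k_tol  = 3.379  # two-sided 95%/95% tolerance factor for n = 10, standard table value

ci_half  = t_crit * s / math.sqrt(n)  # brackets the unknown MEAN
tol_half = k_tol * s                  # brackets 95% of INDIVIDUAL values

print("95% CI for the mean:      ", round(xbar - ci_half, 2), "to", round(xbar + ci_half, 2))
print("95%/95% tolerance interval:", round(xbar - tol_half, 2), "to", round(xbar + tol_half, 2))
```

Note also the opposite behaviors: as n grows, the confidence interval shrinks toward zero width, while the tolerance interval settles toward μ ± 1.96σ.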

  33. %RSD: Friend or Foe? • S = SQRT[Σ(Xᵢ − Xbar)² / (n − 1)] • %RSD = (100 × S) / Xbar • They are two different summary statistics • They measure two different concepts • They are not substitutes for each other • We need to report both.
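A minimal sketch of both statistics side by side, using hypothetical assay values:

```python
import math

def summary(data):
    """Return mean, sample standard deviation, and %RSD."""
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
    rsd = 100 * s / xbar
    return xbar, s, rsd

xbar, s, rsd = summary([98.0, 100.0, 102.0])
print(xbar, round(s, 2), round(rsd, 2))  # 100.0 2.0 2.0
```

Here S and %RSD happen to coincide numerically only because the mean is 100; shift the same spread to a mean of 50 and the %RSD doubles while S is unchanged, which is exactly why both must be reported.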

  34. Control Charts • Having just told you not to use Xbar ± 3S, I now have to tell you that that is how control charts define the control limits. • This is an artifact of history. • Control charts were developed by Dr. Walter Shewhart in 1924 while working at Western Electric in Cicero, Illinois.

  35. Control Chart • Add Xbar ± 3S limits to a line plot. • A chart for the response. • A chart for the moving range to estimate variability.

  36. Do You Trust YourControl Chart? • Control charts are crude tools and not exact probability statements. • They don’t take into account the number of samples in the data set for the limits. • They are intended as early warning devices and not accept/reject decision tools. • Don’t use for large $$ decisions.
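For reference, the conventional individuals/moving-range calculation sets the limits at Xbar ± 2.66·MRbar, where 2.66 = 3/d2 and d2 = 1.128 for a moving range of two points. A sketch with hypothetical data:

```python
# Hypothetical in-process results, % of label claim
data = [99.2, 100.1, 99.7, 100.4, 99.9, 100.6, 99.5, 100.2, 99.8, 100.3]

xbar = sum(data) / len(data)
mrs = [abs(b - a) for a, b in zip(data, data[1:])]  # moving ranges of span 2
mrbar = sum(mrs) / len(mrs)

# 2.66 = 3 / d2, with d2 = 1.128 for a moving range of two points
ucl = xbar + 2.66 * mrbar
lcl = xbar - 2.66 * mrbar
print(round(lcl, 2), round(xbar, 2), round(ucl, 2))
```

As the slide warns, these limits ignore how few points went into estimating MRbar; treat points outside them as early warnings to investigate, not as accept/reject criteria.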

  37. Oh Wow, I Don’t Believe It ! You did what to set the specification criteria for your million dollar product?

  38. Setting Specifications • A specification is a document that contains methods and accept/reject criteria • Criteria can be determined several ways • Wishful thinking • Clinical results • Compendial standards • Historical data and statistics

  39. Million $$ Decisions? • Regulatory Limits - External • Release: accept/reject - Internal • Action limits • Alert • Warning limits • Trend limits • Validation limits

  40. Idealized Specification Limits

  41. Calculating Criteria • Don’t use Confidence Intervals; they shrink toward zero width with large sample sizes. • Don’t use Xbar ± 3S; the limits are too narrow for small sample sizes. • Use Tolerance Intervals, preferably 99%/99%. These take into consideration the sample size and the uncertainty in the average and the standard deviation.

  42. Setting Specification Criteria • For action limits, expect the average to vary and widen the Tolerance Limits • For accept/reject limits, add a further allowance for stability. • Consider the clinical results when possible as part of the justification for limits.