
Multivariate Statistical Process Control for Fault Detection using Principal Component Analysis


Presentation Transcript


  1. Multivariate Statistical Process Control for Fault Detection using Principal Component Analysis. APACT Conference ’04 Bath

  2. Personnel

  3. Outline • Process Monitoring and Fault Detection and Isolation • Implement a Statistical Quality Control programme • Maximise yield through statistical data analysis • Application of real world methodologies (RWM) • Development of a NOC (normal operating conditions) model • Inference and Conclusions

  4. Real World Methodologies • Statistical Process / Quality Control (SP/QC) • Statistical process monitoring (uni- and multivariate) • Fault Detection & Isolation (FDI) • Principal Component Analysis (PCA) • Latent structures modelling (PLS) • Exponentially Weighted Moving Average (EWMA) and multivariate EWMA (MEWMA) • Batchwise or Run-to-Run (R2R) strategies

  5. Statistical Control • The objective of SPC is to minimise variation and to run the process in a ‘state of statistical control’ • Distinction between common cause (stochastic) variation and assignable causes • Where the process is operating efficiently • Where the product is yielding sufficiently • MSPC gives a more realistic representation, but is more complex • Performance enhancement • Monitoring • Improvement

  6. FDI • Distinguish between product faults and test faults • Consistently high quality product/process is a challenge • FDI scheme: a specific application of SPC in which a distinction must be made between normal process operation and faulty operation (i.e. the first bullet point) • Key points • Process knowledge • Fault classification

  7. Plant Overview • IBM Microelectronics Division • Testing vendor-supplied μchips • Many combinations (product & process) • (wafer/lot/batch/tester/handler) • Large data sets (inherent redundancy) • This leads to the following pertinent question: • Chip fault, or an evolving test unit malfunction?

  8. Batch Process • Finite duration • Non-linear behaviour & system dependent • ‘Open loop’ with respect to product quality • no feedback is applied to the process to reduce error through the batch run • 3-way data structure (batch x variable x time) • Parametric and non-standard data formats • Differing test times • Yield is calculated as the percentage of goods over starts • Yield is a logical AND of the test metrics (see the sketch below)
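As a small illustration of the yield definition in the last two bullets: a device counts as good only if every one of its test metrics passes (logical AND), and batch yield is the fraction of goods over starts. This is a minimal sketch; the function name, limit vectors and array layout are illustrative, not taken from the presentation or its tester-specific limit files.

```python
import numpy as np

def batch_yield(metrics, lower, upper):
    """metrics: (devices x tests) array; lower/upper: per-test limit vectors (hypothetical)."""
    # a device is good only if ALL of its test metrics are within limits (logical AND)
    good = np.all((metrics >= lower) & (metrics <= upper), axis=1)
    # yield = goods as a percentage of starts
    return 100.0 * good.sum() / len(metrics)
```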

  9. Test Matrix: product quality (good/bad) against process/test outcome (good/bad). A good product passed by the test is a Pass, a good product failed by the test is a False Fail, and a bad product failed by the test is a Genuine Fail.

  10. Data Structure • Unusual data set, complex in nature • Different data structures (HP, Teradyne) • Large data matrix (avg. batch ≈ 7-10K cycles) • ≈ 180 metrics/μchip/cycle (MS/RF) • Correlation/redundancy • Analogue and Digital test vectors

  11. PCA Theory • Rank reduction or data compression method • Singular Value Decomposition (SVD) of the variance-covariance matrix • Variance - eigenvalues (λ) • Loadings - eigenvectors (PCs) • Linear transform equation yields the scores • 1st PC has the largest λ, subsequent PCs progressively smaller • How many components? A subjective process • Disregard λ < 1 • Scree plots [too many = over-parameterised, noise] • 70 – 90 % variance [too few = poor, incomplete model] • (see the sketch below)
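The PCA mechanics on this slide can be sketched briefly. This is a minimal illustration, assuming a NumPy matrix X of test metrics (rows = chips/cycles, columns = metrics); the function name and the 75 % variance target are illustrative choices, not values from the presentation.

```python
import numpy as np

def pca_svd(X, var_target=0.75):
    """Rank-reduce X by SVD of its variance-covariance matrix."""
    # normalise each metric to zero mean, unit variance
    mu, sigma = X.mean(axis=0), X.std(axis=0, ddof=1)
    Xn = (X - mu) / sigma

    # SVD of the covariance matrix: columns of U are the loadings (PCs),
    # the singular values are the eigenvalues (λ), largest first
    cov = np.cov(Xn, rowvar=False)
    U, eigvals, _ = np.linalg.svd(cov)

    # keep enough PCs to explain e.g. 70-90 % of the total variance
    cum_var = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum_var, var_target)) + 1

    loadings = U[:, :k]            # eigenvectors retained in the model
    scores = Xn @ loadings         # linear transform yields the scores
    return loadings, eigvals[:k], scores, mu, sigma
```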

  12. PCA flowchart: DB link → pre-processing → data set X (n x m) → normalisation → covariance matrix → SVD → model (eigenvalue %) → score & loading vectors → T² & Q statistics → MEWMA → fault detection (T² & Q sketch below).
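The T² & Q step of the flowchart can be sketched as follows, assuming the standard Hotelling T² and squared-prediction-error (Q/SPE) formulas applied to the normalised data and the retained loadings/eigenvalues from the sketch above; the presentation does not give these formulas explicitly, so this is an assumption about the detection statistics.

```python
import numpy as np

def t2_q_statistics(Xn, loadings, eigvals):
    """Hotelling T^2 and Q (SPE) statistics for each normalised row of Xn."""
    scores = Xn @ loadings                     # project onto the PC model plane
    # T^2: scores weighted by the inverse of their variances (the eigenvalues)
    t2 = np.sum(scores ** 2 / eigvals, axis=1)
    # Q / SPE: squared residual left outside the PC model
    residuals = Xn - scores @ loadings.T
    q = np.sum(residuals ** 2, axis=1)
    return t2, q
```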

  13. NOC Model • Pre-process the data • normalise to N(0,1) • apply limit files (separate components) • partition the data and work with a subset of known goods • SVD on the subset • eigenvalue contribution to the model (≈ 70 %) • Post-multiply the PCs with normal batch data • batch data normalised with the model statistics (µ, σ) • model results can be used to identify a shift from normal (see the sketch below)
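A minimal sketch of the post-multiplication step, assuming the NOC model (loadings, µ, σ) has been built from the known-good subset as in the PCA sketch above; the function and variable names are illustrative.

```python
import numpy as np

def score_new_batch(X_batch, loadings, mu, sigma):
    """Project a new production batch onto the NOC model."""
    # normalise with the *model* statistics, not the batch's own mean/std
    Xn = (X_batch - mu) / sigma
    # post-multiply by the retained PCs; a drift of this score cluster away
    # from the NOC cluster indicates a shift from normal operation
    return Xn @ loadings
```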

  14. Pass Data Only

  15. Zoom of scores cluster

  16. HP 1836 data NOC Model scores cluster

  17. HP 1836 data NOC & Batch 1836 scores cluster

  18. HP 1836 data NOC & Batch 1836 scores cluster (Close Up)

  19. t2036 statistics • 75 % eigenvalue contribution (14 PCs) • No. of faults = 117 • Batch size = 2135 • NOC model shows fault clusters

  20. This fault cluster represents the same fault (8)

  21. MEWMA • Rationale • PCA is used as a preconditioning / data reduction tool • The scores (at a subjective level) are used as input to a MEWMA scheme • Creates a single multivariate chart • The weighted-average nature is sensitive to subtle faults • Robust to autocorrelated and non-normal data • (see the sketch below)
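A minimal sketch of a MEWMA scheme over the PCA scores, assuming the standard multivariate EWMA recursion with its asymptotic covariance; the smoothing constant lam = 0.2 is an illustrative choice, not a value from the presentation.

```python
import numpy as np

def mewma_t2(scores, lam=0.2):
    """MEWMA T^2 statistic for each row of the PCA score matrix."""
    sigma = np.cov(scores, rowvar=False)                     # score covariance
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * sigma)   # asymptotic Sigma_Z

    z = np.zeros(scores.shape[1])
    t2 = np.empty(len(scores))
    for i, x in enumerate(scores):
        z = lam * x + (1.0 - lam) * z     # exponentially weighted moving average
        t2[i] = z @ sigma_z_inv @ z       # single multivariate statistic to chart
    return t2
```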

  22. Schematic: supervisory scheme combining SPC, PCA and MEWMA; batch loop (loop n times) over DUT, DIB, test program, product, handler and tester; production data and summary statistics held in the DB; yield calculation per batch.

  23. Conclusions • Process at ‘cell level’ • Reduction of large data sets • Generation of the NOC model • Tester-specific NOC model • Product-specific NOC model • Tested with production batch data • MEWMA method under development • Single fault statistic to maximise DUT first-pass yield (FPY)

  24. Acknowledgements • IBM Microelectronics Division, Ireland • Trinity College Dublin, Ireland • APACT 04, Bath.
