
Comparison of Dimensionality Reduction Techniques for Face Recognition



Presentation Transcript


  1. Comparison of Dimensionality Reduction Techniques for Face Recognition Berkay Topçu 6950 Sabanci University

  2. Outline • Motivation • Face Detection • Database (M2VTS) • Different Dimensionality Reduction Techniques • PCA, LDA, aPAC, Normalized PCA, Normalized LDA • Classification Results • Conclusion

  3. Motivation • Face recognition: an active research area concerned with recognizing faces in images or videos • Dimensionality reduction: representing and classifying faces with a smaller amount of data → linear transforms

  4. Overall System Face Detection → Dimensionality Reduction (e.g. PCA) → Classification (Pattern Recognition)

  5. Face Detection • Automatic face detection using the OpenCV library • Based on Haar-like features • Detected faces are resized to 64x48 pixels

  6. Database • M2VTS database, originally collected for audio-visual speech recognition • Designed for lip detection → also suitable for face recognition

  7. Database • 40 pictures of each of 37 subjects • 32 pictures per subject for training & 8 for testing • 64x48 pixels → 3072 pixels per image • It is unnecessary to use the whole image in the recognition system • Is it possible to represent a face with less information?
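The 64x48 → 3072 figure comes from flattening each image's pixel grid into a single vector, which is the input representation all of the following techniques operate on. A minimal numpy illustration (the image here is a hypothetical blank placeholder):

```python
import numpy as np

# A hypothetical 64x48 grayscale face image: flattening its pixel grid
# yields the 3072-dimensional vector used by the reduction techniques.
image = np.zeros((64, 48), dtype=np.uint8)
vector = image.flatten()
print(vector.shape)  # (3072,)
```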

  8. PCA (Principal Component Analysis) • Weaknesses: • Translation variant • Scale variant • Background variant • Lighting variant • Advantages: • Fast and needs less memory

  9. PCA (Principal Component Analysis) • Principal component analysis (PCA) seeks a computational model that best describes a face by extracting the most relevant information contained in it. • Finds a lower-dimensional subspace whose basis vectors correspond to the directions of maximum variance in the original image space. The solution is given by the eigenvectors of the scatter matrix.
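The slide's recipe (center the data, form the scatter matrix, keep the top eigenvectors) can be sketched in a few lines of numpy. This is an illustrative sketch on random data, not the presentation's MATLAB code; variable names are my own:

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the k directions of maximum variance.

    X: (n_samples, n_features) matrix of flattened face images.
    Returns the projected data and the k basis vectors ("eigenfaces").
    """
    Xc = X - X.mean(axis=0)                 # center the data
    S = Xc.T @ Xc / (len(X) - 1)            # scatter (covariance) matrix
    eigvals, eigvecs = np.linalg.eigh(S)    # eigh: ascending eigenvalues
    W = eigvecs[:, ::-1][:, :k]             # keep the top-k eigenvectors
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))                # 20 samples, 6 features
Y, W = pca(X, 2)
print(Y.shape)  # (20, 2)
```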

  10. LDA (Linear Discriminant Analysis) • Finds the vectors in the underlying space that best discriminate among classes. • The goal is to maximize the between-class scatter (covariance) while minimizing the within-class scatter, i.e. to maximize the ratio |Wᵀ S_B W| / |Wᵀ S_W W|. The solution is given by the eigenvectors of S_W⁻¹ S_B.
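A compact numpy sketch of Fisher LDA following the criterion above (again illustrative, not the author's code): build the within-class and between-class scatter matrices, then take the leading eigenvectors of S_W⁻¹ S_B.

```python
import numpy as np

def lda(X, y, k):
    """Fisher LDA: top-k eigenvectors of inv(S_W) @ S_B.

    Maximizes between-class scatter relative to within-class scatter.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))                  # within-class scatter
    S_B = np.zeros((d, d))                  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        S_B += len(Xc) * diff @ diff.T
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]  # sort by decreasing eigenvalue
    return X @ eigvecs[:, order[:k]].real

# Two synthetic classes separated along the first axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)),
               rng.normal(0, 0.1, (10, 2)) + [5.0, 0.0]])
y = np.array([0] * 10 + [1] * 10)
Z = lda(X, y, 1)                            # 1-D discriminant projection
```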

  11. aPAC (Approximate Pairwise Accuracy Criterion) • Drawbacks of LDA: • When maximizing the squared distances between pairs of classes, outliers dominate the eigenvalue decomposition. • As a result, LDA tends to over-weight the influence of classes that are already well separated. • The solution is a generalization of LDA that weights the contribution of each class pair according to the Mahalanobis distance between the classes.

  12. aPAC (Approximate Pairwise Accuracy Criterion) • K-class LDA can be decomposed into two-class LDA problems. • aPAC introduces a weighting of the contributions of individual class pairs to the overall criterion. • The weighting function depends on the Bayes error rate* between classes. • Although it is a generalization of LDA, it adds no computational complexity. * Bayes error rate: the theoretical minimum error any classifier can achieve.
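The pairwise decomposition above can be sketched by replacing LDA's between-class scatter with a weighted sum over class pairs. The weight function below, erf(Δ/(2√2))/(2Δ²), follows the Bayes-error-based weighting commonly given in the aPAC literature; treat its exact form, and all names here, as assumptions of this sketch rather than the presentation's definition:

```python
import numpy as np
from math import erf, sqrt

def apac_weight(delta):
    # Assumed aPAC weight: approximates the Bayes error for a class pair
    # at Mahalanobis distance delta, down-weighting well-separated pairs.
    return erf(delta / (2 * sqrt(2))) / (2 * delta ** 2)

def apac(X, y, k):
    """Weighted-pairwise LDA sketch: each class pair contributes to the
    between-class scatter in proportion to apac_weight of its
    Mahalanobis distance (computed in the within-class metric)."""
    classes = np.unique(y)
    d, n = X.shape[1], len(X)
    S_W = np.zeros((d, d))
    means, priors = {}, {}
    for c in classes:
        Xc = X[y == c]
        means[c], priors[c] = Xc.mean(axis=0), len(Xc) / n
        S_W += (Xc - means[c]).T @ (Xc - means[c]) / n
    S_W_inv = np.linalg.inv(S_W)
    S_B = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = means[ci] - means[cj]
            delta = sqrt(diff @ S_W_inv @ diff)   # Mahalanobis distance
            w = priors[ci] * priors[cj] * apac_weight(delta)
            S_B += w * np.outer(diff, diff)
    eigvals, eigvecs = np.linalg.eig(S_W_inv @ S_B)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:k]].real

# Three synthetic classes with distinct means.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (10, 2)) + m
               for m in ([0, 0], [5, 0], [0, 5])])
y = np.repeat([0, 1, 2], 10)
W = apac(X, y, 1)
```

Note that, as the slide says, the only change relative to plain LDA is the scalar weight per pair, so the asymptotic cost is unchanged.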

  13. nPCA (Normalized PCA) • PCA computes the projection that best preserves pairwise distances in the projected space. • nPCA weights this sum of squared distances by introducing symmetric pairwise dissimilarities as the proposed weights.

  14. nPCA (Normalized PCA) The solution is given by the eigenvectors of the weighted scatter matrix, where the weighting matrix contains the pairwise dissimilarities.
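One standard way to realize a weighted pairwise-distance criterion like the one above is via the graph Laplacian of the weight matrix; with uniform weights this recovers ordinary PCA up to scale. The sketch below assumes that formulation (the specific weighting scheme proposed on the slide did not survive the transcript, so `weights` is left as an input):

```python
import numpy as np

def weighted_pca(X, weights, k):
    """Weighted PCA sketch: find the projection maximizing the weighted
    sum of squared pairwise distances.

    weights: symmetric (n, n) matrix of pairwise dissimilarity weights
    (the concrete weighting scheme is a modelling choice).
    The solution is the top-k eigenvectors of X^T L X, where L is the
    Laplacian of the weight matrix.
    """
    deg = np.diag(weights.sum(axis=1))
    L = deg - weights                       # graph Laplacian of the weights
    eigvals, eigvecs = np.linalg.eigh(X.T @ L @ X)
    return eigvecs[:, ::-1][:, :k]          # top-k eigenvectors

# Uniform weights: the dominant direction is the axis of largest spread.
X = np.array([[3.0, 0.0], [-3.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
W = weighted_pca(X, np.ones((4, 4)), 1)
```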

  15. nLDA (Normalized LDA) • The drawbacks of LDA can be overcome by: • appropriately chosen weights that reduce the dominance of large distances • using pairwise similarities together with pairwise dissimilarities → attraction between elements of the same class and repulsion between elements of different classes.

  16. Classification (Training & Testing) • Classification in MATLAB with PRTools (Pattern Recognition Toolbox) • Nearest Mean Classifier (nmc) & Linear Classifier (ldc) • 40 images from each of 37 subjects → 1480 images • 32x37 = 1184 images for training • 8x37 = 296 images for testing
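The nearest mean classifier used here (PRTools' `nmc`) simply stores one mean vector per class and assigns a test sample to the class with the closest mean. A small numpy sketch of that idea (not the PRTools implementation):

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Compute one mean vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_mean_predict(X, classes, means):
    """Assign each sample to the class whose mean is closest (Euclidean)."""
    dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Two tight synthetic clusters stand in for projected face vectors.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
classes, means = nearest_mean_fit(X, y)
pred = nearest_mean_predict(X, classes, means)
```

In the pipeline above, the classifier would be fit on the 1184 dimension-reduced training vectors and evaluated on the 296 test vectors.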

  17. Training and Testing Training: detected faces from different people → dimension reduction → classifier training → statistical data for the face images. Testing: unknown detected faces → dimension reduction → score calculation for each method → recognition rates.

  18. Test Results • Recognition rates compared at reduced dimensions of 32 and 16 • Recognition rate prior to dimension reduction (using all 3072 pixels): 79.05%

  19. Conclusion • Face recognition in the lower dimension • Improved recognition rates for several dimensionality reduction techniques • Further work: • Analysis of low recognition rates in some cases • Block PCA and LDA
