
Face Recognition Using New Image Representations



Presentation Transcript


  1. Face Recognition Using New Image Representations Zhiming Liu and Qingchuan Tao 2009 IEEE

  2. Outline • Introduction • Motivation • New Image Representation Via PCA Transformation • Experiments • Conclusion

  3. Introduction • While the commonly used gray-scale image is derived from a linear combination of the R, G, and B color component images, the new image representations are derived from the Principal Component Analysis (PCA) transform applied to hybrid configurations of different color component images.

  4. Introduction • We propose to encode the facial information from the new image representations by using an effective Local Binary Pattern (LBP) feature extraction method, which extracts and fuses the multi-resolution LBP features.

  5. Motivation • For color face image recognition, the RGB color space is commonly used in some methods. • Other color spaces, such as YIQ, HSV, and YCbCr, which are transformed from the RGB space, are also adopted to perform face recognition.

  6. Motivation • First, we calculate the correlation coefficients between the individual components in the RGB, YIQ, and YCbCr color spaces.
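As a concrete illustration of this step, the sketch below computes the 3 x 3 matrix of Pearson correlation coefficients between the three components of one color space. The array layout and the function name component_correlations are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

def component_correlations(images):
    """Pearson correlation coefficients between three color components.

    `images` is assumed to have shape (num_images, height, width, 3), holding
    the three components of one color space (e.g., R/G/B or Y/Cb/Cr).
    """
    # Flatten each component over all images and pixels into one long vector.
    comps = images.reshape(-1, 3).T.astype(np.float64)  # shape (3, num_samples)
    return np.corrcoef(comps)                           # 3x3 correlation matrix

# Example with random data standing in for a training set:
rgb_images = np.random.rand(10, 64, 64, 3)
print(component_correlations(rgb_images))
```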

  7. Motivation • Based on the within-class scatter matrix Sw and the between-class scatter matrix Sb of the training database, we can evaluate the class separability by using the Fisher criterion: J4 = tr(Sb) / tr(Sw).

  8. Motivation • Sw: within-class scatter matrix • Sb: between-class scatter matrix
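A minimal sketch of how the criterion J4 = tr(Sb) / tr(Sw) can be computed from a set of feature vectors with class labels; the function name fisher_criterion is illustrative, not from the paper.

```python
import numpy as np

def fisher_criterion(X, labels):
    """Class separability J = tr(Sb) / tr(Sw) for features X of shape (n_samples, n_dims)."""
    X = np.asarray(X, dtype=np.float64)
    labels = np.asarray(labels)
    overall_mean = X.mean(axis=0)
    dims = X.shape[1]
    Sw = np.zeros((dims, dims))
    Sb = np.zeros((dims, dims))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        # Within-class scatter: spread of each class around its own mean.
        Sw += (Xc - mc).T @ (Xc - mc)
        # Between-class scatter: class means around the overall mean, weighted by class size.
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return np.trace(Sb) / np.trace(Sw)
```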

  9. Motivation • Table II gives the calculation results, which indicate that the color components G and B have the weakest power of image classification, at least for the FRGC training database.

  10. New Image Representation Via PCA • We assume that the three color component images C1, C2, and C3 are column vectors in R^N, where N = m x n. • We can form a data matrix X = [X1, X2, ..., Xl] using all the training images, where l is the number of training images.

  11. New Image Representation Via PCA • The covariance matrix of X may be formulated as follows: Sx = E{[X − E(X)][X − E(X)]^t}, where E(·) is the expectation operator and t denotes the transpose operation.

  12. New Image Representation Via PCA • The PCA of a random vector X factorizes the covariance matrix into the following form: Sx = Φ Λ Φ^t, where Φ is an orthonormal eigenvector matrix and Λ is a diagonal eigenvalue matrix with diagonal elements in decreasing order.

  13. New Image Representation Via PCA • Then a new image representation can be derived by projecting the three color component images of an image onto Φ: U = [C1, C2, C3] Φ.
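The numbered equations are omitted in this transcript, so the following is only a minimal sketch of the idea. It assumes the PCA is taken over 3-dimensional color vectors (one per pixel of the training images), so that Φ is a 3 x 3 matrix, and it folds in the zero-mean, unit-variance component normalization mentioned later in the Experiments section. The function names derive_color_transform and project_image are illustrative, not from the paper.

```python
import numpy as np

def derive_color_transform(train_components):
    """Learn a 3x3 PCA transform from three stacked color components.

    `train_components` is assumed to have shape (num_samples, 3), where each
    row holds the values of the three chosen color components (e.g., R, Cr, Q)
    at one pixel of a training image.
    """
    X = np.asarray(train_components, dtype=np.float64)
    # Normalize each component to zero mean and unit variance.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(X, rowvar=False)        # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]    # decreasing eigenvalue order
    return eigvecs[:, order]             # Phi: columns are eigenvectors

def project_image(c1, c2, c3, phi):
    """Project the three component images (each m x n) onto Phi: U = [C1 C2 C3] Phi."""
    stacked = np.stack([c1.ravel(), c2.ravel(), c3.ravel()], axis=1)  # (m*n, 3)
    U = stacked @ phi                                                  # (m*n, 3)
    return [U[:, k].reshape(c1.shape) for k in range(3)]               # new image representations
```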

  14. Experiments • In particular, the training set contains 12,776 images that are either controlled or uncontrolled. • The target set has 16,028 controlled images and the query set has 8,014 uncontrolled images.

  15. Experiments • A. Effectiveness of New Image Representations for Face Recognition • Some new image representations, such as URCrQ, URCbQ, and so on, can be generated by using the transformation derived from PCA. • Note that before the transformation, the color component images in (4) are normalized to have zero mean and unit variance, respectively.

  16. Experiments • Table III shows the face verification rates (FVR) at 0.1% false accept rate (FAR), where only image representations with FVR beyond 60% are listed, and R, Y, and URGB are also included for comparison.
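For reference, the sketch below shows one standard way to estimate the verification rate at a fixed false accept rate from genuine and impostor similarity scores. It assumes higher scores mean better matches; it is not the paper's evaluation code, and the function name fvr_at_far is illustrative.

```python
import numpy as np

def fvr_at_far(genuine_scores, impostor_scores, far=0.001):
    """Face verification rate at a fixed false accept rate (e.g., 0.1%)."""
    impostor = np.asarray(impostor_scores, dtype=np.float64)
    # Choose the threshold so that only the given fraction of impostor scores is accepted.
    threshold = np.quantile(impostor, 1.0 - far)
    genuine = np.asarray(genuine_scores, dtype=np.float64)
    # Fraction of genuine comparisons accepted at that threshold.
    return float(np.mean(genuine >= threshold))
```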

  17. Experiments • Fig. 1 shows some color component images and the resulting new image representations by using the transform coefficients.

  18. Experiments • Table IV shows that there are strong decorrelations between UYCbQ and both UYCrQ and URCrQ.

  19. Experiments • The fused classification results are detailed in Table V, which indicates that the best performance, 77.10%, can be reached by fusing UYCrQ and UYCbQ, as expected.
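The slides do not spell out the fusion rule, so the sketch below uses a common choice, z-score normalization followed by the sum rule, purely as an illustration of score-level fusion; it should not be read as the paper's exact method.

```python
import numpy as np

def fuse_scores(score_lists):
    """Score-level fusion of several matchers' similarity scores over the same trials.

    Each element of `score_lists` is an array of scores; scores are z-score
    normalized per matcher and then summed (the simple sum rule).
    """
    fused = np.zeros(len(score_lists[0]), dtype=np.float64)
    for scores in score_lists:
        s = np.asarray(scores, dtype=np.float64)
        fused += (s - s.mean()) / s.std()   # normalize, then accumulate
    return fused

# e.g., fusing the UYCrQ-based and UYCbQ-based matcher scores:
# fused = fuse_scores([scores_uycrq, scores_uycbq])
```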

  20. Experiments • B. LBP-based Face Recognition Using New Image Representation • In this section, we present an effective method to use LBP features for face recognition. • The LBP operator is defined as follows: LBP = Σ_{p=0}^{7} s(g_p − g_c) 2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise, g_c is the gray value of the center pixel, and g_p are the gray values of its eight neighbors.
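A minimal sketch of the basic 3 x 3 LBP operator described here: each pixel's eight neighbors are thresholded against the center value and packed into an 8-bit code. The neighbor enumeration order is an arbitrary choice for illustration; border pixels are simply left at zero.

```python
import numpy as np

def lbp_basic(image):
    """Basic 3x3 LBP codes for a 2D gray-scale image (borders left as zero)."""
    img = np.asarray(image, dtype=np.float64)
    codes = np.zeros(img.shape, dtype=np.uint8)
    # Offsets of the 8 neighbors, enumerated in a fixed order around the center.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        # Threshold against the center and set the corresponding bit of the code.
        codes[1:-1, 1:-1] |= (neighbor >= center).astype(np.uint8) << bit
    return codes
```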

  21. Experiments • After extensions, LBP can be expressed as LBP(P, R), where P and R mean P sampling points on a circle of radius R. • An LBP multi-resolution feature fusion is proposed, as shown in Fig. 2.
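A sketch of multi-resolution LBP feature fusion by concatenating histograms computed at several (P, R) settings, using scikit-image's local_binary_pattern. The (P, R) values and the use of the 'uniform' variant are assumptions; the slides do not give the paper's exact settings or block partitioning, and the input would be one of the new image representations (e.g., UYCrQ) treated as a 2D image.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multiresolution_lbp_histogram(gray_image, settings=((8, 1), (8, 2), (16, 2))):
    """Concatenate normalized LBP histograms computed at several (P, R) resolutions."""
    features = []
    for P, R in settings:
        # 'uniform' LBP yields codes in [0, P + 1], hence P + 2 histogram bins.
        codes = local_binary_pattern(gray_image, P, R, method='uniform')
        n_bins = P + 2
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        features.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(features)
```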

  22. Experiments • The third set of experiments evaluates face recognition performance by using the proposed multi-resolution LBP feature fusion on new image representations.

  23. Experiments • The proposed LBP method is applied to the UYCrQ, UYCbQ, R, and Y images, and the corresponding experimental results are shown in Table VI.

  24. Experiments • The final results are given in Table VII, which indicates that the best FVR of 83.41% at 0.1% FAR is achieved by fusing the classification outputs of UYCrQ and Y images.

  25. Experiments • Fig. 3 shows the corresponding ROC curves for the best FVR obtained by our method.

  26. Conclusion • The experiments show that satisfactory results have been achieved by using these new image representations and LBP features. • Future work will focus on seeking more reliable criteria for choosing the color component images, as well as new learning methods to derive the color transformation.

  27. Thank you for listening
