This paper presents a comprehensive review of five projection-based face recognition techniques (PCA, LDA, 2DPCA, 2DLDA, and SVD), focusing on their mathematical underpinnings and practical implications for accuracy, storage, and computation. Using experimental results on the ORL, UMIST, and NTHU datasets, the study demonstrates that SVD, 2DPCA, and 2DLDA outperform PCA and LDA in terms of retrieval rates. The findings underscore the importance of method selection in optimizing face recognition systems, which is crucial for access control applications across various sectors.
FACE IMAGE RETRIEVAL BY PROJECTION-BASED FEATURES Chaur-Chin Chen, Yu-Shu Shieh, Hsueh-Ting Chu Department of Computer Science, Institute of Information Systems & Applications National Tsing Hua University, Hsinchu, Taiwan 30013 E-mail: cchen@cs.nthu.edu.tw
ABSTRACT Face and fingerprint images have been used for access control at the entry and exit points of countries such as Japan and the United States. The issues of storage space and computation are as important as the accuracy of verification and/or identification. This paper reviews five projection-based face recognition methods: PCA, LDA, 2DPCA, 2DLDA, and SVD. We give a mathematical review of these methods and discuss their storage requirements and computational practicality. Experimental comparisons on three databases (ORL, UMIST, and NTHU) show that SVD, 2DPCA, and 2DLDA are superior to PCA and LDA.
PROJECTION-BASED METHODS • The five methods based on the following projections are reviewed and compared. • Principal Component Analysis (PCA) • Linear Discriminant Analysis (LDA) • 2D PCA • 2D LDA • Singular Value Decomposition (SVD)
(SVD) Training Face Images • Let Fij be the ith face image of size m by n from the jth subject, 1≦i≦Nj and 1≦j≦K, where N1+N2+…+NK=N is the total number of training face images. • Define the mean image S as the average of all N training images, S = (1/N) Σj Σi Fij.
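As a minimal sketch of this step (assuming NumPy and that the training images are already loaded as equal-sized m-by-n arrays; the function name mean_face is hypothetical), the mean image S is the element-wise average over all N training images:

```python
import numpy as np

def mean_face(training_images):
    """Element-wise average of the N training face images F_ij.

    `training_images` is an iterable of equal-sized m-by-n arrays;
    the result is the mean image S = (1/N) * sum_j sum_i F_ij.
    """
    stack = np.stack(list(training_images), axis=0)   # shape (N, m, n)
    return stack.mean(axis=0)                         # shape (m, n)
```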
Singular Value Decomposition S=UDVt • Compute S=UDVt, where U is an m by m orthogonal matrix and V is an n by n orthogonal matrix • Select r, c with r≦m, c≦n such that d11+d22+…+dhh ≧ 85% of trace(D), where h=min{r,c} • Let Ur =[u1,u2,...,ur], Vc =[v1,v2,…,vc]
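A possible NumPy implementation of this truncation step is sketched below. The function name truncated_projections and the simple choice r = c = h are assumptions made for illustration; the slide only requires h = min{r, c} and the 85% threshold on trace(D).

```python
import numpy as np

def truncated_projections(S, energy=0.85):
    """SVD of the mean image S = U D V^t, keeping enough singular values
    to cover `energy` (85% on the slide) of trace(D).

    Returns Ur (m-by-r) and Vc (n-by-c); here we take r = c = h.
    """
    U, d, Vt = np.linalg.svd(S, full_matrices=True)  # d = (d11, d22, ...), descending
    covered = np.cumsum(d) / d.sum()                 # fraction of trace(D) covered
    h = int(np.searchsorted(covered, energy)) + 1    # smallest h with coverage >= energy
    Ur = U[:, :h]                                    # first r = h left singular vectors
    Vc = Vt.T[:, :h]                                 # first c = h right singular vectors
    return Ur, Vc
```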
Convert a face image into features • For each training image Ak, represent Ak by xk =(Ur)t Ak Vc, an r by c feature image • For each test image T, represent T by y=(Ur)tTVc, an r by c feature image
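Under the same assumptions, the projection itself is a single matrix product on each side; to_feature_image below is a hypothetical helper name, applied identically to training images Ak and test images T:

```python
def to_feature_image(A, Ur, Vc):
    """Project an m-by-n face image A onto the r-by-c feature image (Ur)^t A Vc."""
    return Ur.T @ A @ Vc

# e.g. xk = to_feature_image(Ak, Ur, Vc) for training, y = to_feature_image(T, Ur, Vc) for testing
```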
Distance between training feature images and a test feature image • Compute d(y,xk) by the Frobenius norm • The smaller the Frobenius norm, the closer the match • Rank the norms in ascending order • Determine the recognition rates from ranks 1, 2, 3, ...,8 and plot the curve
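A sketch of the matching step, again assuming NumPy feature images produced as above; frobenius_distance and ranked_matches are hypothetical names:

```python
import numpy as np

def frobenius_distance(y, xk):
    """d(y, xk) = ||y - xk||_F between two r-by-c feature images."""
    return np.linalg.norm(y - xk, ord='fro')

def ranked_matches(y, training_features):
    """Indices of the training feature images, sorted by ascending distance to y."""
    distances = [frobenius_distance(y, xk) for xk in training_features]
    return np.argsort(distances)
```

The rank-k recognition rate is then typically obtained by counting a test image as recognized if any of its k nearest training feature images comes from the same subject, for k = 1 through 8.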