
Kernel Discriminant Analysis Based on Canonical Difference for Face Recognition in Image Sets

Wen-Sheng Chu (朱文生), Ju-Chin Chen (陳洳瑾), Jenn-Jier James Lien (連震杰). Robotics Lab, CSIE NCKU. http://robotics.csie.ncku.edu.tw. CVGIP 2007.


Presentation Transcript


  1. Kernel Discriminant Analysis Based on Canonical Difference for Face Recognition in Image Sets Wen-Sheng Chu (朱文生) Ju-Chin Chen (陳洳瑾) Jenn-Jier James Lien (連震杰) Robotics Lab, CSIE NCKU http://robotics.csie.ncku.edu.tw CVGIP 2007

  2. Motivation • Challenges of face recognition: facial variations such as pose, facial expression, and illumination. • Face recognition using image sets: applications in surveillance and video retrieval.

  3. Why Multi-view Image Sets? • Multiple facial images contain more information than a single image. • Matching moves from a single input pattern (single-to-many) to multiple input patterns (many-to-many). [Figure: sets from Person A and Person B; a query set is matched as A or B]

  4. Training/Testing Data: Facial Expression • For subject i, Image Sets 1-4 are used for training and Image Set 5 for testing.

  5. More Training/Testing Data: Illumination (Yale B) • For subject j, Image Sets 1-4 are used for training and Image Set 5 for testing.

  6. System Overview • Input: m training image sets {X1, …, Xm} from subjects 1 to N, and a testing image set Xtest. • Training process: Kernel Subspace Generation produces the subspaces Pi (m subspaces in total); the Kernel Discriminant Transformation (KDT) learns T; each reference subspace is Refi = TᵀPi. • Testing process: Kernel Subspace Generation produces Ptest, the reference subspace Reftest = TᵀPtest is formed, and comparison against the Refi yields the identification result.

  7. Training Process • Each training image set Xi = {xi1, …, xini} consists of ni 32×32 face images (ni ≈ 100). • Kernel Subspace Generation produces a subspace Pi = {ei1, …, eid} for each of the m sets; KDT then learns T, and the reference subspaces are Refi = TᵀPi.

  8. Kernel Subspace Generation (KSG) • Each of the ni 32×32 images in Xi is mapped into a high-dimensional (h-dimensional) feature space by a nonlinear mapping function. • The kernel matrix is computed from the mapped images, and the kernel subspace of Xi is extracted with d < ni basis vectors.

  9. KSG: Kernel PCA (KPCA) • For the i-th image set Xi = {xi1, …, xini} (xis is the s-th image of the i-th image set), the dimensionality of the mapped images may be infinite, so from the theory of reproducing kernels each basis vector is expressed as a linear combination of the mapped samples. • Pipeline: image set Xi → kernel matrix Kii → SVD → kernel subspace Pi with d < ni; that is, X1 → K11 → P1, …, Xm → Kmm → Pm. • KPCA: B. Schölkopf, A. Smola, and K.-R. Müller, Advances in Kernel Methods: Support Vector Learning, 1999.
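The KPCA step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, the Gaussian kernel choice, and the bandwidth `sigma` are assumptions for demonstration.

```python
import numpy as np

def kernel_subspace(X, d, sigma=1.0):
    """Minimal KPCA sketch of Kernel Subspace Generation (KSG).

    X is an (ni, h) image set, each row a vectorized face image.
    Returns expansion coefficients A of shape (ni, d) such that the
    j-th basis vector of the kernel subspace is sum_s A[s, j] * phi(x_s).
    """
    sq = np.sum(X * X, axis=1)
    # Gaussian kernel matrix K_ii via the kernel trick (no explicit phi).
    K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma**2))
    # Eigendecomposition of the symmetric kernel matrix (equivalently, SVD).
    lam, V = np.linalg.eigh(K)
    top = np.argsort(lam)[::-1][:d]
    # Scale by 1/sqrt(lambda) so the mapped basis vectors are unit-norm.
    return V[:, top] / np.sqrt(lam[top])
```

With this scaling, the returned coefficients satisfy Aᵀ K A = I, i.e. the basis of the kernel subspace is orthonormal in feature space.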

  10. Training Process: Kernel Discriminant Transformation (KDT) • Given the m kernel subspaces Pi from Kernel Subspace Generation, KDT learns the transformation matrix T; each reference subspace is Refi = TᵀPi.

  11. KDT: Main Idea • Based on the concept of LDA, KDT is derived to find a d × w transformation matrix T (the KDT matrix); the dimensionality of T is assumed to be w. • KPCA first maps the 32×32-dimensional images to d-dimensional kernel subspaces; T then maps them to w-dimensional subspaces, compacting subspaces within subjects and separating those between subjects. • An iterative process is proposed to optimize T. • Remaining question: how to measure the similarity of two subspaces?

  12. KDT: Canonical Difference (CD) – Similarity Measurement • Each kernel subspace is mapped to a canonical subspace (P1 → C1, P2 → C2). • Canonical vectors capture more common views and illumination than eigenvectors.

  13. KDT: CD – Canonical Vectors vs. Eigenvectors (cont.) • The difference between two subspaces serves as a similarity measurement: compare the eigenvector-based difference B1 − B2 with the canonical-vector-based difference C1 − C2.

  14. KDT: CD – Canonical Subspace (cont.) • Consider the SVD of B1ᵀB2, where B1 and B2 are d-dimensional orthonormal basis matrices. • Each resulting eigenvalue satisfies 0 ≤ eigenvalue = cos²θi ≤ 1, where the θi are the canonical angles; this provides the similarity measurement. • The canonical subspaces obtained from the SVD are also orthonormal. • T.-K. Kim, J. Kittler, and R. Cipolla, "Discriminative Learning and Recognition of Image Set Classes Using Canonical Correlations", IEEE Trans. on PAMI, 2007.
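The canonical angles and canonical subspaces described above can be computed directly from the SVD of B1ᵀB2. A small sketch (function names are illustrative):

```python
import numpy as np

def canonical_cosines(B1, B2):
    """Cosines of the canonical angles between the subspaces spanned by
    orthonormal basis matrices B1 and B2. The singular values of
    B1^T B2 are cos(theta_i), each in [0, 1]."""
    s = np.linalg.svd(B1.T @ B2, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

def canonical_subspaces(B1, B2):
    """Canonical subspaces C1 = B1 U and C2 = B2 V from the SVD
    B1^T B2 = U diag(cos theta) V^T; C1 and C2 are also orthonormal."""
    U, s, Vt = np.linalg.svd(B1.T @ B2)
    return B1 @ U, B2 @ Vt.T
```

For identical subspaces all cosines are 1 (angles 0), and for orthogonal subspaces they are 0, matching the 0 ≤ cos²θi ≤ 1 range on the slide.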

  15. KDT: KDT Matrix Optimization • Pipeline: kernel subspace → reference subspace (via the KDT matrix T) → canonical subspace → canonical difference, which feeds an LDA-based criterion optimized by iterative learning. • Orthonormal basis matrices are required to obtain the canonical subspaces Ci. Is Refi normalized? Usually not!

  16. KDT: Kernel Subspace Normalization • QR-decomposition is performed on each Refi to obtain an orthonormal basis: Refi = QiRi, where Qi is a w × d orthonormal matrix and Ri is a d × d invertible upper-triangular matrix.
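The normalization step above is a single QR factorization. A minimal sketch (names and shapes are illustrative, following the slide's w × d / d × d convention):

```python
import numpy as np

def normalize_reference(T, P):
    """QR-normalize a reference subspace Ref = T^T P.

    Returns Q (orthonormal columns) and R (invertible upper-triangular)
    with T^T P = Q R, so the orthonormal basis is Q = T^T P R^{-1}.
    """
    ref = T.T @ P
    Q, R = np.linalg.qr(ref)  # reduced QR factorization
    return Q, R
```

The returned Q can then be used to build the canonical subspaces, since QᵀQ = I as required.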

  17. KDT: Formulation Derivation • Canonical subspaces are built from the orthonormalized bases Qi = TᵀPiRi⁻¹. • Substituting these into the form of LDA yields the between-class and within-class scatter matrices SB and SW.

  18. KDT: Solution T • T = {t1, …, tq, …, tw}, where each column tq contains the information of the mapped images; its dimensionality may be infinite, so T cannot be computed explicitly.

  19. Derivation • Using the theory of reproducing kernels again, each tq is replaced with a kernel expansion of the mapped samples (the kernel trick). • Following similar steps, we can obtain the criterion entirely in terms of kernel matrices.

  20. KDT: Numerical Issues • The solution is obtained by simply computing the leading eigenvectors of U⁻¹V. • To make sure that U is positive-definite, U is regularized as Uμ with μ = 0.001.
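One update of this eigenvector computation might look as follows. The ridge form Uμ = U + μI is an assumption about how the slide's regularization is built (the slide gives only μ = 0.001); the function name and shapes are illustrative.

```python
import numpy as np

def kdt_update(U, V, mu=1e-3, w=2):
    """One KDT optimization step: leading eigenvectors of U^{-1} V,
    with U regularized so the inverse is well-defined.

    U, V are the (n, n) scatter-like matrices of the criterion;
    w is the number of leading eigenvectors kept.
    """
    n = U.shape[0]
    U_mu = U + mu * np.eye(n)                 # assumed ridge regularization
    M = np.linalg.solve(U_mu, V)              # U_mu^{-1} V without explicit inverse
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]       # sort by descending eigenvalue
    return vecs[:, order[:w]].real
```

Using `np.linalg.solve` instead of forming U⁻¹ explicitly is the standard numerically stabler choice.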

  21. Training Process (cont.) • With T learned, each reference subspace is computed as Refi = TᵀPi, where each element is given by the kernel expansion.

  22. Testing Process • Kernel Subspace Generation is applied to the testing image set Xtest to obtain Ptest. • The reference subspace Reftest = TᵀPtest is then compared with each training reference subspace Refi = TᵀPi to produce the identification result.
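The comparison step above can be sketched as a nearest-subspace search. This is an illustrative stand-in: scoring by the sum of squared canonical-angle cosines is one reasonable choice consistent with the slides (a small canonical difference corresponds to large cosines), not necessarily the paper's exact measure.

```python
import numpy as np

def identify(ref_test, refs):
    """Return the index of the training reference subspace most similar
    to ref_test. Each subspace is QR-orthonormalized, then scored by the
    sum of squared singular values of Qa^T Qb (canonical-angle cosines)."""
    def score(A, B):
        Qa, _ = np.linalg.qr(A)
        Qb, _ = np.linalg.qr(B)
        s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        return float(np.sum(s * s))
    return int(np.argmax([score(ref_test, R) for R in refs]))
```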

  23. Training List

  24. Training: Convergence of Jacobian Value • J(α) tends to converge to the same value under different initializations.

  25. Testing: Comparison with Other Methods • The proposed KDT is compared with three related methods over 10 randomly chosen experiments (average identification rates): KMSM (0.837), KCMSM (0.862), DCC (0.889), and KDT (0.911).

  26. Conclusions • Canonical difference is proposed as a similarity measurement between two subspaces. • Based on the canonical difference, we derived the KDT and applied it to the proposed face recognition system. • The system is capable of recognizing faces from image sets despite facial variations.

  27. Thanks for your attention

  28. Related Works • Mutual subspace method (MSM) • Constrained MSM (CMSM): subspaces U and V are projected onto a constrained subspace, giving Uc and Vc with angle θc instead of θ. • Discriminative canonical correlation (DCC) • Kernel MSM (KMSM) and Kernel CMSM (KCMSM)

  29. Mutual Subspace Method (MSM) • The canonical angles θ1, θ2, … between the eigenvector pairs (u1, v1), (u2, v2), … of subspaces B1 and B2 are utilized for similarity. • K. Fukui and O. Yamaguchi, "Face Recognition Using Multi-viewpoint Patterns for Robot Vision", ISRR 2003.

  30. Perform KDT on Subspace? • By KPCA, we obtain the kernel subspace of an image set from its eigen-equation. • Multiplying both sides of the equation by T shows that the kernel subspace of the transformed mapped image set is equivalent to applying T to the original kernel subspace.

  31. KDT Optimization • Using the theory of reproducing kernels again, and following similar steps, we obtain T = {t1, …, tq, …, tw}, where each tq is a kernel expansion of the mapped samples.

  32. Training: Dimensionality w of KDT vs. Identification Rate • The identification rate remains above 90% once w > 2,200.

  33. Training: Similarity Matrix • The 32 × 32 similarity matrix (ID number vs. ID number, similarity in [0, 1]) shows clearer subject separation after the 10th iteration of learning than after the 1st iteration.

  34. KSG: Kernel Matrix • The Gaussian kernel function is used with the kernel trick. • The kernel matrix Kij (ni × nj) captures the correlation between the i-th image set and the j-th image set: entry (s, r) compares the s-th image of set i with the r-th image of set j.
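The kernel matrix above can be computed without any explicit feature mapping. A small sketch; the bandwidth `sigma` and the 2σ² scaling in the exponent are assumptions, since the slide does not specify them.

```python
import numpy as np

def gaussian_kernel_matrix(Xi, Xj, sigma=1.0):
    """Gaussian kernel matrix K_ij between the i-th image set (ni rows)
    and the j-th image set (nj rows): entry (s, r) = k(x_is, x_jr) with
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = (np.sum(Xi * Xi, axis=1)[:, None]
          + np.sum(Xj * Xj, axis=1)[None, :]
          - 2.0 * Xi @ Xj.T)
    # Clamp tiny negative values caused by floating-point cancellation.
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
```

For i = j this reproduces the ni × ni matrix Kii used in the KPCA step: symmetric, with ones on the diagonal.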
