
Facial Expression Recognition using KCCA with Combining Correlation Kernels and Kansei Information




  1. Facial Expression Recognition using KCCA with Combining Correlation Kernels and Kansei Information. Yo Horikawa, Kagawa University, Japan

  2. 1. Purpose of this study
Apply kernel canonical correlation analysis (KCCA) with correlation kernels to facial expression recognition:
・Combining multi-order correlation kernels
・Use of Kansei information

  3. 2. Facial expression recognition
Facial images → Expressions
Six basic expressions: happiness, sadness, surprise, anger, disgust, fear (plus a neutral face)
[Slide figure: sample faces labeled Happy, Sad, Surprised, Angry, Disgusted, Fearful, Neutral]

  4. 3. Kernel canonical correlation analysis (KCCA)
Pairs of feature vectors of sample objects: (xi, yi) (1 ≤ i ≤ n)
Canonical variates (u, v): projections with the maximum correlation between (implicit) nonlinear functions h(xi) and h’(yi):
u = wφ・h(x), v = wθ・h’(y)

  5. Canonical variates are calculated with the kernel functions:
u = ∑i=1..n fi φ(xi, x), v = ∑i=1..n gi θ(yi, y)
Kernel functions are the inner products of the implicit functions of x and y:
φ(xi, xj) = h(xi)・h(xj), θ(yi, yj) = h’(yi)・h’(yj)
f = t(f1, ∙∙∙, fn) and g = t(g1, ∙∙∙, gn) are the eigenvectors of a generalized eigenvalue problem defined by the Gram matrices Φ = (Φij) = (φ(xi, xj)) and Θ = (Θij) = (θ(yi, yj)) (1 ≤ i, j ≤ n), regularized with the n×n identity matrix I.
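The matrix form of the generalized eigenvalue problem did not survive extraction; as an illustration, here is a minimal NumPy/SciPy sketch of one commonly used regularized formulation of kernel CCA. The function name `kcca`, the regularization constant `eta`, and the exact block form are assumptions, not necessarily the formulation used in the talk.

```python
import numpy as np
from scipy.linalg import eigh

def kcca(Phi, Theta, eta=1e-3, n_comp=1):
    """Regularized kernel CCA: given the n x n Gram matrices Phi and Theta
    of the two views, return dual coefficient matrices F and G (n x n_comp).
    This is one common regularized formulation, assumed for illustration."""
    n = Phi.shape[0]
    Z = np.zeros((n, n))
    I = np.eye(n)
    # Symmetric generalized eigenproblem for the stacked vector w = (f; g):
    #   [0, Phi@Theta; Theta@Phi, 0] w
    #     = lam * [Phi@Phi + eta*I, 0; 0, Theta@Theta + eta*I] w
    A = np.block([[Z, Phi @ Theta], [Theta @ Phi, Z]])
    B = np.block([[Phi @ Phi + eta * I, Z], [Z, Theta @ Theta + eta * I]])
    lam, W = eigh(A, B)                   # eigenvalues in ascending order
    top = np.argsort(lam)[::-1][:n_comp]  # keep the largest correlations
    return W[:n, top], W[n:, top]
```

Given F and G, the canonical variates of the samples are recovered as u = Φ f and v = Θ g, matching the expansions above.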

  6. 4. KCCA for classification problems and use of Kansei information
4.1 CCA for classification problems
Use an indicator vector (IV) as the second feature vector: y = (y1, ∙∙∙, ync) corresponding to x, with yc = 1 if x belongs to class c and yc = 0 otherwise (nc: the number of classes).
In the recognition of facial expressions with 7 classes (the six basic expressions happiness, sadness, surprise, anger, disgust, fear, plus neutral), a happy face has
y = (1, 0, 0, 0, 0, 0, 0)
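Building the indicator vector is a one-hot encoding over the 7 classes; a minimal sketch (function and list names are illustrative):

```python
import numpy as np

# The 7 classes used in the experiments: 6 basic expressions plus neutral.
EXPRESSIONS = ["happiness", "sadness", "surprise", "anger",
               "disgust", "fear", "neutral"]

def indicator_vector(label, classes=EXPRESSIONS):
    """Indicator vector y: y_c = 1 for the class of x, 0 otherwise."""
    y = np.zeros(len(classes))
    y[classes.index(label)] = 1.0
    return y
```

For example, `indicator_vector("happiness")` yields (1, 0, 0, 0, 0, 0, 0), as on the slide.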

  7. Canonical variates ur (1 ≤ r ≤ nc−1) for a new object (x, ?) are calculated by
ur = ∑i=1..n fri φ(xi, x) (1 ≤ r ≤ nc−1)
Standard classification methods are then applied in the canonical variate space (u1, …, unc−1).
[Slide diagram: image x and indicator vector y = (1, 0, 0, 0, 0, 0, 0) → KCCA → canonical variates (u1, …, u6)]

  8. 4.2 Kansei information and its use in KCCA
Kansei information: semantic ratings of the expressions of facial images by human subjects.
Ratings on the six basic expressions (happiness, sadness, surprise, anger, disgust, fear), e.g. (5, 2, 3, 1, 0, 1), form a semantic rating vector (SRV).
[Slide diagram: image x and SRV y = (4.39, 1.35, 2.29, 1.16, 1.23, 1.26) → KCCA → canonical variates (u1, …, u5) → classifiers]

  9. 5. Correlation kernel
Correlation kernel: an inner product of the autocorrelation functions of the feature vectors xi(t) and xj(t):
rxi(t1) = ∫xi(t)xi(t+t1)dt, rxj(t1) = ∫xj(t)xj(t+t1)dt
φ(xi, xj) = ∫rxi(t1)rxj(t1)dt1

  10. The kth-order autocorrelation of data xi(t):
rxi(t1, t2, ∙∙∙, tk−1) = ∫xi(t)xi(t+t1)∙∙∙xi(t+tk−1)dt
The inner product between rxi and rxj is calculated with the kth power of the 2nd-order cross-correlation function:
rxi・rxj = ∫{ccxi,xj(t1)}k dt1, ccxi,xj(t1) = ∫xi(t)xj(t+t1)dt
The calculation of explicit values of the autocorrelations is avoided, so higher-order autocorrelations are tractable at practical computational cost.
Linear correlation kernel: φ(xi(t), xj(t)) = rxi・rxj
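The 1-D case can be sketched directly from the discrete cross-correlation. The name `correlation_kernel_1d`, the `max_lag` parameter, and the normalization constants are illustrative assumptions:

```python
import numpy as np

def correlation_kernel_1d(xi, xj, k=2, max_lag=None):
    """k-th order correlation kernel: sum over lags t1 of the k-th power of
    the cross-correlation cc(t1) = sum_t xi(t) xj(t+t1) / n.  Computing it
    this way avoids forming the k-th order autocorrelations explicitly."""
    n = len(xi)
    if max_lag is None:
        max_lag = n
    cc = np.array([np.dot(xi[:n - t], xj[t:]) / n for t in range(max_lag)])
    return np.sum(cc ** k) / max_lag
```

The cost is linear in the number of lags regardless of the order k, which is the point made on the slide.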

  11. Calculation of correlation kernels rxi・rxj for 2-dimensional image data x(l, m) (1 ≤ l ≤ L, 1 ≤ m ≤ M):
・Calculate the cross-correlations between xi(l, m) and xj(l, m):
ccxi,xj(l1, m1) = ∑l=1..L−l1 ∑m=1..M−m1 xi(l, m)xj(l+l1, m+m1)/(LM) (0 ≤ l1 ≤ L1−1, 0 ≤ m1 ≤ M1−1)
・Sum up the kth power of the cross-correlations:
rxi・rxj = ∑l1=0..L1−1 ∑m1=0..M1−1 {ccxi,xj(l1, m1)}k/(L1M1)
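The two steps above can be sketched for 2-D images as follows. This is a direct, unoptimized implementation; the lag ranges `L1`, `M1` follow the slide, and the default values are illustrative:

```python
import numpy as np

def correlation_kernel_2d(xi, xj, k=2, L1=5, M1=5):
    """r_xi . r_xj for 2-D images: average over lags (l1, m1) of the k-th
    power of the cross-correlation between images xi and xj."""
    L, M = xi.shape
    total = 0.0
    for l1 in range(L1):
        for m1 in range(M1):
            # Step 1: cross-correlation at lag (l1, m1), normalized by LM.
            cc = np.sum(xi[:L - l1, :M - m1] * xj[l1:, m1:]) / (L * M)
            # Step 2: accumulate its k-th power.
            total += cc ** k
    return total / (L1 * M1)
```

Evaluating this for every sample pair fills one Gram matrix Φ for KCCA.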

  12. Modified correlation kernels
Higher-order and odd-order correlation kernels perform worse. The plain correlation kernel (C), rxi・rxj = ∑l1,m1 {ccxi,xj(l1, m1)}k (14), is therefore modified with the kth root and absolute values:
Lp norm kernel (P): rxi・rxj = sgn(∑l1,m1{ccxi,xj(l1, m1)}k)|∑l1,m1{ccxi,xj(l1, m1)}k|1/k (15)
Absolute correlation kernel (A): rxi・rxj = ∑l1,m1 |ccxi,xj(l1, m1)|k (16)
Absolute Lp norm kernel (AP): rxi・rxj = |∑l1,m1{ccxi,xj(l1, m1)}k|1/k (17)
Absolute Lp norm absolute kernel (APA): rxi・rxj = {∑l1,m1 |ccxi,xj(l1, m1)|k}1/k (18)
Max norm kernel (Max): rxi・rxj = maxl1,m1 ccxi,xj(l1, m1) (19)
Max norm absolute kernel (MaxA): rxi・rxj = maxl1,m1 |ccxi,xj(l1, m1)| (20)
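Given a precomputed cross-correlation array `cc` over the lags, the variants can be sketched in a few lines. Note one assumption: in the Lp norm kernel (P), the sign is taken of the summed value, which is one reading of Eq. (15):

```python
import numpy as np

def modified_kernels(cc, k):
    """Variants of the correlation kernel (Eqs. 14-20 on the slide),
    computed from the cross-correlation array cc over all lags."""
    s = np.sum(cc ** k)
    return {
        "C":    s,                                     # plain k-th order (14)
        "P":    np.sign(s) * np.abs(s) ** (1.0 / k),   # Lp norm (15)
        "A":    np.sum(np.abs(cc) ** k),               # absolute (16)
        "AP":   np.abs(s) ** (1.0 / k),                # absolute Lp norm (17)
        "APA":  np.sum(np.abs(cc) ** k) ** (1.0 / k),  # abs Lp norm abs (18)
        "Max":  np.max(cc),                            # max norm (19)
        "MaxA": np.max(np.abs(cc)),                    # max norm abs (20)
    }
```

The kth root and absolute values keep the kernel values on a comparable scale across orders, which is why these variants help for high and odd k.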

  13. 6. Combining correlation kernels
Multiple classifiers may give higher performance than a single classifier.
Combine the Cartesian spaces of the canonical variates obtained with a set of kernel functions, e.g.,
U = (u1, ···, unc−1), U’ = (u’1, ···, u’nc−1), U” = (u”1, ···, u”nc−1)
→ U⊗U’⊗U” = (u1, ···, unc−1, u’1, ···, u’nc−1, u”1, ···, u”nc−1)
[Slide diagram: M classifiers each output a label for an object (e.g. ‘7’, ‘9’, ‘7’, ‘7’, …, ‘1’) and a final decision ‘7’ is made from them.]
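Forming the Cartesian space U⊗U’⊗U” amounts to concatenating the per-kernel canonical variates column-wise; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def combine_variates(*variate_sets):
    """Concatenate canonical variate matrices (one per kernel) column-wise:
    U (x) U' (x) U'' = (u1..u_{nc-1}, u'1.., u''1..).  Each input has shape
    (n_objects, nc-1); the result is (n_objects, M*(nc-1)) for M kernels."""
    return np.concatenate(variate_sets, axis=1)
```

A single classifier (e.g. nearest neighbor) run in the combined space then plays the role of the combined-classifier decision in the diagram.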

  14. 7. Facial expression recognition experiment
Object: JAFFE (Japanese Female Facial Expression) database
213 facial images of 10 Japanese females; 3 or 4 examples of each of the six basic facial expressions (happiness, sadness, surprise, anger, disgust, fear) and a neutral face; 8-bit grayscale images of 256×256 pixels.
Figure 1. Sample images in the JAFFE database. From left to right: happiness, sadness, surprise, anger, disgust, fear, neutral.

  15. All 213 images in the JAFFE database.

  16. Preprocessing: the center regions of 200×200 pixels are taken and resized to 20×20 pixels by averaging over 10×10 blocks, then linearly normalized to mean 0 and SD 1.0.
Figure 2. Images of the 20×20 real-valued matrix data of Fig. 1 used as the first feature x in KCCA.
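The preprocessing described above can be sketched as follows. The crop offset of 28 pixels, which centers the 200×200 region in a 256×256 image, is an assumption:

```python
import numpy as np

def preprocess(img):
    """Crop the 200x200 center of a 256x256 image, average-pool 10x10
    blocks down to 20x20, then normalize to mean 0 and SD 1."""
    c = img[28:228, 28:228].astype(float)                 # center 200x200
    small = c.reshape(20, 10, 20, 10).mean(axis=(1, 3))   # 10x10 averaging
    return (small - small.mean()) / small.std()           # mean 0, SD 1
```

The `reshape`/`mean` pair implements block averaging without an explicit loop over the 10×10 windows.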

  17. Semantic rating vectors (SRVs) in the JAFFE database
Averages of semantic ratings on the 6 expressions (happiness, sadness, surprise, anger, disgust, fear) on 5-point scales, obtained from 60 Japanese females.
Sample images and their SRVs (HAP, SAD, SUR, ANG, DIS, FEA):
Happy: (4.39, 1.35, 2.29, 1.16, 1.23, 1.26)
Happy: (4.77, 1.29, 2.45, 1.26, 1.23, 1.23)
Sad: (1.39, 3.97, 1.68, 2.19, 3.68, 3.61)
Surprised: (2.87, 1.55, 4.68, 1.52, 1.52, 1.65)
Angry: (1.55, 1.90, 2.10, 4.32, 3.90, 1.81)
Neutral: (3.03, 2.16, 2.06, 1.94, 1.84, 1.87)

  18. Experiment (Ⅰ): facial expression recognition with 2 images for each expression per person.
Sample set: 2 images × 7 expressions × 10 persons = 140 images; test set: the remaining 73 images.
Experiment (Ⅱ): facial expression recognition with a leave-one-person-out method.
Sample set: images of 9 persons (about 190 images); test set: images of the remaining 1 person (about 20 images). Results are averaged over the 10 tests.
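The leave-one-person-out splits of Experiment (Ⅱ) can be sketched as follows (the function name and the use of per-image person labels are illustrative):

```python
def leave_one_person_out(person_ids):
    """Yield (train_idx, test_idx) index lists, holding out all images of
    one person at a time (10 splits for the 10 JAFFE subjects)."""
    for p in sorted(set(person_ids)):
        test = [i for i, q in enumerate(person_ids) if q == p]
        train = [i for i, q in enumerate(person_ids) if q != p]
        yield train, test
```

The correct classification rate is then computed on each held-out person and averaged over the 10 splits.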

  19. The kernel function φ and the second feature vector y are denoted by the pair (φ, y) with the following symbols.
For the kernel function φ of image data (44 kinds in total):
kth-order correlation kernel: Ck
kth-order Lp norm kernel: Pk
kth-order absolute correlation kernel: Ak
Max norm kernel: Max, etc.
For the second feature vector y:
Indicator vector: IV
Semantic rating vector: SRV
E.g., (C2, SRV) denotes the 2nd-order correlation kernel with semantic rating vectors.
Classifier in the canonical variate space: nearest neighbor method.

  20. 8. Results of the experiment
Experiment (Ⅰ): correct classification rates (CCRs) with single classifiers.
Figure 3. CCRs with single kernel functions of 44 kinds in experiment (Ⅰ). IV: indicator vector, SRV: semantic rating vector. Highest CCR with SRV: 89%; highest CCR with IV: 94.5%.
The highest CCR (rightmost in the figure) is obtained with indicator vectors (IV), not with semantic rating vectors (SRV).

  21. Table 1(Ⅰ). Highest CCRs with single kernel functions and with combining two and three kernel functions in experiment (Ⅰ).
Combining correlation kernels increases CCRs. The highest CCR (97.3%) is superior to that of past studies (94.6%), and it is obtained not only with indicator vectors (IV) but also with semantic rating vectors (SRV).

  22. Experiment (Ⅱ)
Figure 4. Highest CCRs with kernel functions of 44 kinds in the 10 tests of experiment (Ⅱ): single kernel (a), combining two (b) and three (c) kernels. IV: indicator vector, SRV: semantic rating vector.
The highest average CCR (rightmost in Fig. 4(c)) is again obtained by combining 3 kernels, including semantic rating vectors (SRV).

  23. Table 1(Ⅱ). Highest CCRs with single kernel functions and with combining two and three kernel functions in experiment (Ⅱ).
The highest CCR (67.0%) is again obtained by combining 3 kernels, using not only indicator vectors (IV) but also semantic rating vectors (SRV).

  24. 9. Conclusion
KCCA with multiple correlation kernels and Kansei information was applied to facial expression recognition in experiments with the JAFFE database.
High correct classification rates (CCRs), equivalent to those of past studies, were obtained with correlation kernel KCCA without any feature extraction.
Combining multiple correlation kernels, and adding Kansei information in the form of semantic rating vectors, contributed to increasing the CCRs.
