
Eigenfaces for Recognition By: Matthew Turk and Alex Pentland 1991 IEEE

Presentation Transcript


  1. Eigenfaces for Recognition. By: Matthew Turk and Alex Pentland, 1991 IEEE. Presented by: Shane Brennan, 5/02/2005

  2. Defining Characteristics
  • A holistic method: it uses the whole face region in the recognition system.
  • Emphasizes significant features of the image, which may or may not coincide with features significant in human perception, such as the eyes, nose, ears, or mouth.
  • Performs well under a variety of lighting conditions.
  • Performs poorly under variations in image scale.

  3. Theory Behind Eigenfaces
  • Each input image of size N x N (intensity values) can be viewed as a vector of dimension N².
  • Images of faces, being similar in configuration, will not be distributed randomly in this N²-dimensional space.
  • Use PCA to project the input image into a lower-dimensional subspace. The vectors obtained from PCA are what we call the eigenvectors, or "eigenfaces".

  4. Calculating Eigenfaces
  • Take a collection of images of the people to be recognized; these images also serve as training data.
  • Average the M training image vectors to find Ψ = (1/M) ∑n = 1 to M Γn, where Γn is the nth image vector.
  • Each face differs from the average by Φi = Γi − Ψ.
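
The mean face and difference vectors above can be sketched in NumPy. This is a minimal illustration with synthetic data; the array shapes and variable names (faces, psi, phi) are assumptions, not the authors' code.

```python
import numpy as np

# Synthetic stand-in data: M = 8 "face images" of size 16 x 16,
# each flattened into a vector of dimension N^2 = 256.
rng = np.random.default_rng(0)
M, N = 8, 16
faces = rng.random((M, N * N))   # each row is one image vector Gamma_n

psi = faces.mean(axis=0)         # average face: Psi = (1/M) * sum(Gamma_n)
phi = faces - psi                # difference vectors: Phi_i = Gamma_i - Psi
```

By construction the difference vectors sum to zero, which is why (as slide 6 notes) at most M − 1 of their principal directions carry non-zero variance.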

  5. Calculating Eigenfaces, continued
  • These M vectors (Φi) then undergo Principal Component Analysis to find a set of M orthonormal vectors un and their associated eigenvalues λn.
  • These are the eigenvectors of the covariance matrix C = (1/M) ∑n = 1 to M ΦnΦnᵀ = AAᵀ, where A = [Φ1 ... ΦM].

  6. Finding C
  • C is an N² x N² matrix; calculating its eigenvectors directly is computationally expensive.
  • If M << N², there will only be M − 1 eigenvectors with non-zero eigenvalues, so an M x M matrix can be solved instead. Consider the eigenvectors vi of AᵀA such that AᵀAvi = μivi, which yields AAᵀAvi = μiAvi.
  • From this it can be seen that the Avi are eigenvectors of C = AAᵀ (and the μi are the eigenvalues).

  7. Finding C, continued
  • Following this, form the M x M matrix L = AᵀA and calculate its M eigenvectors, referred to as vi.
  • To form the eigenfaces ui, use the following equation: ui = ∑k = 1 to M vik Φk, for i = 1 ... M.
  • This takes the computation down from the order of N² to the order of M.
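
The small-matrix trick of slides 6–7 can be sketched as follows. The random "difference vectors" and the variable names (A, L, V, U) are illustrative stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N2 = 8, 256                      # 8 training faces, 16 x 16 pixels flattened
A = rng.standard_normal((N2, M))    # columns play the role of the Phi_i
A -= A.mean(axis=1, keepdims=True)  # mean-subtracted, so rank is at most M - 1

L = A.T @ A                         # small M x M matrix instead of N^2 x N^2
eigvals, V = np.linalg.eigh(L)      # eigenvectors v_i of A^T A (ascending order)
order = np.argsort(eigvals)[::-1]   # re-sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

U = A @ V[:, :M - 1]                # u_i = A v_i; drop the near-zero eigenvalue
U /= np.linalg.norm(U, axis=0)      # normalize each eigenface to unit length
```

Since L v = μ v implies AAᵀ(Av) = μ(Av), the columns of U are (up to scale) eigenvectors of the big covariance matrix, obtained from an 8 x 8 eigenproblem instead of a 256 x 256 one.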

  8. Classifying an Image
  • Given the set of M eigenfaces, choose the M' eigenfaces that have the highest associated eigenvalues.
  • M' can be a small number (on the order of 10–50).
  • Take a new face image, Γ, and project it into "face space" by the operation Wk = ukᵀ(Γ − Ψ), for k = 1 to M'.
  • The weights Wk form a vector Ωᵀ = [W1 ... WM'] which describes the contribution of each eigenface in representing the input face image.
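
The projection step is a single matrix-vector product. Below, the orthonormal eigenfaces U and the average face psi are random stand-ins for quantities that would come from the training stage.

```python
import numpy as np

rng = np.random.default_rng(2)
N2, M_prime = 256, 10
# Stand-in orthonormal eigenfaces (columns u_k) and average face.
U, _ = np.linalg.qr(rng.standard_normal((N2, M_prime)))
psi = rng.random(N2)

gamma = rng.random(N2)       # new face image, flattened
omega = U.T @ (gamma - psi)  # W_k = u_k^T (Gamma - Psi), k = 1..M'
```

The residual (Γ − Ψ) − UΩ is orthogonal to every eigenface, which is the fact the detection derivation on slide 13 relies on.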

  9. Classifying an Image, continued
  • Take Ω and determine which face class, if any, best describes the input face.
  • To do this, find the Euclidean distance εk = ||Ω − Ωk||, where Ωk is the weight vector describing the kth training face image. A face is identified as person k if εk is below some threshold Θε.
  • If every εk is greater than Θε, the input image is determined not to be any known face.
  • In addition, if the input vector lies far from face space, it can be classified as not being a face image at all.
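
The nearest-class rule can be written in a few lines. The stored weight vectors and the threshold value below are hypothetical; in practice Ωk comes from projecting each training face and Θε is tuned empirically.

```python
import numpy as np

def classify(omega, class_weights, theta_eps):
    """Return the index of the best-matching face class, or None if unknown."""
    # eps_k = ||Omega - Omega_k|| for every known person k
    dists = np.linalg.norm(class_weights - omega, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] < theta_eps else None

# Two known people with M' = 2 weight components (toy values).
class_weights = np.array([[0.0, 0.0], [5.0, 5.0]])
```

A query near person 0's weights is accepted; one far from both classes falls outside Θε and is rejected as unknown.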

  10. The average face (left), and several eigenfaces (right).

  11. An input face image and its projection onto face space.

  12. Detecting Faces
  • Given a large input image of size N x N, the locations of faces can be found.
  • For each S x S subregion of the image, project Φ onto face space with the operation Φf = ∑k = 1 to M' Wkuk. The distance of the local subregion from face space is ε(x,y) = ||Φ − Φf||.
  • Regions of the image with a low ε (below a given threshold) most likely contain a face (centered in the S x S subregion).
  • However, this calculation is extremely expensive (it must be repeated at each of the N² pixel locations); a more efficient method is needed.

  13. Detecting Faces, continued
  • ε² = ||Φ − Φf||² = (Φ − Φf)ᵀ(Φ − Φf) = ΦᵀΦ − ΦᵀΦf − Φfᵀ(Φ − Φf) = ΦᵀΦ − ΦfᵀΦf, since Φf is perpendicular to (Φ − Φf), so the last term vanishes and ΦᵀΦf = ΦfᵀΦf.
  • Since Φf is a linear combination of the eigenfaces, and the eigenfaces are orthonormal vectors, we can calculate ΦfᵀΦf as: ΦfᵀΦf = ∑i = 1 to M' Wi².
  • So ε²(x,y) = ΦᵀΦ − ∑i = 1 to M' Wi².

  14. Detecting Faces, continued
  • Each weight is Wi = Φᵀui = [Γ − Ψ]ᵀui = Γᵀui − Ψᵀui = I(x,y) ⊗ ui − Ψᵀui, where I(x,y) is the overall (large) input image and I(x,y) ⊗ ui denotes the correlation between I(x,y) and ui.
  • ΦᵀΦ = [Γ − Ψ]ᵀ[Γ − Ψ] = ΓᵀΓ − 2ΨᵀΓ + ΨᵀΨ = ΓᵀΓ − 2Γ ⊗ Ψ + ΨᵀΨ, where Γ ⊗ Ψ is the correlation between Γ and Ψ.

  15. Detecting Faces, continued
  • Bringing this all together (⊗ denotes correlation): ε²(x,y) = ΓᵀΓ − 2Γ ⊗ Ψ + ΨᵀΨ − ∑i = 1 to M' [I(x,y) ⊗ ui − Ψᵀui]².
  • Ψ and the ui are fixed, so ΨᵀΨ and the Ψᵀui can be computed ahead of time.
  • This means that only M' + 1 correlations must be computed, as well as the ΓᵀΓ term. (Note that M' is typically on the order of 10–50.)
  • ΓᵀΓ is computed by squaring I(x,y) and then summing the squared values over the local subregion.
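
The key identity behind this speed-up, ε² = ΦᵀΦ − ∑Wi², can be sanity-checked numerically. The eigenfaces U, average face psi, and subregion gamma below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
N2, M_prime = 256, 10
U, _ = np.linalg.qr(rng.standard_normal((N2, M_prime)))  # stand-in eigenfaces
psi = rng.random(N2)                                     # stand-in average face
gamma = rng.random(N2)                                   # one S x S subregion, flattened

phi = gamma - psi
w = U.T @ phi                            # weights W_i
phi_f = U @ w                            # projection onto face space
eps2_direct = np.sum((phi - phi_f) ** 2) # ||Phi - Phi_f||^2, computed directly
eps2_fast = phi @ phi - np.sum(w ** 2)   # eps^2 = Phi^T Phi - sum W_i^2
```

The fast form needs only dot products (correlations, when swept over the whole image), never the explicit N²-dimensional residual.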

  16. The input image (left), and the corresponding face map (right). Dark areas indicate the presence of a face.

  17. Additional Features
  • Recognition can be improved by taking training images at different sizes and at different angles.
  • This creates a number of different face spaces.
  • The face can then be recognized under varying conditions of size and rotation.
  • This also allows the algorithm to identify the orientation of the face.
  • A Gaussian window can be used to suppress the background, ensuring that background features do not become significant in the eigenfaces.

  18. Experiments and Results
  • In a study using 16 subjects under a variety of lighting, size, and head orientation, with Θε set to infinity (no input images rejected as unknown), the following results were obtained:
  96% correct classification over lighting variation
  85% correct classification over orientation variation
  64% correct classification over size variation
  • Setting Θε for 100% accurate recognition causes some images of known individuals to be rejected as unknown; the rejection rates are:
  19% under variation of lighting
  39% under variation of orientation
  60% under variation of size

  19. Thank You!
