# Eigenimage Methods for Face Recognition

### Eigenimage Methods for Face Recognition

Professor Padhraic Smyth, CS 175, Fall 2007

Outline of Today's Lecture
• Progress Reports due 9am Monday
• Eigenimage Techniques
  • Represent an image as a weighted sum of a small number of "basis images" (eigenimages)
  • Basis images and weights can be found by eigenvector calculations
  • Can be used for feature detection in images
  • MATLAB code provided
Project Progress Reports
• Full details on the class Web page
• 2 to 3 pages in length
  • Write clearly; use figures
• Section 1: list of the overall goals of the project
  • At least 3 to 5 goals
• Section 2: discuss which parts of the goals you have achieved so far
  • e.g., getting data, writing feature-extraction code, etc.
• Section 3: what remains to be done on the project
• Project demo script
  • A script that runs in less than 1 minute to illustrate progress
• Report is due 9am Monday morning
Basis functions

Recall from linear algebra that we can write a vector as a linear combination of orthogonal basis functions.

Basis: v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1)

e.g., x = [4.2, 8.7, 3.1] = 4.2 v1 + 8.7 v2 + 3.1 v3
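A one-line MATLAB check of this example (MATLAB being the course's language):

```matlab
% With the standard basis, the weights are just the vector's coordinates.
v1 = [1 0 0]; v2 = [0 1 0]; v3 = [0 0 1];
x = 4.2*v1 + 8.7*v2 + 3.1*v3    % returns [4.2 8.7 3.1]
```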

Basis functions for images

What are possible basis functions for images?

One possible set of basis functions is the "individual pixels" basis:

Basis: v1 = (1, 0, 0, 0), v2 = (0, 1, 0, 0), v3 = (0, 0, 1, 0), v4 = (0, 0, 0, 1)

[Figure: each basis vector v1 through v4 displayed as an image with a single pixel turned on]

Other basis functions for images?
• Perhaps there are other (better) sets of basis functions?
• Imagine that a set of images is composed of weighted sums of "basis images"

Given a set of basis images v1, v2, …, vN, we can represent any real image as a weighted linear combination of the basis images, i.e.,

Image I ≈ w1 v1 + w2 v2 + … + wN vN

We can represent I exactly if N = the number of pixels in image I.

We are often interested in representing an image I approximately by a small set of N weights, [w1, w2, …, wN].

Visual example

[Figure: two 3 x 3 basis images v1 and v2, and the image I = 0.9 v1 + 0.5 v2]

• Note that with only 2 basis vectors we can only "recreate" a subset of all possible 3 x 3 images

Data Compression

[Figure: original image, approximate image = 0.9 v1 + 0.5 v2, and the error image (note: white = 0 in these images, black = 1)]

• We can transmit the 2 coefficients 0.9 and 0.5 to approximately represent the original image
• So instead of 9 pixel values we just need to transmit 2 weights
• This is "lossy" data compression
• Note: this only works well if the real images can (approximately) be thought of as "superpositions" of a set of basis images

Applying this idea to face images: eigenimages

M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71–86.

Singular Value Decomposition (SVD)
• Let D be a data matrix:
  • D has n rows (one for each image)
  • D has c columns (c = the number of pixels)

Basic result in linear algebra (when n > c):

D = U S V

where U = n x c matrix of weights
      S = c x c diagonal matrix
      V = c x c matrix, with rows = basis vectors (also known as singular vectors or eigenvectors)

This is known as the singular value decomposition (SVD) of D. All matrices can be represented in this manner.

SVD Representation of a Matrix

[Figure: block diagram of D (n x c) = U (n x c, the weights) x S (c x c, the scale factors) x V (c x c, the basis vectors)]

SVD Example
• Data:

D = 10  20  10
     2   5   2
     8  17   7
     9  20  10
    12  22  11

• Note the pattern in the data above: the center-column values are typically about twice the 1st and 3rd column values
• So there is redundancy in the columns, i.e., the column values are correlated

SVD Example

D = U S V, where

U = 0.50   0.14  -0.19
    0.12  -0.35   0.07
    0.41  -0.54   0.66
    0.49  -0.35  -0.67
    0.56   0.66   0.27

S = 48.6   0     0
     0     1.5   0
     0     0     1.2

V = 0.41   0.82   0.40
    0.73  -0.56   0.41
    0.55   0.12  -0.82

• Note that the first singular value is much larger than the others
• The first basis function (or eigenvector) carries most of the information, and it "discovers" the pattern of column dependence
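The example is easy to reproduce in MATLAB. One caveat: MATLAB's svd returns D = u*s*v', so the slide's V (rows = basis vectors) corresponds to MATLAB's v', and individual vectors may come back with flipped signs:

```matlab
% Reproduce the SVD example with MATLAB's economy-size svd.
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[u, s, v] = svd(D, 0);       % economy-size: u is 5x3, s and v are 3x3
diag(s)'                     % singular values: approx [48.6 1.5 1.2]
v'                           % rows = the basis vectors shown on the slide
norm(D - u*s*v')             % reconstruction error: essentially 0
```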

Rows in D = weighted sums of basis vectors
• 1st row of D = [10 20 10]
• Since D = U S V, then D(1,:) = U(1,:) * S * V = [24.5 0.2 -0.22] * V
• V = 0.41   0.82   0.40
      0.73  -0.56   0.41
      0.55   0.12  -0.82
• D(1,:) = 24.5 v1 + 0.2 v2 - 0.22 v3, where v1, v2, v3 are the rows of V and are our basis vectors

Thus, [24.5, 0.2, -0.22] are the weights that characterize row 1 of D.

In general, the ith row of U*S is the set of weights for the ith row of D.
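Continuing from the snippet above, the weights fall out of u*s:

```matlab
% The rows of u*s are the weights for the rows of D (up to sign).
weights = u * s;             % 5x3 matrix; row i = weights for row i of D
weights(1, :)                % approx [24.5 0.2 -0.22]
weights(1, :) * v'           % rebuilds D(1,:) = [10 20 10]
```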

Approximating the matrix D
• We could approximate any row of D using just a single weight
• Row 1:
  • D(1,:) = 10 20 10
  • Can be approximated by w1*v1 = 24.5*[0.41 0.82 0.40] = [10.05 20.09 9.80]
  • Note that this is a close approximation of D(1,:)
  • Similarly for any other row
• This is the basis for data compression:
  • The sender sends the receiver a small number of weights
  • The receiver then reconstructs the signal using the weights + the basis functions
  • Results in far fewer bits being sent on average – the trade-off is some loss in the quality of the original signal
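And the single-weight (rank-1) approximation, continuing from the same snippet:

```matlab
% Rank-1 compression: keep only the first weight per row and the first
% basis vector. 5 weights + 3 basis values stand in for all 15 entries.
D1 = weights(:, 1) * v(:, 1)';   % 5x3 rank-1 approximation of D
max(max(abs(D - D1)))            % worst-case absolute error is small
```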
Summary of SVD Representation

D = U S V

• Data matrix D: rows = data vectors
• V matrix: rows = our basis functions
• U*S matrix: rows = weights for the rows of D

How do we compute U, S, and V?
• The SVD decomposition is related to a standard eigenvalue problem:
  • The eigenvectors of D'D = the rows of V
  • The eigenvectors of DD' = the columns of U
  • The diagonal elements of S are the square roots of the eigenvalues of D'D
• Notation: D'D is referred to as the covariance matrix of D (strictly, it is proportional to the covariance matrix when the columns of D are mean-centered)

=> finding U, S, V is equivalent to finding the eigenvectors of D'D

• Solving an eigenvalue problem is equivalent to solving a set of linear equations – time complexity is O(nc^2 + c^3)
• In MATLAB we can calculate this using the svd.m function, i.e., [u, s, v] = svd(D);
• If the matrix D is non-square, we can use the economy-size version, svd(D, 0)
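A quick check of the svd/eig connection on the running example (a sketch; eigenvalue ordering and eigenvector signs may differ between the two routines):

```matlab
% Verify that the singular values are the square roots of eig(D'*D).
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[~, s, ~] = svd(D, 0);
lambda = eig(D' * D);                 % eigenvalues of D'D
sort(sqrt(lambda), 'descend')         % matches the singular values...
diag(s)                               % ...returned by svd
```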
Properties of SVD
• An n x c matrix (with n > c) has c c-dimensional eigenvectors (the rows of V)
• We can represent any row vector in D as a weighted sum of these c basis vectors
• Approximating a vector:
  • The best k-dimensional approximation to a vector v, k < c, where "best" means "closest in squared error", is the weighted sum of the first k eigenvectors
• Can use this for data compression:
  • i.e., instead of storing (or transmitting) the full c-dimensional vector (all c numbers), just store/send the k "basis weights"
Modification: more columns than rows
• D = n x c matrix: n rows, c columns
• The theory on the previous slides holds for n > c
• But if c = number of pixels and n = number of images, we can easily have the situation where n < c
• In the situation where n < c, we compute:

D = U S V, where U = n x n, S = n x n, V = n x c

• Use the svd(D,0) or svds(D) function when n < c
• [u, s, v] = svds(D, k) gives the first k basis functions (eigenvectors) of D
• The number of basis functions k must be less than or equal to n
  • (this is in effect the number of degrees of freedom we have)
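A small sketch of the n < c case, using random stand-in data (the variable sizes, not the values, are the point here):

```matlab
% svds on a "wide" matrix: more pixels than images (n < c).
n = 20; c = 500;
D = randn(n, c);                 % stand-in data; real rows would be images
k = 8;                           % must satisfy k <= n
[u, s, v] = svds(D, k);          % u: n x k, s: k x k, v: c x k
size(u), size(s), size(v)
```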
Applying this to images
• Convert each image to vector form
  • n = number of images
  • c = number of pixels in each image
  • D = n x c data matrix
• Run svds on the matrix D
  • [u, s, vtranspose] = svds(D, k), where k is the number of "eigenimages" we want
  • The columns of vtranspose (reformed as images) are our eigenimages
  • The diagonal elements of s tell us how much "energy" is in each eigenimage
  • The rows of u give us the basis-function weights for each row (image) in D
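A minimal sketch of this pipeline, assuming the images are equal-sized grayscale matrices held in a cell array (the class's pca_image.m is not reproduced here; the random images and the names imgs, h, w are illustrative stand-ins):

```matlab
% Build the n x c data matrix D from n images of h x w pixels (c = h*w),
% remove each image's mean, and extract k eigenimages with svds.
n = 20; h = 81; w = 128;                % e.g., 81 x 128 cropped face images
imgs = cell(1, n);
for i = 1:n, imgs{i} = rand(h, w); end  % stand-ins for real face images
D = zeros(n, h*w);
for i = 1:n
    x = imgs{i}(:)';                    % image as a 1 x (h*w) row vector
    D(i, :) = x - mean(x);              % remove the mean from each image
end
k = 8;                                  % number of eigenimages (k <= n)
[u, s, vtranspose] = svds(D, k);        % columns of vtranspose = eigenimages
eig1 = reshape(vtranspose(:, 1), h, w); % first eigenimage, ready to display
```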
Examples of results in MATLAB
• Ran svd on all 20 dstraight "happy" images, at full resolution
  • dstraight(1:20,2)
• "Cut out" the top 10 and bottom 30 rows of each image:
  • newimage = image(10:90, 1:128)
• Converted each image to vector form
• Removed the mean from each image (i.e., from each row of the matrix)
• Ran svds.m on the resulting matrix
• Code to do this is available on the class Web page, under "MATLAB code for Projects": eigenimage_code.zip
• Call [u,s,vtranspose,eigenimages] = pca_image(dstraight(1:20,2));
  • [u, s, vtranspose] have been described already
  • eigenimages contains the columns of vtranspose (the basis vectors) reshaped as eigenimages (ready for display)

Reconstructing images from basis vectors
• D = U S V'
• So each row of D (each image) can be represented as a weighted sum of the columns of V (the eigenimages)
• Weights for row i of D = (row i of U) .* (diagonal of S), i.e., row i of U*S
• MATLAB code in eigenimage_code.zip implements this:
  • Call pca_reconstruct.m, e.g., pca_reconstruct(3, dstraight(3,2).image, eigenimages, u, s, 8)
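A minimal sketch of what this reconstruction amounts to, continuing from the image-matrix sketch above (pca_reconstruct.m itself is not reproduced here):

```matlab
% Rebuild one image from its k eigenimage weights.
i = 1;                                   % which image (row of D) to rebuild
wts = u(i, :) .* diag(s)';               % weights = (row i of U) .* diag(S)
xhat = wts * vtranspose';                % 1 x (h*w) reconstructed row vector
recon = reshape(xhat, h, w);             % reconstructed image
errimg = reshape(D(i, :) - xhat, h, w);  % error image
```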

[Figure: reconstruction of the first image (individual 3) with 8 eigenimages – using only 8 weights instead of ~12,000 pixels. Weights = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6. The reconstructed image (shown next to the original) is the weighted sum of the 8 eigenimages on the left.]

[Figure: reconstruction of the 7th image with 8 eigenimages. Weights = -13.7 12.9 1.6 4.4 3.0 0.9 1.6 -6.3 (compare the weights for image 1 = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6). The reconstructed image (shown next to the original) is the weighted sum of the 8 eigenimages on the left.]

[Figure: reconstruction of the 17th image with 8 eigenimages. Weights = -24.2 -9.3 6.4 -2.2 -4.3 10.2 2.5 -1.5 (compare the weights for image 1 = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6). The reconstructed image (shown next to the original) is the weighted sum of the 8 eigenimages on the left.]

Weights as Features

The first four eigenimage weights for each of the 20 individuals:

| Individual | w1 | w2 | w3 | w4 |
|-----------:|---------:|---------:|---------:|--------:|
| 1  | -14.0174 |   9.3872 |  -1.0557 | -3.5383 |
| 2  | -21.8113 |  -1.9416 |  -6.0173 |  1.2616 |
| 3  | -19.5562 |   1.8604 |   3.1435 | -1.9078 |
| 4  | -17.9185 |   1.8457 |   0.7836 |  2.0913 |
| 5  | -24.8861 |   1.6758 |  -2.4818 | -1.5604 |
| 6  | -22.3920 |  -0.9511 |   3.4839 | -2.0369 |
| 7  | -13.7527 |  12.9235 |   1.5948 |  4.3836 |
| 8  | -11.4462 |   1.3205 |  -1.5536 | -8.1587 |
| 9  | -13.2034 |   9.2059 |   2.8529 |  4.4483 |
| 10 | -17.2305 |   0.6142 |   0.9991 | -3.3872 |
| 11 | -15.9689 |   5.0422 |  -0.1318 | -8.3479 |
| 12 | -21.2763 | -10.1233 |   1.8639 |  5.2177 |
| 13 | -22.6525 |  -2.0573 |  -1.4602 | -2.5266 |
| 14 | -21.1102 |  -4.0606 |   0.6635 | -0.5346 |
| 15 | -18.9986 |  -1.2998 |   2.1970 | -2.3426 |
| 16 | -17.0313 |  -1.6442 | -11.0145 |  2.6578 |
| 17 | -24.2221 |  -9.3465 |   6.4540 | -2.1699 |
| 18 | -11.0047 |   0.8536 | -14.5666 |  1.7401 |
| 19 | -24.8944 |  -2.6833 |   1.2127 |  6.3758 |
| 20 | -17.0712 |   5.8351 |   5.6467 |  6.0825 |

Using eigenmethods for feature extraction
• Typical usage:
  • Represent c-dimensional pixel images with k-dimensional eigenweights, where k << c
  • Then classify images in the k-dimensional eigenspace
• Can apply this idea to local parts of the images, e.g., derive eigen-mouths, eigen-noses, etc.
• Note: this will only work well if there is systematic variation across the images that can be explained as a linear sum of basis images
• Note: eigenimages are trickier to work with than templates (for example) – recommended only if you are willing to spend some time learning how they work
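As an illustration, a 1-nearest-neighbor classifier in the eigenweight space, continuing from the sketch above (the choice of classifier and the label assignment are assumptions; the lecture only says to classify in the k-dimensional eigenspace):

```matlab
% Classify a new image by its nearest training image in eigenweight space.
trainW = u * s;                         % n x k weights for the training images
labels = (1:size(trainW, 1))';          % e.g., one label per training image
xnew = rand(1, h*w); xnew = xnew - mean(xnew);   % stand-in test image,
                                        % mean-removed like the rows of D
wnew = (vtranspose \ xnew')';           % 1 x k least-squares projection weights
d2 = sum((trainW - wnew).^2, 2);        % squared distances (implicit expansion)
[~, nearest] = min(d2);
predicted = labels(nearest)
```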
Summary
• Eigenimage techniques
  • Represent an image as a weighted sum of a small number of "basis images" (eigenimages)
  • Basis images and weights can be found by eigenvector calculations
  • Can be used for feature detection in images
  • MATLAB code provided
• Progress Report due 9am Monday

### Additional Notes on Finding the Weights for a New Image given an Existing Set of Eigenimages

Reconstructing images from basis vectors (recap)
• D = U S V'
• Each row of D (each image) can be represented as a weighted sum of the columns of V (the eigenimages)
• Weights for row i of D = (row i of U) .* (diagonal of S)
Finding projections of new images in basis space
• If an image is a particular row of D, then the weights for that image are just the corresponding row of U multiplied by the diagonal elements of S
• What if we have a new image (e.g., a test image) that is not in D? How can we find the combining weights?
• Say we are using K eigenimages
  • Each image is represented as a set of weights in K-dimensional space
  • Finding the weights for a new image is referred to as "projection" into this K-dimensional space
Finding projections of new images in basis space
• Let B be the new image written as a column vector, d x 1
• Let V be the set of K basis functions (eigenimages) written as a matrix of dimension d x K
  • So each column of V is an eigenimage (basis vector) of size d x 1, and we have K columns in total
• Let w be an unknown row vector of weights; w has dimension 1 x K
• We can write B = V w'

Dimensionality check: (d x 1) = (d x K)(K x 1)

Projection

B = V w'

Dimensionality: (d x 1) = (d x K)(K x 1)

• In general, we can't represent B perfectly. Why? We are using K weights (e.g., K = 10) to represent an original image B with perhaps d = 10,000 pixels
• We would like to find w such that Vw' is as "close as possible" to B
Projection (continued)

B = V w'

Dimensionality: (d x 1) = (d x K)(K x 1)

• We would like to find w such that Vw' is as "close as possible" to B
• Let e = B – Vw' (a d x 1 vector of error terms)
• Minimize the quantity e'e (= the sum of squared errors, pixel by pixel) as a function of w
  • i.e., minimize (B – Vw')'(B – Vw') as a function of w
• This is the least-squares solution for w
Projection (continued)

B = V w'

Dimensionality: (d x 1) = (d x K)(K x 1)

• We can find the least-squares solution for w, i.e., the set of weights w that minimizes the sum of squared pixel differences between the true B and the approximation Vw'
• There are different techniques from linear algebra we can use to find the least-squares solution for w
• "Pseudo-inverse" technique: V is d x K (not square), so it has no ordinary inverse; instead, premultiply each side by the pseudo-inverse V+ = (V'V)^{-1}V':

V+ B = V+ V w' = w'

(When the columns of V are orthonormal, as eigenimages are, V'V = I and this simplifies to w' = V'B.)
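For completeness, a short worked version of the least-squares step (the standard normal-equations derivation, written in the slides' prime notation for transpose; this derivation is not on the slides themselves):

```latex
e'e = (B - Vw')'(B - Vw') = B'B - 2\,w V'B + w V'V w'

\frac{\partial (e'e)}{\partial w'} = -2V'B + 2V'Vw' = 0
\;\Longrightarrow\; w' = (V'V)^{-1}V'B = V^{+}B
```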

In MATLAB…

help mldivide

\   Backslash or left matrix divide.
    A\B is the matrix division of A into B, which is roughly the
    same as INV(A)*B, except it is computed in a different way.
    …
    If A is an M-by-N matrix with M < or > N and B is a column
    vector with M components, or a matrix with several such columns,
    then X = A\B is the solution in the least squares sense to the
    under- or overdetermined system of equations A*X = B.

MATLAB (continued)

```matlab
function weights = pca_project(orig_image,s,v,k)
% pca_project(orig_image,s,v,k)
%
% A simple function to reconstruct an image using the
% first k eigenvectors (the columns of v)
%
% ...
weights = v\orig_image';            % the optimal least-squares weights,
                                    % computed with the "\" operator
new_image = v(:,1:k)*weights(1:k);  % the reconstructed (approximate) image
a(1).image = reshape(new_image,r2-r1+1,c2-c1+1);
a(2).image = reshape(orig_image,r2-r1+1,c2-c1+1);
a(3).image = reshape((new_image'-orig_image),r2-r1+1,c2-c1+1);
dispset2d(a);
```
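The same projection without the display code, as a two-line sketch (assumptions: xnew is a mean-removed 1 x d image row vector, vtranspose is the d x K basis matrix with orthonormal columns, and k <= K; these names are illustrative, not from the listing):

```matlab
wts  = vtranspose(:, 1:k) \ xnew';      % least-squares weights via "\"
xhat = (vtranspose(:, 1:k) * wts)';     % reconstructed (approximate) image vector
```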

[Figure: two examples showing, side by side, the original image, the reconstructed image, and the error image for projections of new images onto the eigenimage basis]