# Eigenimage Methods for Face Recognition

1. Eigenimage Methods for Face Recognition
Professor Padhraic Smyth, CS 175, Fall 2007

2. Outline of Today's Lecture
- Progress Reports: due 9am Monday
- Eigenimage Techniques:
  - Represent an image as a weighted sum of a small number of "basis images" (eigenimages)
  - Basis images and weights can be found by eigenvector calculations
  - Can be used for feature detection in images
  - MATLAB code provided

3. Project Progress Reports
- Full details on the Web page
- 2 to 3 pages in length: write clearly and use figures
- Section 1: list the overall goals of the project (at least 3 to 5 goals)
- Section 2: discuss which of the goals you have achieved so far (e.g., getting data, writing feature extraction code, etc.)
- Section 3: what remains to be done on the project
- Project demo script: a script that runs in less than 1 minute to illustrate progress; upload all code and data needed for your script
- The report is due 9am Monday morning

4. Eigenimage Methods

5. Basis functions
Recall from linear algebra that we can write a vector as a linear combination of orthogonal basis functions. With the basis
v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1)
we have x = [4.2, 8.7, 3.1] = 4.2 v1 + 8.7 v2 + 3.1 v3.
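As a quick MATLAB check of the expansion above:

```matlab
% Express x in the standard basis of R^3: the weights are just its components.
v1 = [1 0 0]; v2 = [0 1 0]; v3 = [0 0 1];
x = 4.2*v1 + 8.7*v2 + 3.1*v3    % gives [4.2 8.7 3.1]
```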

6. Basis functions for images
What are possible basis functions for images? One possible set of basis functions is the "individual pixels" basis:
v1 = (1, 0, 0, 0), v2 = (0, 1, 0, 0), v3 = (0, 0, 1, 0), v4 = (0, 0, 0, 1)

7. Basis functions for images (continued)
[Figure: the same four basis vectors v1 through v4 shown as images, each with a single nonzero pixel.]

8. Other basis functions for images?
- Perhaps there are other (better) sets of basis functions?
- Imagine that a set of images is composed of a weighted sum of "basis images" plus some additive noise.

9. Other basis functions for images? (continued)
Given a set of basis images v1, v2, ..., vN, we can represent any real image as a weighted linear combination of basis images, i.e., image I is approximately equal to

I ≈ w1 v1 + w2 v2 + ... + wN vN

We can represent I exactly if N = the number of pixels in image I. We are often interested in representing an image I approximately by a small set of N weights, [w1, w2, ..., wN].

10. Visual example
[Figure: two 3 x 3 basis images, v1 and v2.]

11. Visual example (continued)
[Figure: the image I = 0.9 v1 + 0.5 v2 formed from the two basis images.]

12. Visual example (continued)
Note that with only 2 basis vectors we can only "recreate" a subset of all possible 3 x 3 images.
[Figure: v1, v2, and I = 0.9 v1 + 0.5 v2.]

13. Data Compression
[Figure: the original 3 x 3 image.]

14. Data Compression (continued)
[Figure: the original image next to the approximate image = 0.9 v1 + 0.5 v2.]

15. Data Compression (continued)
[Figure: the original image, the approximate image = 0.9 v1 + 0.5 v2, and the error image. Note: white = 0 and black = 1 in these images.]

16. Data Compression
- We can transmit just the 2 coefficients, 0.9 and 0.5, to approximately represent the original image.
- So instead of 9 pixel values we need to transmit only 2 weights.
- This is "lossy" data compression.
- Note: this only works well if the real images can (approximately) be thought of as "superpositions" of a set of basis images (a small numeric sketch follows).
[Figure: the original image and the approximate image = 0.9 v1 + 0.5 v2.]
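To make the arithmetic concrete, here is a small MATLAB sketch. The slides' actual 3 x 3 basis images appear only as pictures, so v1 and v2 below are hypothetical stand-ins:

```matlab
% Hypothetical 3x3 basis images (not the slides' actual ones)
v1 = [1 1 1; 0 0 0; 0 0 0];
v2 = [0 0 0; 0 0 0; 1 1 1];
% Two weights stand in for all 9 pixel values:
I_approx = 0.9*v1 + 0.5*v2
```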

17. Applying this idea to face images: eigenimages
M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86.

18. Singular Value Decomposition (SVD)
Let D be a data matrix:
- D has n rows (one for each image)
- D has c columns (c = number of pixels)
[Figure: the n x c data matrix D, with n rows and c columns.]

19. Singular Value Decomposition (SVD)
Let D be the n x c data matrix above (n images, c pixels). A basic result in linear algebra (when n > c): D = U S V, where
- U = n x c matrix of weights
- S = c x c diagonal matrix
- V = c x c matrix, with rows = basis vectors (also known as singular vectors or eigenvectors)
This is known as the singular value decomposition (SVD) of D. All matrices can be represented in this manner.

20. SVD Representation of a Matrix
[Figure: block diagram of D (n x c) = U (n x c, the weights) times S (c x c, the scale factors) times V (c x c, the basis vectors).]
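In MATLAB the factorization looks like the sketch below. One caution on conventions: MATLAB's svd returns D = U*S*V' with the basis vectors in the columns of V, so MATLAB's V is the transpose of the V used on these slides:

```matlab
% Toy data: n = 100 "images", c = 30 "pixels" (random values stand in for images)
D = randn(100, 30);
[U, S, V] = svd(D, 0);          % economy size: U is 100x30, S and V are 30x30
err = norm(D - U*S*V', 'fro')   % essentially zero: the factorization is exact
```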

21. SVD Example
Data:

```
D =  10  20  10
      2   5   2
      8  17   7
      9  20  10
     12  22  11
```

22. SVD Example (continued)
Note the pattern in the data above: the center-column values are typically about twice the 1st and 3rd column values. So there is redundancy in the columns, i.e., the column values are correlated.

23. SVD Example (continued)
The SVD of the data matrix D above is D = U S V, where:

```
U =  0.50   0.14  -0.19
     0.12  -0.35   0.07
     0.41  -0.54   0.66
     0.49  -0.35  -0.67
     0.56   0.66   0.27

S = 48.6    0      0
     0      1.5    0
     0      0      1.2

V =  0.41   0.82   0.40
     0.73  -0.56   0.41
     0.55   0.12  -0.82
```

24. SVD Example (continued)
Note that the first singular value (48.6) is much larger than the others.

25. SVD Example (continued)
The first basis function (or eigenvector) carries most of the information, and it "discovers" the pattern of column dependence.

26. Rows in D = weighted sums of basis vectors
The 1st row of D is [10 20 10]. Since D = U S V, we have
D(1,:) = U(1,:) * S * V = [24.5 0.2 -0.22] * V
so
D(1,:) = 24.5 v1 + 0.2 v2 - 0.22 v3
where v1, v2, v3 are the rows of V above and are our basis vectors. Thus [24.5, 0.2, -0.22] are the weights that characterize row 1 of D. In general, the ith row of U*S is the set of weights for the ith row of D.
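The whole worked example can be reproduced in MATLAB. Note that singular vectors are only determined up to sign, so some signs may be flipped relative to the slides:

```matlab
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[U, S, Vt] = svd(D, 0);    % D = U*S*Vt'; the rows of Vt' are the slides' basis vectors
diag(S)'                   % approx [48.6 1.5 1.2]
weights = U(1,:) * S       % approx [24.5 0.2 -0.22], the weights for row 1
row1 = weights * Vt'       % recovers D(1,:) = [10 20 10]
```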

27. Approximating the matrix D
We can approximate any row of D using just a single weight. For example, row 1, D(1,:) = [10 20 10], can be approximated by
w1 * v1 = 24.5 * [0.41 0.82 0.40] = [10.05 20.09 9.80]
which is a close approximation of D(1,:); similarly for any other row. This is the basis for data compression:
- The sender and receiver agree on the basis functions in advance.
- The sender then sends the receiver a small number of weights.
- The receiver reconstructs the signal using the weights plus the basis functions.
- This results in far fewer bits being sent on average; the trade-off is some loss in the quality of the original signal.
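The same one-weight (rank-1) approximation in MATLAB, with the sign caveat as before:

```matlab
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[U, S, Vt] = svd(D, 0);
% Keep only the first weight and the first basis vector for every row:
D1 = U(:,1) * S(1,1) * Vt(:,1)'    % row 1 is approx [10.05 20.09 9.80]
```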

28. Summary of SVD Representation
D = U S V
- Data matrix D: rows = data vectors
- V matrix: rows = our basis functions
- U*S matrix: rows = weights for the rows of D

29. How do we compute U, S, and V?
- The SVD decomposition is related to a standard eigenvalue problem:
  - The eigenvectors of D'D = the rows of V
  - The eigenvectors of D D' = the columns of U
  - The diagonal elements of S are the square roots of the eigenvalues of D'D
- Notation: D'D is referred to as the covariance matrix of D (strictly speaking it is proportional to the covariance matrix when the columns of D are mean-centered), so finding U, S, V is equivalent to finding the eigenvectors of D'D
- Solving an eigenvalue problem costs about the same as solving a set of linear equations; the time complexity is O(n c^2 + c^3)
- In MATLAB we can calculate this using the svd.m function: [u, s, v] = svd(D) (note that MATLAB returns v with the basis vectors in its columns, i.e., v = V' in the notation of these slides)
- If the matrix D is non-square, we can use svd(D,0)
The eigenvalue connection is verified in the sketch below.
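Using the earlier worked example:

```matlab
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[U, S, V] = svd(D, 0);
[Veig, Lambda] = eig(D'*D);              % eigenvectors and eigenvalues of D'D
sqrt(sort(diag(Lambda), 'descend'))'     % matches diag(S)' = [48.6 1.5 1.2]
```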

30. Properties of SVD
- An n x c matrix (with n > c) has c c-dimensional eigenvectors (the rows of V)
- We can represent any vector in D as a weighted sum of these c basis vectors
- Approximating a vector: the best k-dimensional approximation to a vector v, for k < c, where "best" means closest in squared error, is the weighted sum of the first k eigenvectors
- We can use this for data compression: instead of storing (or transmitting) the full c-dimensional vector (all c numbers), we just store/send the k "basis weights" (see the sketch below)
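A minimal MATLAB check that the reconstruction error falls as more basis vectors are kept:

```matlab
D = [10 20 10; 2 5 2; 8 17 7; 9 20 10; 12 22 11];
[U, S, Vt] = svd(D, 0);
for k = 1:3
    Dk = U(:,1:k) * S(1:k,1:k) * Vt(:,1:k)';   % best rank-k approximation of D
    fprintf('k = %d: squared error = %.4f\n', k, norm(D - Dk, 'fro')^2);
end
```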

31. Modification: more columns than rows
- D = n x c matrix: n rows, c columns
- The theory on the previous slides holds for n > c
- But if c = the number of pixels and n = the number of images, we can easily have n < c
- In the situation where n < c, we compute D = U S V, where U = n x n, S = n x n, and V = n x c
- Use the svd(D,0) or svds(D) function when n < c
- [u, s, v] = svds(D, k) gives the first k basis functions (eigenvectors) for D
- The number of basis functions k must be less than or equal to n (this is in effect the number of degrees of freedom we have)
A toy-size sketch follows.
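Here random data stands in for real images, just to show the shapes:

```matlab
D = randn(20, 10368);            % n = 20 images, c = 10368 pixels, so n < c
k = 8;                           % k must be <= n = 20
[u, s, v] = svds(D, k);          % u is 20x8, s is 8x8, v is 10368x8
err = norm(D - u*s*v', 'fro');   % error of the best rank-8 approximation
```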

32. Applying this to images
- Convert each image to vector form
- n = number of images
- c = number of pixels in each image
- D = n x c data matrix

33. Applying this to images (continued)
- Convert each image to vector form, giving the n x c data matrix D
- Run svds on the matrix D: [u, s, vtranspose] = svds(D, k), where k is the number of "eigenimages" we want
- The columns of vtranspose (reformed as images) are our eigenimages
- The diagonal elements of s tell us how much "energy" is in each eigenimage
- The rows of u give us the basis-function weights for each row (image) in D

34. Examples of results in MATLAB
- Ran svd on all 20 dstraight "happy" images at full resolution: dstraight(1:20,2)
- "Cut out" the top 10 and bottom 30 rows: newimage = image(10:90, 1:128)
- Converted each image to vector form
- Removed the mean from each image (from each row)
- Ran svds.m on the resulting matrix
- Code to do this is available on the class Web page under "MATLAB code for Projects": eigenimage_code.zip
- Call [u, s, vtranspose, eigenimages] = pca_image(dstraight(1:20,2));
  - u, s, and vtranspose have been described already
  - eigenimages contains the columns of vtranspose (the basis vectors) reshaped as images, ready for display
A sketch of this pipeline appears below.
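The class's actual implementation is pca_image.m in eigenimage_code.zip; the sketch below is only a plausible reading of the steps listed above. The cell array imgs and the per-image mean removal are assumptions based on the slide:

```matlab
% Assumed input: imgs, a cell array of equal-size grayscale face images
nimages = numel(imgs);
tmp = imgs{1}(10:90, 1:128);        % cut out top and bottom rows, as on the slide
[nr, nc] = size(tmp);
D = zeros(nimages, nr*nc);
for i = 1:nimages
    im = double(imgs{i}(10:90, 1:128));
    D(i,:) = im(:)' - mean(im(:));  % vectorize and remove each image's mean
end
k = 16;
[u, s, vtranspose] = svds(D, k);    % columns of vtranspose = eigenimages
eigenimage1 = reshape(vtranspose(:,1), nr, nc);   % reshape one for display
```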

35. Original 20 images, dstraight(1:20,2)

36. First 16 Eigenimages (ordered columnwise)

37. First 4 eigenimages

38. Reconstructing images from basis vectors
- D = U S V' (here V is the MATLAB output vtranspose, whose columns are the eigenimages)
- So each row of D (each image) can be represented as a weighted sum of the columns of V (the eigenimages)
- The weights for row i of D = row i of U .* the diagonal of S, i.e., row i of U*S
- MATLAB code in eigenimage_code.zip implements this: call pca_reconstruct.m, e.g., pca_reconstruct(3, dstraight(3,2).image, eigenimages, u, s, 8) for individual 3
A sketch of the computation appears below.
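pca_reconstruct.m is the class's implementation; a minimal sketch of the underlying computation, reusing u, s, vtranspose, nr, and nc from the previous sketch, might look like:

```matlab
i = 1;                              % which image to reconstruct
k = 8;                              % number of eigenimages to use
w = u(i,1:k) * s(1:k,1:k);          % weights = row i of U*S
recon = reshape(vtranspose(:,1:k) * w', nr, nc);   % weighted sum of eigenimages
imagesc(recon); colormap(gray);     % display the reconstruction
```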

39. Reconstruction of the first image with 8 eigenimages (note: this uses only 8 weights, instead of 12000 pixels)
[Figure: the reconstructed image next to the original image.]

40. Reconstruction of the first image with 8 eigenimages
Weights = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6
[Figure: the reconstructed image is the weighted sum of the 8 eigenimages shown on the left.]

41. Reconstruction of the 7th image with 8 eigenimages
[Figure: the reconstructed image next to the original image.]

42. Reconstruction of the 7th image with 8 eigenimages
Weights = -13.7 12.9 1.6 4.4 3.0 0.9 1.6 -6.3
Weights for image 1 = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6
[Figure: the reconstructed image is the weighted sum of the 8 eigenimages shown on the left.]

43. Reconstructing Image 6 with 16 eigenimages

44. Reconstruction of image 17
[Figure: the reconstructed image next to the original image.]

45. Reconstruction of the 17th image with 8 eigenimages
Weights = -24.2 -9.3 6.4 -2.2 -4.3 10.2 2.5 -1.5
Weights for image 1 = -14.0 9.4 -1.1 -3.5 -9.8 -3.5 -0.6 0.6
[Figure: the reconstructed image is the weighted sum of the 8 eigenimages shown on the left.]

46. Weights as Features
The first four eigenimage weights for each of the 20 individuals:

```
Individual    Weights
  1      -14.0174    9.3872   -1.0557   -3.5383
  2      -21.8113   -1.9416   -6.0173    1.2616
  3      -19.5562    1.8604    3.1435   -1.9078
  4      -17.9185    1.8457    0.7836    2.0913
  5      -24.8861    1.6758   -2.4818   -1.5604
  6      -22.3920   -0.9511    3.4839   -2.0369
  7      -13.7527   12.9235    1.5948    4.3836
  8      -11.4462    1.3205   -1.5536   -8.1587
  9      -13.2034    9.2059    2.8529    4.4483
 10      -17.2305    0.6142    0.9991   -3.3872
 11      -15.9689    5.0422   -0.1318   -8.3479
 12      -21.2763  -10.1233    1.8639    5.2177
 13      -22.6525   -2.0573   -1.4602   -2.5266
 14      -21.1102   -4.0606    0.6635   -0.5346
 15      -18.9986   -1.2998    2.1970   -2.3426
 16      -17.0313   -1.6442  -11.0145    2.6578
 17      -24.2221   -9.3465    6.4540   -2.1699
 18      -11.0047    0.8536  -14.5666    1.7401
 19      -24.8944   -2.6833    1.2127    6.3758
 20      -17.0712    5.8351    5.6467    6.0825
```
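These weight vectors are exactly the low-dimensional features that Turk and Pentland use for recognition: a new face is projected onto the eigenimages and matched to the closest stored weight vector. A minimal nearest-neighbor sketch, reusing u, s, vtranspose, and k from the earlier sketches (x is an assumed new, cropped, mean-removed image):

```matlab
W = u(:,1:k) * s(1:k,1:k);                 % stored weights, one row per training image
wnew = double(x(:))' * vtranspose(:,1:k);  % project the new image onto the eigenimages
dists = sum((W - repmat(wnew, size(W,1), 1)).^2, 2);   % squared distances in weight space
[mindist, match] = min(dists);             % match = index of the closest training face
```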
