
Recitation: SVD and dimensionality reduction


Presentation Transcript


  1. Recitation: SVD and dimensionality reduction • Zhenzhen Kou • Thursday, April 21, 2005

  2. SVD • Intuition: find the axis that shows the greatest variation, and project all points onto this axis. [Figure: 2-D points plotted in the original axes f1, f2, with the principal axes e1, e2 overlaid.]

  3. SVD: Mathematical Background • X (m x n) = U (m x r) · S (r x r) · V' (r x n); keeping only the first k singular values gives X_k (m x n) = U_k (m x k) · S_k (k x k) · V_k' (k x n). • The reconstructed matrix X_k = U_k S_k V_k' is the closest rank-k matrix to the original matrix X.

  4. SVD: The mathematical formulation • Let X be the M x N matrix of M N-dimensional points • SVD decomposition: X = U S V^T • U (M x M) • U is orthogonal: U^T U = I • columns of U are the orthogonal eigenvectors of XX^T • called the left singular vectors of X • V (N x N) • V is orthogonal: V^T V = I • columns of V are the orthogonal eigenvectors of X^T X • called the right singular vectors of X • S (M x N) • diagonal matrix consisting of r non-zero values in descending order • the square roots of the eigenvalues of XX^T (or X^T X) • r is the rank of the symmetric matrices • called the singular values
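A brief aside for the recitation (not on the original slide): the formulation above can be checked numerically. This is a minimal sketch assuming Python with NumPy; note that np.linalg.svd returns the singular values as a 1-D array rather than as the full M x N matrix S.

```python
import numpy as np

X = np.random.rand(6, 4)             # M = 6 points in N = 4 dimensions
U, s, Vt = np.linalg.svd(X, full_matrices=True)
print(U.shape, s.shape, Vt.shape)    # (6, 6), (4,), (4, 4)

# U and V are orthogonal: U^T U = I and V^T V = I
assert np.allclose(U.T @ U, np.eye(6))
assert np.allclose(Vt @ Vt.T, np.eye(4))

# Rebuild S as an M x N diagonal matrix and verify X = U S V^T
S = np.zeros((6, 4))
np.fill_diagonal(S, s)
assert np.allclose(X, U @ S @ Vt)

# The singular values are the square roots of the eigenvalues of X^T X
eigvals = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
assert np.allclose(s, np.sqrt(np.clip(eigvals, 0, None)))
```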

  5. SVD - Interpretation

  6. SVD - Interpretation • X = U S V^T - example: [Figure: a numerical data matrix X written out as the product U S V^T, with the first projection axis v1 marked.]

  7. SVD - Interpretation • X = U S V^T - example: variance ('spread') on the v1 axis. [Figure: the same decomposition with the first singular value highlighted.]

  8. SVD - Interpretation • X = U S V^T - example: U S gives the coordinates of the points along the projection axes. [Figure: the same decomposition with the product U S highlighted.]

  9. Dimensionality reduction • set the smallest singular values to zero: [Figure: the decomposition with the smallest singular values zeroed out.]

  10.-13. Dimensionality reduction [Figures: the numerical example redrawn as X ≈ U_k S_k V_k^T, with the zeroed singular values and their corresponding columns of U and rows of V^T removed step by step.]
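To make the truncation concrete, here is a hedged NumPy sketch (the data and k are illustrative): drop the smallest singular values and rebuild X_k, the closest rank-k matrix from slide 3.

```python
import numpy as np

X = np.random.rand(6, 4)
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # "thin" SVD

k = 2
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]         # X_k = U_k S_k V_k^T

# The Frobenius-norm error of the rank-k approximation equals the
# energy of the dropped singular values (Eckart-Young).
err = np.linalg.norm(X - Xk, 'fro')
print(err, np.sqrt(np.sum(s[k:] ** 2)))            # the two values agree
```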

  14. Dimensionality reduction • Equivalent: 'spectral decomposition' of the matrix: [Figure: the numerical example X = U S V^T written out again.]

  15. Dimensionality reduction • Equivalent: 'spectral decomposition' of the matrix: X = [u1 u2] · diag(λ1, λ2) · [v1 v2]^T [Figure: the same decomposition with the columns u1, u2, the singular values λ1, λ2, and the rows v1^T, v2^T labeled.]

  16. Dimensionality reduction • 'spectral decomposition' of the matrix: X = λ1 u1 v1^T + λ2 u2 v2^T + ... (r terms), where each term is a rank-1 m x n matrix, the outer product of a column vector u_i and a row vector v_i^T.
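The spectral-decomposition view can also be checked term by term; this illustrative NumPy sketch (not from the slides) accumulates the rank-1 outer products.

```python
import numpy as np

X = np.random.rand(5, 3)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# X = sum_i lambda_i * u_i * v_i^T, one rank-1 term per singular value
X_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(X, X_rebuilt)
```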

  17. Dimensionality reduction • approximation / dim. reduction: keep only the first few terms of X = λ1 u1 v1^T + λ2 u2 v2^T + ... (Q: how many?) • assume λ1 >= λ2 >= ...

  18. Dimensionality reduction • A heuristic: keep 80-90% of the 'energy' (= sum of squares of the λi's) • assume λ1 >= λ2 >= ...
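A quick sketch of the energy heuristic, assuming NumPy (the 90% threshold and random data are only for illustration): pick the smallest k whose singular values retain the desired share of the total energy.

```python
import numpy as np

X = np.random.rand(100, 20)
s = np.linalg.svd(X, compute_uv=False)        # singular values only

energy = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative share of energy
k = int(np.searchsorted(energy, 0.90)) + 1    # smallest k reaching 90%
print(k, energy[k - 1])
```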

  19. Another example - Eigenfaces • The PCA problem in HW5 • Face data X • The eigenvectors associated with the first few largest eigenvalues of XX^T look like faces when reshaped into images ('eigenfaces').
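The HW5 face data is not reproduced here, but the idea can be sketched with stand-in data (the sizes and the orientation of X below are hypothetical): the leading left singular vectors of the face matrix are the eigenvectors of XX^T, and reshaping one back to image dimensions gives an 'eigenface'.

```python
import numpy as np

num_pixels, num_faces = 64 * 64, 30         # hypothetical sizes
X = np.random.rand(num_pixels, num_faces)   # stand-in for real face images

# Columns of U = eigenvectors of X X^T (the left singular vectors)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigenface = U[:, 0].reshape(64, 64)         # first eigenface, as an image
print(eigenface.shape)                      # (64, 64)
```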

  20. Dimensionality reduction • Matrix V in the SVD decomposition (X = U S V^T) is used to transform the data. • XV (= US) defines the transformed dataset. • For a new data element x, xV defines the transformed data. • Keeping the first k (k < n) dimensions amounts to keeping only the first k columns of V.
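A small NumPy sketch of this slide (the matrix sizes are arbitrary): XV reproduces US, and the same V, restricted to its first k columns, transforms both the dataset and a new data element.

```python
import numpy as np

X = np.random.rand(8, 5)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T

# XV = US (multiplying U's columns by s is the same as U @ diag(s))
assert np.allclose(X @ V, U * s)

k = 2
X_reduced = X @ V[:, :k]          # transformed dataset, first k dimensions

x_new = np.random.rand(5)         # a new data element
x_new_reduced = x_new @ V[:, :k]  # same transform applied to new data
print(X_reduced.shape, x_new_reduced.shape)   # (8, 2) (2,)
```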

  21. Principal Components Analysis (PCA) • Center the dataset by subtracting the means; let matrix X be the result. • Compute the matrix X^T X (this is the covariance matrix, up to a constant factor). • Project the dataset onto a subset of the eigenvectors of X^T X. • Matrix V in the SVD decomposition (X = U S V^T) contains the eigenvectors of X^T X. • Also known as the K-L (Karhunen-Loève) transform.
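Finally, a hedged sketch of the PCA recipe above, again assuming NumPy (random data, just to exercise the steps): center the data, then the rows of V^T from the SVD match the eigenvectors of X^T X up to sign.

```python
import numpy as np

data = np.random.rand(50, 4)
X = data - data.mean(axis=0)        # subtract the column means

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Eigenvectors of X^T X (covariance up to a constant), sorted by eigenvalue
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]
for i in range(4):
    assert np.allclose(np.abs(eigvecs[:, i]), np.abs(Vt[i]))  # up to sign

# Project onto the first k principal components
k = 2
scores = X @ Vt[:k].T
print(scores.shape)                 # (50, 2)
```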
