
SVD: Singular Value Decomposition


Presentation Transcript


  1. SVD: Singular Value Decomposition

  2. Motivation. Assume A is full rank. Among the factorizations of A, the SVD is clearly the winner.

  3. Ideas Behind SVD. [Figure: A maps R^n to R^m; in R^n sit the row space C(A^T) and the nullspace N(A) (Ax = 0), in R^m the column space C(A) and the left nullspace N(A^T) (A^T y = 0).] There are many choices of basis in C(A^T) and C(A), but we want the orthonormal ones. Goal: for A (m×n), find orthonormal bases for C(A^T) and C(A), so that an orthonormal basis in C(A^T) maps to an orthonormal basis in C(A).

  4. SVD (2×2). (How to find the vi's is deferred to slide 9.) The σi represent the lengths of the images Avi, and hence are non-negative.

  5. SVD 2×2 (cont). The SVD is another diagonalization, using two sets of orthonormal bases. Compare: when A has a complete set of eigenvectors, AS = SΛ, so A = SΛS^-1, but S in general is not orthogonal. When A is symmetric, A = QΛQ^T with Q orthogonal.
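A minimal numpy sketch of this comparison (both matrices are hypothetical examples, not taken from the slide): a symmetric matrix diagonalizes with an orthogonal Q, while a general diagonalizable matrix needs an eigenvector matrix S that is usually not orthogonal.

```python
import numpy as np

# Symmetric A: eigenvectors can be chosen orthonormal, A = Q Lambda Q^T.
A_sym = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
lam, Q = np.linalg.eigh(A_sym)
print(np.allclose(Q.T @ Q, np.eye(2)))                            # True: Q is orthogonal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A_sym))                 # A = Q Lambda Q^T

# General A with a complete set of eigenvectors: A = S Lambda S^-1,
# but S is not orthogonal in general.
A_gen = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
lam2, S = np.linalg.eig(A_gen)
print(np.allclose(S.T @ S, np.eye(2)))                            # False in general
print(np.allclose(S @ np.diag(lam2) @ np.linalg.inv(S), A_gen))   # True
```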

  6. Why are orthonormal bases good? For an orthogonal matrix, ( )^-1 = ( )^T. Implication: matrix inversion, and hence solving Ax = b, becomes trivial.
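A tiny numpy illustration of that implication (Q below is an arbitrary rotation, not a matrix from the slides): with orthonormal columns, Qx = b is solved by one transpose-multiply instead of elimination.

```python
import numpy as np

# A hypothetical orthogonal matrix: rotation by 30 degrees.
theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))   # orthonormal columns: Q^T Q = I

# Since Q^-1 = Q^T, solving Qx = b is just a matrix-vector product.
b = np.array([1.0, 2.0])
x = Q.T @ b
print(np.allclose(Q @ x, b))             # True
```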

  7. More on U and V. V: eigenvectors of A^T A. U: eigenvectors of AA^T. Find the vi first, then use Avi to find the ui; this is the key to computing the SVD.

  8. SVD: A = UΣV^T. The singular values are the diagonal entries of the Σ matrix and are arranged in descending order. The singular values are always real, non-negative numbers. If A is a real matrix, U and V are also real.

  9. Example (2×2, full rank). Steps: find the eigenvectors of A^T A and normalize them to get the vi. Compute Avi to get σi. If σi ≠ 0, take ui = Avi/σi; else find ui from N(A^T).
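A numpy sketch of these steps on a hypothetical full-rank 2×2 matrix (not the matrix from the slide); the result is checked against the library SVD.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [-1.0, 1.0]])                # hypothetical full-rank 2x2 matrix

# Step 1: eigenvectors of A^T A give the columns of V (eigh returns them normalized).
eigvals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]          # sort into descending order
eigvals, V = eigvals[order], V[:, order]

# Step 2: the singular values are the lengths of the images A v_i.
sigma = np.sqrt(eigvals)

# Step 3: all sigma_i != 0 here, so u_i = A v_i / sigma_i.
U = (A @ V) / sigma

print(np.allclose(U @ np.diag(sigma) @ V.T, A))        # A = U Sigma V^T
print(np.linalg.svd(A, compute_uv=False), sigma)       # same singular values
```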

  10. SVD Theory. If σj = 0, then Avj = 0, so vj is in N(A); the corresponding uj is in N(A^T) [U^T A = ΣV^T gives uj^T A = σj vj^T = 0]. Otherwise vj is in C(A^T) and the corresponding uj is in C(A). The number of nonzero σj equals the rank.

  11. Example (2×2, rank deficient). These can also be obtained from the eigenvectors of A^T A.

  12. Example (cont). Bases of N(A) and N(A^T) (u2 and v2 here) do not contribute to the final result. They are computed only to make U and V orthogonal.
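A minimal numpy sketch of the rank-deficient situation (the singular matrix below is an illustrative example): the second singular value is zero, v2 spans N(A), u2 spans N(A^T), and A is rebuilt from the first triple alone.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # hypothetical rank-1 matrix

U, s, Vt = np.linalg.svd(A)
print(s)                                   # second singular value is ~0

print(np.allclose(A @ Vt[1], 0))           # v2 is in N(A)
print(np.allclose(A.T @ U[:, 1], 0))       # u2 is in N(A^T)

# u2 and v2 do not contribute to A: the first rank-one piece already rebuilds it.
print(np.allclose(s[0] * np.outer(U[:, 0], Vt[0]), A))   # True
```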

  13. Extend to A (m×n). [Figure: bases of N(A) and N(A^T); dimension check.]

  14. Extend to A m×n (cont). A is a summation of r rank-one matrices: A = σ1 u1 v1^T + … + σr ur vr^T. Bases of N(A) and N(A^T) do not contribute to this sum; they are useful only for nullspace solutions.
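A short numpy check of the rank-one expansion on a hypothetical random matrix:

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((4, 3))    # hypothetical 4x3 matrix
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                              # numerical rank

# A = sigma_1 u_1 v_1^T + ... + sigma_r u_r v_r^T
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
print(np.allclose(A_rebuilt, A))                        # True
```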

  15. [Figure: the four fundamental subspaces C(A^T), N(A), C(A), N(A^T) and how A maps between them.]

  16. Summary. SVD chooses the right bases for the 4 subspaces: AV = UΣ. v1…vr: orthonormal basis in R^n for C(A^T). vr+1…vn: basis for N(A). u1…ur: orthonormal basis in R^m for C(A). ur+1…um: basis for N(A^T). These bases are not only orthonormal, but also satisfy Avi = σi ui. High points of linear algebra: dimension, rank, orthogonality, basis, diagonalization, …
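A numpy sketch that pulls all four bases out of one SVD call (the matrix is a hypothetical rank-1 example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])            # hypothetical 2x3 matrix of rank 1

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))

row_space  = Vt[:r].T        # v1..vr     : orthonormal basis for C(A^T) in R^n
null_space = Vt[r:].T        # v_{r+1}..vn: basis for N(A)
col_space  = U[:, :r]        # u1..ur     : orthonormal basis for C(A) in R^m
left_null  = U[:, r:]        # u_{r+1}..um: basis for N(A^T)

# The defining relation A v_i = sigma_i u_i holds column by column.
print(np.allclose(A @ row_space, col_space * s[:r]))    # True
print(np.allclose(A @ null_space, 0))                   # True
print(np.allclose(A.T @ left_null, 0))                  # True
```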

  17. SVD Applications. Using the SVD in computation, rather than A itself, has the advantage of being more robust to numerical error. Many applications: inverse of a matrix; condition number of a matrix; image compression; solving Ax = b in all cases (unique, many, or no solutions; least-squares solutions); rank determination; matrix approximation; … The SVD is usually found by iterative methods (see Numerical Recipes, Chap. 2).
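Two of these applications in a few lines of numpy (the nearly singular matrix is a hypothetical example): the condition number is σmax/σmin, and the numerical rank is the count of singular values above a tolerance.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])              # hypothetical nearly singular matrix

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]                                            # sigma_max / sigma_min
tol = s[0] * max(A.shape) * np.finfo(float).eps
rank = int(np.sum(s > tol))                                    # numerical rank

print(cond, rank)
print(np.isclose(cond, np.linalg.cond(A)))                     # matches the library routine
```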

  18. SVD and Ax=b (mn) • Check for existence of solution

  19. Ax = b (inconsistent): no solution, because b is not in C(A).

  20. Ax=b (underdetermined)
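A numpy sketch of the underdetermined case (the one-equation system below is a hypothetical example): the SVD gives the particular solution with no N(A) component, i.e. the shortest one; adding any nullspace vector gives another, longer solution. (The same result comes from the pseudoinverse of slides 21-23.)

```python
import numpy as np

# Hypothetical underdetermined system: one equation, three unknowns.
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([6.0])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))

# Particular solution built from the SVD; it lies entirely in C(A^T),
# so it is the minimum-norm solution.
x_min = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
print(np.allclose(A @ x_min, b))                        # True

# Adding any nullspace vector still solves the system, but gives a longer x.
x_other = x_min + Vt[r:].T @ np.array([1.0, 1.0])
print(np.allclose(A @ x_other, b),
      np.linalg.norm(x_other) > np.linalg.norm(x_min))  # True True
```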

  21. Pseudo Inverse (Sec. 7.4, p. 395). The role of A: it takes a vector vi from the row space to σi ui in the column space. The role of A^-1 (if it exists): it does the opposite, taking σi ui in the column space back to vi in the row space.

  22. Pseudo Inverse (cont). While A^-1 may not exist, a matrix that takes ui back to vi/σi always exists; it is denoted A^+, the pseudoinverse. A^+ has dimension n×m.
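A numpy sketch that builds A^+ = V Σ^+ U^T by hand (the 3×2 matrix is a hypothetical example) and checks it against np.linalg.pinv:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])                 # hypothetical 3x2 matrix

U, s, Vt = np.linalg.svd(A)
tol = 1e-12
s_plus = np.array([1.0 / x if x > tol else 0.0 for x in s])

# Sigma^+ inverts the nonzero singular values and has the transposed shape (n x m).
Sigma_plus = np.zeros((A.shape[1], A.shape[0]))
Sigma_plus[:len(s), :len(s)] = np.diag(s_plus)
A_plus = Vt.T @ Sigma_plus @ U.T           # A^+ = V Sigma^+ U^T, shape n x m

print(A_plus.shape)                                   # (2, 3)
print(np.allclose(A_plus, np.linalg.pinv(A)))         # matches the library routine
print(np.allclose(A_plus @ U[:, 0], Vt[0] / s[0]))    # A^+ takes u_i back to v_i / sigma_i
```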

  23. Pseudo Inverse and Ax = b. Overdetermined case: find the solution that minimizes the error r = |Ax − b|, the least-squares solution. The pseudoinverse is a panacea for Ax = b.
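A short numpy check of the least-squares claim on a hypothetical overdetermined system: x = A^+ b agrees with np.linalg.lstsq, and perturbing it only increases the residual.

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x_pinv = np.linalg.pinv(A) @ b                         # least-squares solution via A^+
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_pinv, x_lstsq))                    # True

# A perturbed x does not give a smaller residual |Ax - b|.
r_best = np.linalg.norm(A @ x_pinv - b)
r_pert = np.linalg.norm(A @ (x_pinv + np.array([0.1, -0.05])) - b)
print(r_best <= r_pert)                                # True
```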

  24. Ex: full rank

  25. Ex: over-determined. (Will show this need not be computed…)

  26. Over-determined (cont) Same result!!

  27. Ex: general case, no solution

  28. Matrix Approximation: set the small σ's to zero and substitute back (see the next slide for the application to image compression).

  29. Image Compression. As described in the text, p. 352. For grey-scale images: m·n bytes; only need to store r(m+n+1) values. Original 64×64 image; r = 1, 3, 5, 10, 16 (no perceivable difference afterwards).
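A numpy sketch of the mechanism (the random matrix below merely stands in for a grey-scale image, so its error decays slowly; a real photograph compresses far better): keep the r largest singular values and count r(m+n+1) stored numbers against m·n.

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 64
img = rng.random((m, n))                   # stand-in for a 64x64 grey-scale image

U, s, Vt = np.linalg.svd(img)

def compress(r):
    """Rank-r approximation: keep only the r largest singular values."""
    approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
    storage = r * (m + n + 1)              # numbers kept, vs. m*n for the original
    rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    return storage, rel_err

for r in (1, 3, 5, 10, 16):
    print(r, *compress(r))
```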
