
Matrix Factorization Methods Ahmet Oguz Akyuz




Presentation Transcript


  1. Matrix Factorization Methods Ahmet Oguz Akyuz

  2. Matrix Factorization Methods • Principal component analysis • Singular value decomposition • Non-negative matrix factorization • Independent component analysis • Eigen decomposition • Random projection • Factor analysis

  3. Principal Component Analysis

  4. What is PCA? • Simply a change of basis • In graphics terms: a translation followed by a rotation

  5. PCA • F = E^T N, where N = D − M • D: data matrix • M: mean matrix (the mean of the data, repeated for each observation) • E^T: transpose of the matrix of eigenvectors of the covariance matrix of N

  6. Statistical Terms Review • Mean: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ • Standard deviation: $s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}}$ • Variance: $s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}$ • Covariance: $\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n-1}$
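
As a quick illustration, here is a minimal NumPy sketch of these four quantities (the sample vectors are invented for the example; `ddof=1` selects the n − 1 normalization used above):

```python
import numpy as np

# Invented sample data: two dimensions measured over five observations.
x = np.array([2.5, 0.5, 2.2, 1.9, 3.1])
y = np.array([2.4, 0.7, 2.9, 2.2, 3.0])

mean_x = x.mean()                 # mean
std_x = x.std(ddof=1)             # sample standard deviation (n - 1 in the denominator)
var_x = x.var(ddof=1)             # sample variance
cov_xy = np.cov(x, y)[0, 1]       # covariance of x with y (np.cov defaults to n - 1)

print(mean_x, std_x, var_x, cov_xy)
```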

  7. Covariance • Covariance measures the correlation between two dimensions • + sign: they increase together • − sign: one increases as the other decreases • zero: the dimensions are uncorrelated (independence is a stronger condition) • The magnitude gives the strength of the relationship

  8. Covariance Matrix • For n-dimensional data, the covariance matrix C is the symmetric n × n matrix with entries $C_{ij} = \mathrm{cov}(\mathrm{dim}_i, \mathrm{dim}_j)$ • The diagonal entries are the variances of the individual dimensions

  9. Eigenvalues and Eigenvectors • $A v = \lambda v$: $v$ is an eigenvector of $A$ and $\lambda$ is the associated eigenvalue • Eigenvectors of a symmetric matrix (such as a covariance matrix) are orthogonal
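
A small NumPy check of the definition, using an invented symmetric matrix (so the orthogonality claim above applies):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric, so eigenvectors are orthogonal

eigvals, eigvecs = np.linalg.eigh(A)     # eigh is for symmetric/Hermitian matrices
v, lam = eigvecs[:, 0], eigvals[0]       # columns of eigvecs are eigenvectors

assert np.allclose(A @ v, lam * v)       # A v = lambda v
assert np.allclose(eigvecs.T @ eigvecs, np.eye(2))  # orthogonal eigenvectors
```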

  10. Steps of PCA • Compute the covariance matrix $C = \frac{1}{n-1} N N^T$ • Find the eigenvalues and eigenvectors of C • Form E^T (sort the eigenvectors by eigenvalue; the eigenvector of the largest eigenvalue goes in the first row, …) • F = E^T N
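
A minimal sketch of these steps in NumPy, assuming one observation per column of D (the slides do not fix this convention, so treat the layout as an assumption of this example):

```python
import numpy as np

def pca(D, k):
    """Steps of PCA from the slide: center, covariance, eigendecompose, project."""
    M = D.mean(axis=1, keepdims=True)      # mean matrix M
    N = D - M                              # N = D - M
    C = (N @ N.T) / (N.shape[1] - 1)       # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues come back in ascending order
    order = np.argsort(eigvals)[::-1]      # sort descending by eigenvalue
    E = eigvecs[:, order[:k]]              # keep the top-k eigenvectors
    return E.T @ N                         # F = E^T N

D = np.random.default_rng(0).normal(size=(5, 100))  # 5 dimensions, 100 samples
F = pca(D, k=2)                            # data in the top-2 principal directions
print(F.shape)                             # (2, 100)
```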

  11. Results • The data is represented in a more suitable basis • Redundancy (noise, etc.) can be reduced by keeping only a subset of the eigenvectors (dimensionality reduction), which also gives compression

  12. Singular Value Decomposition

  13. What is SVD? • A more general means of changing basis • D = U W V^T • U contains the eigenvectors of DD^T (orthogonal) • V contains the eigenvectors of D^TD (orthogonal) • W is a diagonal matrix holding the square roots of the eigenvalues of DD^T (equivalently, of D^TD) in sorted order Note: the nonzero eigenvalues of DD^T and D^TD are the same
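
A short NumPy sketch of these relations (the data matrix is random, invented purely for the check):

```python
import numpy as np

D = np.random.default_rng(0).normal(size=(4, 3))

U, w, Vt = np.linalg.svd(D, full_matrices=False)   # D = U W V^T

# The singular values are the square roots of the eigenvalues of D^T D
# (and of the nonzero eigenvalues of D D^T).
eigvals = np.linalg.eigvalsh(D.T @ D)[::-1]        # descending order
assert np.allclose(w, np.sqrt(np.clip(eigvals, 0.0, None)))

assert np.allclose(D, U @ np.diag(w) @ Vt)         # reconstruction
```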

  14. How can we use SVD? • Very useful for solving systems of linear equations • Gives the best possible answer (in a least-squares sense) when an exact solution does not exist!

  15. Solution for a System of Linear Equations • Ax = y • x = A^-1 y (but what if A^-1 does not exist?) • If A = U W V^T, then A^+ = V W^-1 U^T; this is called the pseudoinverse (valid for n × m matrices) • W^-1 = diag(1/w_1, 1/w_2, …, 1/w_m) • If w_i = 0 for some i, set 1/w_i = 0 • This is the same as reducing dimensionality with PCA
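
A sketch of this recipe in NumPy; the name `pinv_solve` and the tolerance `tol` are choices made for this example, and in practice `np.linalg.pinv` or `np.linalg.lstsq` do the same job:

```python
import numpy as np

def pinv_solve(A, y, tol=1e-10):
    """x = V W^-1 U^T y, zeroing 1/w_i where w_i is (numerically) zero."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.where(w > tol, 1.0 / w, 0.0)   # set 1/w_i = 0 for w_i ~ 0
    return Vt.T @ (w_inv * (U.T @ y))

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                    # 3 equations, 2 unknowns: no exact inverse
y = np.array([1.0, 2.0, 2.0])

x = pinv_solve(A, y)                          # best answer in the least-squares sense
assert np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0])
```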

  16. Non-negative Matrix Factorization

  17. What is NMF? • Given a non-negative matrix V, find non-negative matrix factors W and H such that V ≈ WH • V is n × m, W is n × r, H is r × m • Choose r so that (n + m)r < nm, so that the data is compressed
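
A quick arithmetic check of the compression condition, with sizes invented for illustration:

```python
n, m, r = 1000, 500, 20        # illustrative sizes, not from the slides
assert (n + m) * r < n * m     # 30,000 stored values in W and H versus 500,000 in V
```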

  18. NMF • NMF is distinguished from the other methods by its non-negativity constraint • This allows a parts-based representation, because only additive combinations are permitted (nothing can be subtracted)

  19. Cost Functions • We need to quantify the quality of the approximation, e.g. with the Euclidean distance $\|V - WH\|^2 = \sum_{ij} \left( V_{ij} - (WH)_{ij} \right)^2$ or the divergence $D(V \| WH) = \sum_{ij} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right)$

  20. Update Rules • The Euclidean distance ||V − WH|| is non-increasing under the following update rules (Lee & Seung): $H_{a\mu} \leftarrow H_{a\mu} \frac{(W^T V)_{a\mu}}{(W^T W H)_{a\mu}}$, $W_{ia} \leftarrow W_{ia} \frac{(V H^T)_{ia}}{(W H H^T)_{ia}}$

  21. Update Rules • The divergence D(V || WH) is non-increasing under the following update rules (Lee & Seung): $H_{a\mu} \leftarrow H_{a\mu} \frac{\sum_i W_{ia} V_{i\mu} / (WH)_{i\mu}}{\sum_k W_{ka}}$, $W_{ia} \leftarrow W_{ia} \frac{\sum_\mu H_{a\mu} V_{i\mu} / (WH)_{i\mu}}{\sum_\nu H_{a\nu}}$

  22. How to perform NMF? • W and H can be seeded with non-negative random values • NMF is then guaranteed to converge to a local minimum of the chosen error function by iteratively applying the update rules
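
A minimal sketch of this procedure, using the Euclidean update rules from slide 20 (the function name, iteration count, and the small `eps` guard against division by zero are all choices made for this example):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for the Euclidean cost ||V - WH||."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))                    # non-negative random seed for W
    H = rng.random((r, m))                    # non-negative random seed for H
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W
    return W, H

V = np.random.default_rng(1).random((20, 30))  # invented non-negative data
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H))               # the cost only decreases across iterations
```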

  23. Example • Note the sparseness of the basis matrix [example figure not reproduced in this transcript]

  24. Other examples • BRDFs are factored using NMF (SIGGRAPH 2004) • Phase functions in volume rendering? • What else?

  25. References • Learning the Parts of Objects by Non-negative Matrix Factorization, Daniel D. Lee & H. Sebastian Seung, Nature, 1999 • Algorithms for Non-negative Matrix Factorization, Daniel D. Lee & H. Sebastian Seung, NIPS, 2000 • Efficient BRDF Importance Sampling Using a Factored Representation, Jason Lawrence, Szymon Rusinkiewicz & Ravi Ramamoorthi, SIGGRAPH, 2004 • A Tutorial on Principal Component Analysis, Jon Shlens • Singular Value Decomposition and Principal Component Analysis, Rasmus Elsborg Madsen, Lars Kai Hansen & Ole Winther • Non-negative Matrix Factorization with Sparseness Constraints, Patrik O. Hoyer, JMLR, 2004
