
Presentation Transcript


  1. Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L1 Norm. Anders Eriksson and Anton van den Hengel, CVPR 2010.

  2. Y ≈ UV, where Y is M×N, U is M×R, and V is R×N.
  • Usual low-rank approximation using the L2 norm: the SVD (a sketch follows below).
  • Robust low-rank approximation using the L2 norm: the Wiberg algorithm.
  • This paper: "robust" low-rank approximation in the presence of missing data and outliers, using the L1 norm, as a generalization of the Wiberg algorithm.
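
As a baseline, a minimal sketch of the usual L2 rank-r approximation via the SVD (the Eckart-Young result); function and variable names are illustrative, not from the paper:

```python
# Best rank-r approximation of Y in the L2/Frobenius sense via the SVD.
import numpy as np

def low_rank_l2(Y, r):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Keep the r largest singular values/vectors, splitting the factors
    # as Y ~ Uf @ Vf to match the Y = UV factorization on the slide.
    Uf = U[:, :r] * s[:r]   # M x r
    Vf = Vt[:r, :]          # r x N
    return Uf, Vf

Y = np.random.randn(20, 15)
Uf, Vf = low_rank_l2(Y, 3)
print(np.linalg.norm(Y - Uf @ Vf))  # residual of the rank-3 approximation
```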

  3. Problem: find U and V minimizing ||W ⊙ (Y − UV)||_1, where ⊙ denotes the element-wise (Hadamard) product and W is the indicator matrix, with wij = 1 if yij is known, else 0.

  4. Wiberg Algorithm. The matrix W indicates the presence/absence of elements. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.

  5. Alternating Least Squares
  • To find a minimum of φ, set its derivatives to zero, treating the two resulting equations independently.
  • Starting from initial estimates u0 and v0, alternately update u from v and v from u (a sketch follows below).
  • Converges very slowly, especially with missing components and strong noise. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.
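
A minimal sketch of this alternation for the L2 objective with missing data, using the standard per-column/per-row least-squares updates; the function name and shapes are illustrative, not code from the paper:

```python
# Alternating least squares for min ||W * (Y - U V)||_F with missing data.
import numpy as np

def als(Y, W, r, iters=100):
    m, n = Y.shape
    U = np.random.randn(m, r)
    V = np.random.randn(r, n)
    for _ in range(iters):
        # Update each column of V from the rows of U observed in that column.
        for j in range(n):
            o = W[:, j] > 0
            V[:, j] = np.linalg.lstsq(U[o], Y[o, j], rcond=None)[0]
        # Update each row of U from the columns of V observed in that row.
        for i in range(m):
            o = W[i, :] > 0
            U[i] = np.linalg.lstsq(V[:, o].T, Y[i, o], rcond=None)[0]
    return U, V
```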

  6. Back to Wiberg
  • In non-linear least-squares problems with multiple parameters, if part of the parameters is held fixed, minimizing over the remaining parameters can become a simple problem, e.g. a linear one, so a closed-form solution may be found.
  • Wiberg applied this idea to the factorization of a matrix with missing components. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.

  7. Back to Wiberg
  • For a fixed u, the L2-norm objective becomes a linear least-squares minimization problem in v.
  • Compute the optimal v*(u) in closed form.
  • Apply the Gauss-Newton method to the resulting non-linear least-squares problem in u alone to find the optimal u* (a compressed view follows below).
  • The derivative is easy to compute because of the L2 norm. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.
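
A compressed view of this construction, in a vectorized notation where y stacks the observed entries of Y and F(u) is the corresponding design matrix built from u; the symbols are an assumption, loosely following Okatani et al., not necessarily the slide's exact notation:

```latex
% Eliminate v in closed form, then minimize the reduced objective over u.
\min_{u,v}\; \| y - F(u)\,v \|_2^2 ,
\qquad
v^*(u) = \bigl(F(u)^\top F(u)\bigr)^{-1} F(u)^\top y ,
\qquad
\phi(u) = \| y - F(u)\,v^*(u) \|_2^2 .
```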

  8. Linear Programming and Definitions
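
The background fact this step rests on is that an L1-norm minimization can be rewritten as a linear program by bounding each residual with a slack variable; a generic sketch, not necessarily the slide's exact formulation:

```latex
% Bound each residual by a slack t_i and minimize the sum of the slacks.
\min_{x} \|Ax - b\|_1
\quad\Longleftrightarrow\quad
\min_{x,\,t}\; \mathbf{1}^\top t
\quad \text{s.t.}\quad
-t \;\le\; Ax - b \;\le\; t .
```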

  9. L1-Wiberg Algorithm
  • Write the minimization problem in terms of the L1 norm.
  • Rewrite it as minimization problems in v and u independently.
  • Substitute v* to obtain a problem in u alone (a sketch of the inner v-step follows below).
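
A minimal sketch of the inner step, assuming the slack-variable LP from slide 8 and SciPy's linprog; the helper name l1_solve_V is illustrative, not from the paper:

```python
# For fixed U, the optimal V solves an L1 regression per column of Y,
# posed as a linear program with slack variables t.
import numpy as np
from scipy.optimize import linprog

def l1_solve_V(Y, W, U):
    m, n = Y.shape
    r = U.shape[1]
    V = np.zeros((r, n))
    for j in range(n):
        o = W[:, j] > 0
        A, b = U[o], Y[o, j]
        k = A.shape[0]
        # Variables z = [v (r, free); t (k, >= 0)]; minimize sum(t)
        # subject to -t <= A v - b <= t.
        c = np.concatenate([np.zeros(r), np.ones(k)])
        A_ub = np.block([[A, -np.eye(k)], [-A, -np.eye(k)]])
        b_ub = np.concatenate([b, -b])
        bounds = [(None, None)] * r + [(0, None)] * k
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        V[:, j] = res.x[:r]
    return V
```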

  10. Comparing to L2-Wiberg
  • V*(U) is not easily differentiable.
  • The objective as a function of (u, v*) is not a least-squares minimization problem, so Gauss-Newton cannot be applied directly.
  • Idea: let V*(U) denote the optimal basic solution of the linear program. V*(U) is then differentiable, assuming the problem is feasible, by the theorem on differentiability of linear-program solutions.
  • Jacobian for the Gauss-Newton step: the derivative of the solution to a linear programming problem (a derivation sketch follows below).
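
Where that Jacobian comes from, as a hedged sketch using standard sensitivity analysis of a linear program at a non-degenerate optimal basic solution; the symbols here are generic, not the paper's:

```latex
% The basis matrix B and basic variables x_B satisfy B x_B = b. If B and
% b depend smoothly on a parameter u, differentiating the basic system
% gives the Jacobian of the LP solution:
B(u)\, x_B(u) = b(u)
\;\;\Longrightarrow\;\;
\frac{\partial x_B}{\partial u}
= B^{-1}\!\left( \frac{\partial b}{\partial u}
  - \frac{\partial B}{\partial u}\, x_B \right).
```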

  11. Add an additional term to the objective and minimize the value of that term?

  12. Results
  • Tested on synthetic data: randomly created measurement matrices Y with entries drawn from the uniform distribution on [-1, 1]; 20% of the entries are missing, and noise from [-5, 5] is added to 10% of the entries (a data-generation sketch follows below).
  • Real data: the dinosaur sequence from the Oxford Visual Geometry Group (VGG).
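
A hedged sketch of generating data matching the synthetic setup above; the matrix shape and random seed are illustrative assumptions:

```python
# Y uniform on [-1, 1]; 20% of entries missing; uniform [-5, 5] noise
# added to 10% of the entries.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 30

Y = rng.uniform(-1.0, 1.0, size=(m, n))
W = (rng.random((m, n)) > 0.20).astype(float)   # 1 = observed, 0 = missing

corrupt = rng.random((m, n)) < 0.10             # corrupt 10% of the entries
Y = Y + corrupt * rng.uniform(-5.0, 5.0, size=(m, n))
```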

  13. Structure from Motion
  • Projections of 319 points tracked over 36 views; noise is added to 10% of the points.
  • Full 3D reconstruction amounts to a low-rank matrix approximation (a note follows below).
  • The figure shows the residual for the visible points: with the L2 norm the reconstruction error is distributed evenly among all elements of the residual, while with the L1 norm the error is concentrated on a few elements.
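
Why full reconstruction is a low-rank approximation, as a standard background fact about affine structure from motion rather than anything specific to this paper:

```latex
% Under an affine camera model, the 2F x N measurement matrix of image
% coordinates over F views factors into motion and shape, so its rank
% is at most 4 (at most 3 after centering the coordinates per view):
M \;=\; \underbrace{P}_{2F \times 4}\; \underbrace{S}_{4 \times N},
\qquad \operatorname{rank}(M) \le 4 .
```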
