
Efficient computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L1 Norm


Presentation Transcript


    1. Efficient computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L1 Norm. Anders Eriksson and Anton van den Hengel, CVPR 2010.

    2. Usual low-rank approximation using the L2 norm: the SVD. Low-rank approximation using the L2 norm with missing data: the Wiberg algorithm. Robust low-rank approximation in the presence of missing data and outliers: the L1 norm, via a generalization of the Wiberg algorithm.

    3. Problem
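
The equations from this slide did not survive in the transcript. As a reference point, a standard statement of the problem the paper addresses (notation ours: Y the m×n measurement matrix, W a binary mask of observed entries, ⊙ the Hadamard product) is:

```latex
\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}}
\bigl\| W \odot (Y - U V^{\top}) \bigr\|_{L_1}
= \sum_{i,j} w_{ij}\, \bigl| y_{ij} - u_i^{\top} v_j \bigr|
```

With the L2 norm and no missing data this is solved exactly by the truncated SVD; with missing entries or the L1 norm it is not.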

    4. Wiberg Algorithm

    5. Alternating Least Squares To find the minimum of f, set the derivative with respect to each factor to zero and consider the two resulting equations independently. Starting from initial estimates u0 and v0, alternately update u from v and v from u. This converges very slowly, especially with missing components and strong noise. A sketch of the scheme follows.
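
A minimal Python sketch of this alternating scheme, assuming Y is the measurement matrix and W a 0/1 mask of observed entries (our own illustration, not the authors' code; all names are hypothetical):

```python
import numpy as np

def als_low_rank(Y, W, r, n_iters=100, seed=0):
    """Alternating least squares for a rank-r factorization Y ~ U @ V.T.
    W is a 0/1 mask of observed entries. A sketch; assumes every row
    and column of W has at least one observed entry."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(n_iters):
        # Fix V: each row of U solves an independent least-squares
        # problem restricted to the observed entries of that row.
        for i in range(m):
            obs = W[i] > 0
            U[i] = np.linalg.lstsq(V[obs], Y[i, obs], rcond=None)[0]
        # Fix U: symmetric update for each row of V.
        for j in range(n):
            obs = W[:, j] > 0
            V[j] = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)[0]
    return U, V
```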

    6. Back to Wiberg In non-linear least-squares problems with several groups of parameters, fixing one group can make minimization over the remaining parameters a simple problem, e.g. a linear one, so a closed-form solution exists. Wiberg applied this idea to the factorization of matrices with missing components.

    7. Back to Wiberg For a fixed u, minimizing the L2 norm is a linear least-squares problem in v, so the optimal v*(u) can be computed in closed form. The Gauss-Newton method is then applied to the resulting non-linear least-squares problem in u alone to find the optimal u*. The derivative is easy to compute because of the L2 norm; see the sketch below.
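
A compact sketch of this variable-projection idea, with the same Y/W conventions as above and scipy's least-squares solver standing in for the paper's Gauss-Newton iteration (an illustration under our own naming, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def wiberg_l2(Y, W, r, seed=0):
    """L2 Wiberg via variable projection: v*(u) is eliminated in
    closed form and the outer solver runs on u alone. A sketch."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    u0 = rng.standard_normal(m * r)

    def residual(u_flat):
        U = u_flat.reshape(m, r)
        parts = []
        for j in range(n):
            obs = W[:, j] > 0
            # Closed-form v_j*(u): least squares on observed entries.
            v_j = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)[0]
            parts.append(Y[obs, j] - U[obs] @ v_j)
        return np.concatenate(parts)

    sol = least_squares(residual, u0)  # numerical Jacobian by default
    return sol.x.reshape(m, r)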

    8. Linear Programming and Definitions
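
The definitions and equations on this slide are missing from the transcript. The standard reformulation it relies on (notation ours) turns the non-smooth L1 residual minimization into a linear program by introducing a slack vector t:

```latex
\min_{x}\, \| y - G x \|_{L_1}
\;\;\equiv\;\;
\min_{x,\,t}\; \mathbf{1}^{\top} t
\quad \text{s.t.} \quad
-t \,\le\, y - G x \,\le\, t .
```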

    10. L1-Wiberg Algorithm
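
The transcript again omits the algorithm details. For the inner subproblem only: the closed-form v*(u) of the L2 case becomes a linear program under the L1 norm. A sketch of that single step with scipy.optimize.linprog (helper name and interface are our own):

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_residual(G, y):
    """Solve min_v ||y - G v||_1 as the LP  min 1^T t
    s.t. -t <= y - G v <= t  (inner step of an L1-Wiberg-style
    method; a sketch, not the paper's implementation)."""
    m, k = G.shape
    c = np.concatenate([np.zeros(k), np.ones(m)])   # variables [v, t]
    # y - G v <= t   <=>  -G v - t <= -y
    # G v - y <= t   <=>   G v - t <=  y
    A_ub = np.block([[-G, -np.eye(m)], [G, -np.eye(m)]])
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (k + m))
    return res.x[:k]
```

The outer Gauss-Newton-style update then needs the derivative of this LP's optimal basic solution, which is what the next slide addresses.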

    11. Comparing to L2-Wiberg V*(U) is not easily differentiable, and the objective f(U, V*(U)) is no longer a least-squares problem, so Gauss-Newton cannot be applied directly. Idea: let V*(U) denote the optimal basic solution of the linear program. V*(U) is then differentiable, assuming the problem is feasible, by the fundamental theorem on the differentiability of linear programs. The Jacobian required by Gauss-Newton is thus the derivative of the solution of a linear programming problem.
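
Concretely (our notation, stating the standard fact the slide appeals to): if B is the optimal basis matrix and x_B = B^{-1} b the corresponding basic solution, differentiating the identity B x_B = b with respect to a parameter θ of the program gives

```latex
\frac{\partial x_B}{\partial \theta}
= B^{-1}\!\left( \frac{\partial b}{\partial \theta}
- \frac{\partial B}{\partial \theta}\, x_B \right),
```

which is well defined as long as the same basis stays optimal in a neighbourhood of θ.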

    13. Results Tested on synthetic data: randomly generated measurement matrices Y with entries drawn from a uniform distribution on [-1, 1], 20% of the entries missing, and 10% corrupted by noise from [-5, 5]. Real data: the dinosaur sequence from Oxford VGG.

    15. Structure from motion Projections of 319 points tracked over 36 views, with noise added to 10% of the points. Full 3D reconstruction amounts to a low-rank matrix approximation; the slide plotted the residual over the visible points. Under the L2 norm the reconstruction error is spread evenly across all elements of the residual; under the L1 norm it is concentrated in a few elements.
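
The low-rank connection invoked here is the classical factorization result: under an affine camera model, the matrix stacking the 2D tracks of P points over F views factors into a motion part and a structure part, so its rank is at most 4; untracked points make this exactly the missing-data problem above. Schematically (our notation):

```latex
\underbrace{Y}_{2F \times P} \;\approx\;
\underbrace{M}_{2F \times 4}\, \underbrace{S}_{4 \times P}
```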
