
Matrix Martingales in Randomized Numerical Linear Algebra

Explore the concentration of scalar random variables using martingales, Bernstein's inequality, and Freedman's inequality. Also consider the concentration of matrix random variables and Laplacian matrices in randomized numerical linear algebra.


Presentation Transcript


  1. Matrix Martingales in Randomized Numerical Linear Algebra

  2. Concentration of Scalar Random Variables: Random $X = \sum_i X_i$, where the $X_i$ are independent. Is $X \approx \mathbb{E}[X]$ with high probability?

  3. Concentration of Scalar Random Variables: Bernstein's Inequality. Random $X = \sum_i X_i$, where the $X_i$ are independent, $\mathbb{E}[X_i] = 0$, and $|X_i| \le R$. With $\sigma^2 = \sum_i \mathbb{E}[X_i^2]$, Bernstein's inequality gives $\Pr[|X| \ge t] \le 2\exp\!\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right)$. E.g. if $R \le 1$ and $\sigma^2 \le 1$, then $\Pr[|X| \ge t] \le 2\exp(-t^2/4)$ for $t \le 3$.
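
A quick numerical sanity check of Bernstein's inequality; the Rademacher variables and parameter values below are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 100, 100_000, 30.0

# X = sum of n independent Rademacher (+/-1) variables:
# zero-mean, |X_i| <= R = 1, and sigma^2 = sum_i E[X_i^2] = n.
X = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)

R, sigma2 = 1.0, float(n)
empirical = np.mean(np.abs(X) >= t)
bound = 2 * np.exp(-(t**2 / 2) / (sigma2 + R * t / 3))
print(f"empirical P[|X| >= {t}]: {empirical:.5f}")   # ~0.003 (Gaussian tail)
print(f"Bernstein bound:         {bound:.5f}")       # ~0.033, a valid upper bound
```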

  4-5. Concentration of Scalar Martingales: Random $X = \sum_i X_i$, where we no longer require the $X_i$ to be independent; instead they form a martingale difference sequence, $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$.

  6-7. Concentration of Scalar Martingales: Freedman's Inequality. Bernstein's inequality requires independence; Freedman's inequality handles martingales. For a martingale difference sequence $X_i$ with $|X_i| \le R$ and predictable quadratic variation $W = \sum_i \mathbb{E}[X_i^2 \mid X_1, \dots, X_{i-1}]$, Freedman's inequality gives $\Pr\!\left[|X| \ge t \text{ and } W \le \sigma^2\right] \le 2\exp\!\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right)$.
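
A toy martingale to exercise Freedman's inequality; the past-dependent step-size rule is an arbitrary illustration of dependence, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t, R = 100, 10_000, 30.0, 1.0

# Martingale differences X_i = eps_i * s_i: eps_i is a Rademacher sign and the
# step size s_i in [0, 1] depends on the past, so the X_i are NOT independent,
# yet E[X_i | past] = 0. W = sum_i s_i^2 is the predictable quadratic variation.
hits = 0
for _ in range(trials):
    S, W = 0.0, 0.0
    for _ in range(n):
        s = min(1.0, 0.1 + abs(S) / 10.0)   # step size chosen from the past
        W += s * s
        S += s * (1.0 if rng.random() < 0.5 else -1.0)
    hits += (abs(S) >= t) and (W <= n)      # Freedman event: |X| >= t and W <= sigma^2

bound = 2 * np.exp(-(t**2 / 2) / (n + R * t / 3))
print(f"empirical: {hits / trials:.5f}, Freedman bound: {bound:.5f}")
```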

  8. Concentration of Matrix Random Variables: Matrix Bernstein's Inequality (Tropp 11). Random symmetric $d \times d$ matrices $X_i$ are independent, with $\mathbb{E}[X_i] = 0$ and $\|X_i\| \le R$. With $\sigma^2 = \left\|\sum_i \mathbb{E}[X_i^2]\right\|$, it gives $\Pr\!\left[\left\|\sum_i X_i\right\| \ge t\right] \le 2d\exp\!\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right)$.

  9. Concentration of Matrix Martingales: Matrix Freedman's Inequality (Tropp 11). Random symmetric $d \times d$ matrices $X_i$, no longer independent, with $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$ and $\|X_i\| \le R$, give $\Pr\!\left[\left\|\sum_i X_i\right\| \ge t \text{ and } \|W\| \le \sigma^2\right] \le 2d\exp\!\left(-\frac{t^2/2}{\sigma^2 + Rt/3}\right)$. E.g. if $R \le 1$ and $\sigma^2 \le 1$, the tail bound is $2d\exp(-t^2/4)$ for $t \le 3$.

  10. Concentration of Matrix Martingales: Matrix Freedman's Inequality (Tropp 11). Here $W = \sum_i \mathbb{E}[X_i^2 \mid X_1, \dots, X_{i-1}]$ is the predictable quadratic variation: the matrix analogue of the conditional variance accumulated so far.
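
A numerical sketch of the independent (matrix Bernstein) case from slide 8, using random-sign combinations of fixed symmetric matrices; the martingale version would let each term depend on the past:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, trials = 10, 200, 2000

# X_i = eps_i * A_i: random signs times fixed symmetric matrices, ||A_i|| = R = 1.
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
A /= np.linalg.norm(A, ord=2, axis=(1, 2), keepdims=True)

sigma2 = np.linalg.norm(np.einsum('nij,njk->ik', A, A), ord=2)  # ||sum_i E[X_i^2]||
t = 5.0 * np.sqrt(sigma2)

eps = rng.choice([-1.0, 1.0], size=(trials, n))
norms = np.linalg.norm(np.einsum('tn,nij->tij', eps, A), ord=2, axis=(1, 2))

bound = 2 * d * np.exp(-(t**2 / 2) / (sigma2 + 1.0 * t / 3))
print(f"empirical P[||sum X_i|| >= t]: {np.mean(norms >= t):.4f}")
print(f"matrix Bernstein bound:        {bound:.4f}")
```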

  11-12. Laplacian Matrices: Graph $G = (V, E)$ with positive edge weights $w : E \to \mathbb{R}_{>0}$, $n = |V|$ vertices, $m = |E|$ edges, and its $n \times n$ Laplacian matrix $L$.

  13. Laplacian Matrices: $L = D - A$, where $A$ is the weighted adjacency matrix of the graph and $D$ is the diagonal matrix of weighted degrees, $D(v, v) = \sum_{u : (u,v) \in E} w(u, v)$.

  14. Laplacian Matrices: $L$ is a symmetric matrix, all off-diagonals are non-positive, and every row sums to zero: $L\mathbf{1} = 0$.

  15. Laplacian of a Graph: $L = \sum_{(u,v) \in E} w(u, v)\,(e_u - e_v)(e_u - e_v)^\top$, a sum of one rank-1 PSD term per edge.
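
A minimal sketch of these definitions, building $L = D - A$ from a weighted edge list (the helper name laplacian and the example graph are mine) and checking the properties above, including the per-edge rank-1 decomposition:

```python
import numpy as np

def laplacian(n, edges):
    """L = D - A for a weighted graph given as (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, 0.5)]
L = laplacian(4, edges)

assert np.allclose(L, L.T)                       # symmetric
assert np.allclose(L.sum(axis=1), 0)             # row sums are zero
assert np.min(np.linalg.eigvalsh(L)) >= -1e-12   # PSD

# Edge decomposition: L = sum_e w_e (e_u - e_v)(e_u - e_v)^T
I = np.eye(4)
L2 = sum(w * np.outer(I[u] - I[v], I[u] - I[v]) for u, v, w in edges)
assert np.allclose(L, L2)
```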

  16. Laplacian Matrices: [ST04]: solving Laplacian linear equations $Lx = b$ in nearly-linear $O(m \log^{O(1)} n)$ time. [KS16]: a simple algorithm, running in $O(m \log^3 n)$ time.

  17. Solving a Laplacian Linear Equation by Gaussian Elimination: Find $U$, an upper triangular matrix, s.t. $L = U^\top U$. Then $x = U^{-1} U^{-\top} b$. Easy to apply $U^{-1}$ and $U^{-\top}$ by back and forward substitution.

  18. Solving a Laplacian Linear Equation by Approximate Gaussian Elimination: Find $U$, an upper triangular matrix, s.t. $U^\top U \approx L$ and $U$ is sparse. Then $O(\log(1/\epsilon))$ iterations suffice to get an $\epsilon$-approximate solution $\tilde{x}$.
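
A small worked example of the exact factor-and-substitute pipeline using SciPy's Cholesky routine. One detail the slides gloss over, handled here by "grounding" one vertex: a connected graph's Laplacian is singular, with kernel spanned by the all-ones vector.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Path graph 0-1-2-3 with unit weights; L is singular (L @ ones = 0).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
b = np.array([1.0, -2.0, 0.5, 0.5])        # sums to 0, so b is in range(L)

# Ground the last vertex: the principal minor is symmetric positive definite.
U = cholesky(L[:-1, :-1])                   # upper triangular, L_g = U^T U
y = solve_triangular(U, b[:-1], trans='T')  # forward solve: U^T y = b_g
x = np.append(solve_triangular(U, y), 0.0)  # back solve:    U x   = y

assert np.allclose(L @ x, b)
print(x)
```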

  19. Approximate Gaussian Elimination. Theorem [KS]: When $L$ is an $n \times n$ Laplacian matrix with $m$ non-zeros, we can find in $O(m \log^3 n)$ time an upper triangular matrix $U$ with $O(m \log^3 n)$ non-zeros, s.t. $U^\top U \approx L$ w.h.p.

  20. Additive View of Gaussian Elimination: Find $U$, an upper triangular matrix, s.t. $L = U^\top U = \sum_{i=1}^{n} u_i u_i^\top$, where $u_i$ is the $i$-th row of $U$ written as a column vector.

  21. Additive View of Gaussian Elimination: Find the rank-1 matrix $\frac{1}{L(1,1)}\,L(:,1)\,L(1,:)$ that agrees with $L$ on the first row and column.

  22. Additive View of Gaussian Elimination: Subtract the rank-1 matrix. We have eliminated the first variable.

  23. Additive View of Gaussian Elimination: The remaining matrix is PSD (it is the Schur complement of $L$ after eliminating the first variable).

  24. Additive View of Gaussian Elimination: Find the rank-1 matrix that agrees with the remaining matrix on the next row and column.

  25. Additive View of Gaussian Elimination: Subtract the rank-1 matrix. We have eliminated the second variable.

  26-28. Additive View of Gaussian Elimination: Repeat until all parts of $L$ are written as rank-1 terms: $L = \sum_{i=1}^{n} u_i u_i^\top = U^\top U$.

  29-31. Additive View of Gaussian Elimination: What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian. After each elimination we get a new Laplacian, i.e. the Laplacian of a new graph.
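
A sketch of one elimination step in the additive view, verifying both claims: subtracting the rank-1 term zeroes the first row and column, and what remains is again a Laplacian (symmetric, zero row sums, non-positive off-diagonals, PSD). The example graph is mine:

```python
import numpy as np

# Laplacian of a weighted 4-vertex graph (edges: 0-1, 1-2, 2-3, 0-3, 1-3).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, 0.5)]
L = np.zeros((4, 4))
for u, v, w in edges:
    L[[u, v], [u, v]] += w
    L[u, v] -= w
    L[v, u] -= w

# Rank-1 matrix that agrees with L on the first row and column:
r1 = np.outer(L[:, 0], L[0, :]) / L[0, 0]
S = L - r1   # subtract it: variable 0 is eliminated

assert np.allclose(S[0, :], 0) and np.allclose(S[:, 0], 0)
assert np.allclose(S, S.T) and np.allclose(S.sum(axis=1), 0)  # still a Laplacian
off = S[1:, 1:][~np.eye(3, dtype=bool)]
assert np.all(off <= 1e-12)                                   # off-diagonals <= 0
assert np.min(np.linalg.eigvalsh(S)) >= -1e-12                # and PSD
```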

  32-33. Why is Gaussian Elimination Slow? Solving $Lx = b$ by Gaussian Elimination can take $O(n^3)$ time. The main issue is fill.

  34. Why is Gaussian Elimination Slow? The main issue is fill: eliminating a vertex $v$ creates a clique on the neighbors of $v$ in the new Laplacian.

  35. Why is Gaussian Elimination Slow? The main issue is fill, but the new Laplacian's cliques are themselves Laplacians, and Laplacian cliques can be sparsified!
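
A tiny illustration of fill: eliminating the hub of a star turns its $k$ incident edges into a clique of $\binom{k}{2}$ edges on the neighbors:

```python
import numpy as np

# Star graph: hub 0 joined to leaves 1..5 with unit weights (5 edges).
k = 5
L = np.zeros((k + 1, k + 1))
L[0, 0] = k
for i in range(1, k + 1):
    L[i, i] = 1.0
    L[0, i] = L[i, 0] = -1.0

# Eliminate the hub: the Schur complement is a clique on its neighbors.
S = L - np.outer(L[:, 0], L[0, :]) / L[0, 0]
fill_edges = np.count_nonzero(np.triu(S, 1))
print(f"edges before: {k}, edges after eliminating the hub: {fill_edges}")  # 5 -> 10
```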

  36. Gaussian Elimination: Pick a vertex $v$ to eliminate. Add the clique created by eliminating $v$. Repeat until done.

  37. Approximate Gaussian Elimination: Pick a vertex $v$ to eliminate. Add the clique created by eliminating $v$. Repeat until done.

  38. Approximate Gaussian Elimination: Pick a random vertex $v$ to eliminate. Add the clique created by eliminating $v$. Repeat until done.

  39. Approximate Gaussian Elimination: Pick a random vertex $v$ to eliminate. Sample the clique created by eliminating $v$. Repeat until done. Resembles randomized Incomplete Cholesky.
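
A toy dense-matrix sketch of the randomized elimination loop. The sampler below (a fixed number of importance samples per clique, reweighted so the expectation equals the exact clique) is a simplified stand-in for the actual [KS16] sampling scheme, and eliminate_sampled / clique_laplacian are hypothetical helper names:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

def clique_laplacian(pairs, weights, n):
    """Laplacian of the given weighted edges on n vertices."""
    C = np.zeros((n, n))
    for (a, b), w in zip(pairs, weights):
        C[a, a] += w
        C[b, b] += w
        C[a, b] -= w
        C[b, a] -= w
    return C

def eliminate_sampled(L, v, s, rng):
    """One step of randomized elimination at vertex v (toy, dense).

    Returns (u, S): the rank-1 factor u (so u u^T agrees with L on row/column v)
    and the remaining matrix S, where the exact clique created by eliminating v
    is replaced by s importance samples with E[sample] = exact clique."""
    n = L.shape[0]
    u = L[:, v] / np.sqrt(L[v, v])
    S = L - np.outer(u, u)                       # exact Schur complement
    nbrs = [j for j in range(n) if j != v and L[v, j] < 0]
    if len(nbrs) < 2 or s <= 0:
        return u, S
    w = {j: -L[v, j] for j in nbrs}
    W = sum(w.values())
    pairs = list(combinations(nbrs, 2))
    cw = np.array([w[a] * w[b] / W for a, b in pairs])   # exact clique weights
    p = cw / cw.sum()
    picks = rng.choice(len(pairs), size=s, p=p)
    sw = cw[picks] / (s * p[picks])                      # unbiased reweighting
    S -= clique_laplacian(pairs, cw, n)                  # swap the exact clique ...
    S += clique_laplacian([pairs[k] for k in picks], sw, n)  # ... for the sample
    return u, S

# One sampled elimination step on a small example graph:
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, 0.5)]
L = clique_laplacian([(a, b) for a, b, _ in edges], [w for *_, w in edges], 4)
u, S = eliminate_sampled(L, v=0, s=2, rng=rng)
```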

  40. Approximating Matrices by Sampling. Goal: $U^\top U \approx L$. Approach: show that the sampled matrix is concentrated around its expectation. This gives $U^\top U \approx L$ with high probability.

  41. Approximating Matrices in Expectation: Consider eliminating the first variable. The original Laplacian splits as $L = u_1 u_1^\top + (\text{remaining graph} + \text{clique})$: a rank-1 term plus a new Laplacian.

  42-44. Approximating Matrices in Expectation: Consider eliminating the first variable, replacing the clique by a sparsified clique: $\tilde{L} = u_1 u_1^\top + (\text{remaining graph} + \text{sparsified clique})$.

  45. Approximating Matrices in Expectation: Suppose $\mathbb{E}[\text{sparsified clique}] = \text{clique}$.

  46. Approximating Matrices in Expectation: Then $\mathbb{E}\!\left[u_1 u_1^\top + (\text{remaining graph} + \text{sparsified clique})\right] = L$.
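
Continuing the eliminate_sampled sketch above, a Monte Carlo check that one sampled step is unbiased, i.e. $\mathbb{E}[u_1 u_1^\top + \tilde{S}] = L$:

```python
import numpy as np

# E[u u^T + S] should equal L (uses clique_laplacian / eliminate_sampled above).
rng = np.random.default_rng(4)
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, 0.5)]
L = clique_laplacian([(a, b) for a, b, _ in edges], [w for *_, w in edges], 4)

trials = 20_000
acc = np.zeros_like(L)
for _ in range(trials):
    u, S = eliminate_sampled(L, v=0, s=2, rng=rng)
    acc += np.outer(u, u) + S
acc /= trials
print(np.abs(acc - L).max())   # Monte Carlo error, shrinking like 1/sqrt(trials)
```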

  47. Approximating Matrices in Expectation: Let $\tilde{L}_k$ be our approximation after $k$ eliminations. If we ensure at each step that $\mathbb{E}[\text{sparsified clique}] = \text{clique}$, then $\mathbb{E}[\tilde{L}_k \mid \tilde{L}_{k-1}] = \tilde{L}_{k-1}$: the approximations form a matrix martingale.

  48. Approximation? Approximate Gaussian Elimination: Find $U$, an upper triangular matrix, s.t. $U^\top U \approx_\epsilon L$, i.e. $(1-\epsilon) L \preceq U^\top U \preceq (1+\epsilon) L$.

  49. Essential Tools: Isotropic position. Normalizing by $L^{-1/2}$ (the pseudo-inverse square root), the goal is now $\left\| L^{-1/2}\, U^\top U\, L^{-1/2} - \Pi \right\| \le \epsilon$, where $\Pi$ projects off the kernel of $L$. PSD order: $A \preceq B$ iff $x^\top A x \le x^\top B x$ for all $x$.

  50. Matrix Concentration: Edge Variables. Zero-mean variables: sample each edge $e$, keeping it with weight $w_e / p_e$ with probability $p_e$, and weight $0$ otherwise; centering gives $X_e = (\text{kept weight} - w_e)\,L_e$ with $\mathbb{E}[X_e] = 0$, where $L_e = (e_u - e_v)(e_u - e_v)^\top$. Isotropic position variables: $\bar{X}_e = L^{-1/2} X_e L^{-1/2}$.
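
A sketch of these edge variables on the running example graph; the pseudo-inverse square root handles the kernel of $L$, and the particular edge, weight, and probability choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Laplacian of the same small graph; kernel of L is span of the all-ones vector.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, 0.5)]
L = np.zeros((4, 4))
for a, b, w in edges:
    L[[a, b], [a, b]] += w
    L[a, b] -= w
    L[b, a] -= w

# Pseudo-inverse square root N, so that N L N = projection off the kernel.
lam, Q = np.linalg.eigh(L)
isqrt = np.array([1 / np.sqrt(x) if x > 1e-9 else 0.0 for x in lam])
N = Q @ np.diag(isqrt) @ Q.T

# Zero-mean edge variable for edge e = (0, 1): keep weight w_e / p_e with
# probability p_e, else 0; subtracting the mean w_e makes it zero-mean.
w_e, p_e = 2.0, 0.5
I4 = np.eye(4)
Le = np.outer(I4[0] - I4[1], I4[0] - I4[1])
def X():
    kept = (w_e / p_e) if rng.random() < p_e else 0.0
    return (kept - w_e) * Le

# Isotropic position: conjugate by N. The empirical mean tends to 0.
mean = sum(N @ X() @ N for _ in range(20_000)) / 20_000
print(np.linalg.norm(mean, ord=2))   # ~ 0
```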
