
Concentration of Matrix Martingales and Laplacian Linear Equations

This article explores the concentration properties of matrix martingales and their application in solving Laplacian linear equations. It discusses Bernstein's and Freedman's inequalities, approximation between graphs, sparsification of Laplacian matrices, and Gaussian elimination on Laplacians.




  1. Matrix Martingales in Randomized Numerical Linear Algebra

  2. Concentration of Scalar Random Variables Random X_1, …, X_k ∈ ℝ are independent and zero-mean. Is |∑_i X_i| small with high probability?

  3. Concentration of Scalar Random Variables Bernstein’s Inequality Random X_1, …, X_k are independent and zero-mean, with |X_i| ≤ R and total variance σ² = ∑_i E[X_i²]. Bernstein’s inequality gives Pr[ |∑_i X_i| ≥ t ] ≤ 2 exp( −t² / (2σ² + 2Rt/3) ). E.g. if R ≤ 1 and σ² ≤ 1, then |∑_i X_i| ≤ O(log(1/δ)) with probability 1 − δ.
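
As a sanity check on Bernstein's inequality, here is a minimal numpy simulation (an illustration, not from the talk): sums of bounded, zero-mean uniform variables, with the empirical tail compared against the bound.

```python
import numpy as np

# Empirical check of Bernstein's inequality: sums of k independent
# Uniform(-1, 1) variables, so each X_i is zero-mean, |X_i| <= R = 1,
# and E[X_i^2] = 1/3.
rng = np.random.default_rng(0)
k, trials = 100, 2000
R = 1.0
sigma2 = k / 3.0                            # sum_i E[X_i^2]
S = rng.uniform(-1, 1, size=(trials, k)).sum(axis=1)

t = 20.0
bound = 2 * np.exp(-t**2 / (2 * sigma2 + 2 * R * t / 3))
empirical_tail = np.mean(np.abs(S) >= t)    # fraction of large deviations
```

Here t is about 3.5 standard deviations, so the true tail is far below the bound, and the empirical fraction should land under it as well.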

  4. Concentration of Scalar Martingales Random X_1, …, X_k are no longer independent: each X_i may depend on X_1, …, X_{i−1}, but E[X_i | X_1, …, X_{i−1}] = 0 (a martingale difference sequence).

  5. Concentration of Scalar Martingales Is |∑_i X_i| still small with high probability when the X_i are only a martingale difference sequence, not independent?

  6. Concentration of Scalar Martingales Freedman’s Inequality extends Bernstein’s Inequality to martingales: the fixed variance σ² is replaced by the predictable quadratic variation W = ∑_i E[X_i² | X_1, …, X_{i−1}], which is itself random.

  7. Concentration of Scalar Martingales Freedman’s Inequality Random X_1, …, X_k form a martingale difference sequence with |X_i| ≤ R. Freedman’s inequality gives Pr[ |∑_i X_i| ≥ t and W ≤ σ² ] ≤ 2 exp( −t² / (2σ² + 2Rt/3) ).

  8. Concentration of Matrix Random Variables Matrix Bernstein’s Inequality (Tropp ’11) Random symmetric d×d matrices X_1, …, X_k are independent and zero-mean, with ‖X_i‖ ≤ R and ‖∑_i E[X_i²]‖ ≤ σ². It gives Pr[ ‖∑_i X_i‖ ≥ t ] ≤ 2d exp( −t² / (2σ² + 2Rt/3) ).

  9. Concentration of Matrix Martingales Matrix Freedman’s Inequality (Tropp ’11) Random symmetric d×d matrices X_1, …, X_k with E[X_i | X_1, …, X_{i−1}] = 0 and ‖X_i‖ ≤ R. It gives Pr[ ‖∑_i X_i‖ ≥ t and ‖W‖ ≤ σ² ] ≤ 2d exp( −t² / (2σ² + 2Rt/3) ). E.g. if R ≤ 1 and σ² ≤ 1, then ‖∑_i X_i‖ ≤ O(log(d/δ)) with probability 1 − δ.

  10. Concentration of Matrix Martingales Matrix Freedman’s Inequality (Tropp ’11) Predictable quadratic variation: W = ∑_i E[X_i² | X_1, …, X_{i−1}], the matrix-valued analogue of the variance in Bernstein’s inequality.
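
As a toy illustration of the predictable quadratic variation (my example, not the talk's): for increments X_i = ε_i·A_i with independent random signs ε_i and fixed symmetric matrices A_i, the conditional mean is zero and W = ∑_i A_i² is deterministic and PSD.

```python
import numpy as np

# Toy matrix martingale: X_i = eps_i * A_i with independent random signs
# eps_i and fixed symmetric A_i. Conditioned on the past, each increment
# averages to zero over eps_i = +1 / -1, and the predictable quadratic
# variation W = sum_i E[X_i^2 | past] = sum_i A_i^2 is deterministic.
rng = np.random.default_rng(2)
d, k = 4, 50
A = rng.standard_normal((k, d, d))
A = (A + A.transpose(0, 2, 1)) / 2          # make each A_i symmetric

cond_mean = ((+1) * A + (-1) * A) / 2       # average over the two signs
W = np.sum(A @ A, axis=0)                   # predictable quadratic variation
sigma2 = np.linalg.norm(W, ord=2)           # ||W||: the variance proxy
```

Feeding this ‖W‖ into Matrix Freedman's inequality bounds ‖∑_i X_i‖ exactly as σ² does in the independent case.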

  11. Laplacian Matrices Graph G = (V, E) with n = |V| vertices, edge weights w : E → ℝ₊, and an n×n Laplacian matrix L.

  12. Laplacian Matrices Graph G = (V, E), edge weights w. The Laplacian is L = ∑_{(u,v)∈E} w(u,v) · b_{uv} b_{uv}ᵀ, where b_{uv} = e_u − e_v.

  13. Laplacian Matrices L = D − A, where A is the weighted adjacency matrix of the graph and D is the diagonal matrix of weighted degrees, for a graph G = (V, E) with edge weights w.

  14. Laplacian Matrices A Laplacian is a symmetric matrix: all off-diagonals are non-positive, and each diagonal entry equals the total weight of its row’s off-diagonals, so L·1 = 0.

  15. Laplacian of a Graph
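
A concrete construction, as a small numpy sketch (an illustration, not part of the talk): build L = D − A for a toy weighted graph and confirm it matches the sum-of-rank-one edge form.

```python
import numpy as np

# Toy weighted graph on 4 vertices, edges as (u, v, weight).
n = 4
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.0)]

A = np.zeros((n, n))                        # weighted adjacency matrix
for u, v, w in edges:
    A[u, v] = A[v, u] = w
D = np.diag(A.sum(axis=1))                  # diagonal of weighted degrees
L = D - A

# Same matrix from the sum of rank-one edge terms w_e * b_e b_e^T.
L2 = np.zeros((n, n))
for u, v, w in edges:
    b = np.zeros(n)
    b[u], b[v] = 1.0, -1.0
    L2 += w * np.outer(b, b)
```

Both constructions give the same matrix, with zero row sums and non-negative eigenvalues.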

  16. Approximation between graphs PSD order: A ⪯ B iff xᵀAx ≤ xᵀBx for all x. Matrix approximation: A ≈_ε B iff (1−ε)B ⪯ A and A ⪯ (1+ε)B. Graphs: G ≈_ε H iff L_G ≈_ε L_H. Implies multiplicative approximation to cuts.
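
These definitions translate directly into eigenvalue checks. A minimal numpy sketch (mine, not from the talk), testing the PSD order via the smallest eigenvalue of the difference:

```python
import numpy as np

# PSD order: A <= B iff the smallest eigenvalue of B - A is >= 0.
# eps-approximation: (1 - eps) L_G <= L_H <= (1 + eps) L_G.
def is_psd(M, tol=1e-9):
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

def approximates(L_H, L_G, eps):
    return is_psd(L_H - (1 - eps) * L_G) and is_psd((1 + eps) * L_G - L_H)

L_G = np.array([[ 2., -1., -1.],
                [-1.,  2., -1.],
                [-1., -1.,  2.]])           # triangle graph, unit weights

# A mildly reweighted copy is a 0.1-approximation; a doubled copy is not.
close = approximates(1.05 * L_G, L_G, eps=0.1)
far = approximates(2.0 * L_G, L_G, eps=0.1)
```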

  17. Sparsification of Laplacians Every graph on n vertices, for any ε, has an ε-sparsifier with O(n/ε²) edges [BSS’09, LS’17]. Sampling edges independently by leverage scores w.h.p. gives an ε-sparsifier with O(n log n / ε²) edges [Spielman–Srivastava’08]. Many applications!
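
Leverage scores are concrete to compute on small graphs. A numpy sketch (using the dense pseudo-inverse for illustration, not the fast machinery of these papers) that also checks the standard identity that the leverage scores of a connected graph sum to n − 1:

```python
import numpy as np

# Leverage score of edge e: tau_e = w_e * b_e^T L^+ b_e (weight times
# effective resistance). On the complete graph K_n every pair has
# effective resistance 2/n, and the scores always sum to n - 1.
n = 5
edges = [(u, v, 1.0) for u in range(n) for v in range(u + 1, n)]

L = np.zeros((n, n))
B = []
for u, v, w in edges:
    b = np.zeros(n)
    b[u], b[v] = 1.0, -1.0
    L += w * np.outer(b, b)
    B.append(b)

Lpinv = np.linalg.pinv(L)                   # dense pseudo-inverse (small n)
tau = np.array([w * (b @ Lpinv @ b) for b, (_, _, w) in zip(B, edges)])
```

Sampling each edge with probability proportional to tau_e (and reweighting by 1/p_e) is the Spielman–Srivastava scheme in this notation.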

  18. Graph Semi-Streaming Sparsification m edges of a graph arriving as a stream. GOAL: maintain an ε-sparsifier while minimizing working space (& time). [Ahn–Guha ’09]: an Õ(n/ε²)-space ε-cut-sparsifier. Used scalar martingales.

  19. Independent Edge Samples Sample each edge e = (u, v) independently: Y_e = (w_e/p_e) · b_e b_eᵀ with probability p_e, and Y_e = 0 otherwise. Expectation E[Y_e] = w_e b_e b_eᵀ, so the centered sample X_e = Y_e − w_e b_e b_eᵀ is zero-mean.

  20. Matrix Martingale? Zero mean? So what if we just sparsify again? It’s a martingale! Every time the accumulated graph grows past a size threshold, sparsify back down, and continue; conditioned on the past, each sampling step is zero-mean.

  21. Independent Edge Samples Expectation E[Y_e] = w_e b_e b_eᵀ. The centered sample is zero-mean: X_e = (w_e/p_e) b_e b_eᵀ − w_e b_e b_eᵀ with probability p_e, and −w_e b_e b_eᵀ otherwise.

  22. Repeated Edge Sampling Resample the edges of the current sparsifier rather than the original graph. Conditioned on the past, each sample is unbiased: the centered increment is (w_e/p_e) b_e b_eᵀ − w_e b_e b_eᵀ with probability p_e, and −w_e b_e b_eᵀ otherwise.

  23. Repeated Edge Sampling The same holds in every round: conditioned on everything sampled so far, each centered increment has expectation zero.

  24. Repeated Edge Sampling Conditioned on the past, every centered increment (keep the edge at weight w_e/p_e w. prob. p_e, drop it o.w.) is zero-mean, so the accumulated error forms a matrix martingale.
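
The zero-mean claim for a single sampling step can be checked exhaustively, since each step has only two outcomes. A small numpy sketch with hypothetical values for w and p:

```python
import numpy as np

# One sampling step for a single edge e = (u, v): keep it at weight
# w / p with probability p, drop it otherwise. Both outcomes are
# enumerated, so the conditional expectation is computed exactly.
# w and p are arbitrary illustrative values.
n, u, v = 3, 0, 1
w, p = 2.0, 0.25
b = np.zeros(n)
b[u], b[v] = 1.0, -1.0
edge_term = w * np.outer(b, b)              # the edge's current contribution

Y_keep = (w / p) * np.outer(b, b)           # sampled value, probability p
Y_drop = np.zeros((n, n))                   # sampled value, probability 1 - p
expectation = p * Y_keep + (1 - p) * Y_drop

mean_X = p * (Y_keep - edge_term) + (1 - p) * (Y_drop - edge_term)
```

Because this holds conditioned on whatever graph the previous rounds produced, stacking the rounds gives exactly a matrix martingale.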

  25. Repeated Edge Sampling It works – and it couldn’t be simpler. [K.PPS’17]: a one-pass, O(n log n / ε²)-space, O(n log n / ε²)-edge ε-sparsifier. Shaves a log off many algorithms that do repeated matrix sampling…

  26. Approximation? Do we in fact get L_H ≈_ε L_G, i.e. (1−ε)L_G ⪯ L_H and L_H ⪯ (1+ε)L_G?

  27. Isotropic position Normalize by the original Laplacian: replace each sample Y with L^{+/2} Y L^{+/2}. The goal is now ‖∑_i X_i‖ ≤ ε.
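
The effect of this normalization can be checked directly: conjugating L by L^{+/2} yields the orthogonal projection onto range(L), so spectral approximation becomes a plain operator-norm bound. A small numpy sketch (mine, for illustration):

```python
import numpy as np

# Conjugating L by L^{+/2} puts it in isotropic position: the result is
# the orthogonal projection onto range(L), with eigenvalues 0 and 1.
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])             # triangle Laplacian

vals, vecs = np.linalg.eigh(L)
inv_sqrt = np.array([1.0 / np.sqrt(v) if v > 1e-9 else 0.0 for v in vals])
L_pinv_half = vecs @ np.diag(inv_sqrt) @ vecs.T

Pi = L_pinv_half @ L @ L_pinv_half          # projection onto range(L)
proj_eigs = np.sort(np.linalg.eigvalsh(Pi))
```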

  28. Concentration of Matrix Martingales Matrix Freedman’s Inequality Random symmetric X_i with zero conditional mean: to apply the inequality we must bound the norm ‖X_i‖ ≤ R of every increment.

  29. Concentration of Matrix Martingales Matrix Freedman’s Inequality Random symmetric X_i with zero conditional mean: we must also bound the predictable quadratic variation, ‖∑_i E[X_i² | X_1, …, X_{i−1}]‖ ≤ σ².

  30. Norm Control Can there be many edges with large norm? In isotropic position an increment’s norm is its leverage score divided by its sampling probability, so sampling by leverage scores (keep the edge w. probability p_e, drop it o.w.) keeps every increment’s norm within budget.

  31. Norm Control If we lose track of the original graph, we can’t estimate the norm. A stopping-time argument fixes this: stop the martingale before it can enter the bad regions where the estimates break down.

  32. Variance Control But then edges get resampled. Each round of resampling adds variance, but the contributions form, roughly, a geometric sequence, so the total variance stays bounded.

  33. Resparsification Every time the accumulated graph grows past a size threshold, sparsify back down, and continue. [K.PPS’17]: an O(n log n / ε²)-space, O(n log n / ε²)-edge ε-sparsifier. Shaves a log off many algorithms that do repeated matrix sampling…

  34. Laplacian Linear Equations [ST04]: solving Laplacian linear equations Lx = b in nearly-linear time. [KS16]: a simple algorithm.

  35. Solving a Laplacian Linear Equation Gaussian Elimination Find U, an upper triangular matrix, s.t. L = UᵀU. Then L⁺ = U⁺ (Uᵀ)⁺. Easy to apply U⁺ and (Uᵀ)⁺ by back- and forward-substitution.
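
Exact Gaussian elimination on a symmetric positive matrix is Cholesky factorization. A minimal numpy sketch (an illustration, not the talk's method): since L is singular (L·1 = 0), ground one vertex so the remaining principal submatrix is positive definite, then solve by two triangular solves.

```python
import numpy as np

# L is singular (L @ 1 = 0), so ground vertex n-1: the principal
# submatrix on the remaining vertices is positive definite and has an
# ordinary Cholesky factorization Lg = C C^T (so U = C^T is upper
# triangular).
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])      # triangle Laplacian
b = np.array([1., 0., -1.])         # right-hand side orthogonal to 1

Lg = L[:-1, :-1]                    # grounded (positive definite) Laplacian
C = np.linalg.cholesky(Lg)          # lower triangular factor
y = np.linalg.solve(C, b[:-1])      # solves the lower-triangular system C y = b'
x = np.zeros(3)
x[:-1] = np.linalg.solve(C.T, y)    # upper-triangular system C^T x' = y; x[2] = 0
```

For b orthogonal to the all-ones vector on a connected graph, the grounded solve recovers a solution of the full singular system Lx = b.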

  36. Solving a Laplacian Linear Equation Approximate Gaussian Elimination Find U, an upper triangular matrix, s.t. UᵀU ≈ L and U is sparse. Used as a preconditioner, O(log(1/ε)) iterations suffice to get an ε-approximate solution x̃.

  37. Approximate Gaussian Elimination Theorem [KS] When L is an n×n Laplacian matrix with m non-zeros, we can find in O(m log³ n) time an upper triangular matrix U with O(m log³ n) non-zeros, s.t. UᵀU ≈ L w.h.p.

  38. Additive View of Gaussian Elimination Find U, an upper triangular matrix, s.t. L = UᵀU.

  39. Additive View of Gaussian Elimination Find the rank-1 matrix that agrees with L on the first row and column: c₁c₁ᵀ / L(1,1), where c₁ is the first column of L.

  40. Additive View of Gaussian Elimination Subtract the rank-1 matrix. We have eliminated the first variable.

  41. Additive View of Gaussian Elimination The remaining matrix is PSD.

  42. Additive View of Gaussian Elimination Find the rank-1 matrix that agrees with our matrix on the next row and column.

  43. Additive View of Gaussian Elimination Subtract the rank-1 matrix. We have eliminated the second variable.

  44. Additive View of Gaussian Elimination Repeat until all of the matrix is written as a sum of rank-1 terms.

  45. Additive View of Gaussian Elimination Repeat until all of the matrix is written as a sum of rank-1 terms.

  46. Additive View of Gaussian Elimination Repeat until all of the matrix is written as a sum of rank-1 terms.

  47. Additive View of Gaussian Elimination What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian.

  48. Additive View of Gaussian Elimination What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian.

  49. Additive View of Gaussian Elimination What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian. A new Laplacian!
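
The claim on these slides can be verified numerically: one rank-1 elimination step leaves a Schur complement that is again a Laplacian (zero row sums, non-positive off-diagonals, PSD). A small numpy sketch on a hypothetical 4-vertex example graph:

```python
import numpy as np

# One elimination step: subtract c c^T / L[0, 0], where c is the first
# column. The Schur complement on the remaining vertices has zero row
# sums, non-positive off-diagonals, and is PSD: a Laplacian again.
L = np.array([[ 3., -2., -1.,  0.],
              [-2.,  3.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1., -1.,  2.]])        # 4-vertex example graph

c = L[:, 0]
S = L - np.outer(c, c) / L[0, 0]            # rank-1 elimination of vertex 0
S_rest = S[1:, 1:]                          # the remaining matrix

row_sums = S_rest.sum(axis=1)
off_diag = S_rest - np.diag(np.diag(S_rest))
```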

  50. Why is Gaussian Elimination Slow? Solving Lx = b by Gaussian elimination can take O(n³) time. The main issue is fill: eliminating a vertex connects all of its neighbors, creating new non-zeros.
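
Fill is easy to see on a star graph: the leaves are pairwise non-adjacent, yet eliminating the center joins them all into a clique. A small numpy sketch (illustration only):

```python
import numpy as np

# Star graph: center 0 joined to n - 1 leaves, no leaf-leaf edges.
# Eliminating the center produces the Laplacian of a complete graph on
# the leaves (scaled by 1/(n - 1)): dense fill from one elimination.
n = 6
L = np.zeros((n, n))
for leaf in range(1, n):
    L[0, 0] += 1.0
    L[leaf, leaf] = 1.0
    L[0, leaf] = L[leaf, 0] = -1.0

c = L[:, 0]
S = (L - np.outer(c, c) / L[0, 0])[1:, 1:]  # eliminate the center

nnz_before = np.count_nonzero(L[1:, 1:])    # leaves alone: diagonal only
nnz_after = np.count_nonzero(S)             # dense clique Laplacian
```

This is why elimination order matters: eliminating high-degree vertices early can make the remaining matrix dense.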
