
Graph Algorithms in Numerical Linear Algebra: Past, Present, and Future


  1. Graph Algorithms in Numerical Linear Algebra: Past, Present, and Future John R. Gilbert MIT and UC Santa Barbara September 28, 2002

  2. Outline • Past (one extended example) • Present (two quick examples) • Future (6 or 8 examples)

  3. A few themes • Paths • Locality • Eigenvectors • Huge data sets • Multiple scales

  4. PAST

  5. Graphs and sparse Gaussian elimination (1961–) • Fill: new nonzeros in the factor • Symbolic Cholesky factorization: for j = 1 to n, add edges between j's higher-numbered neighbors • [Figure: graph G(A) and its chordal fill graph G+(A)]
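The elimination rule on this slide can be sketched in a few lines (a hypothetical helper, with vertices numbered 0..n−1 in elimination order, not anyone's production code):

```python
def symbolic_cholesky(n, edges):
    """Symbolic Cholesky fill: for j = 0..n-1, connect all of j's
    higher-numbered neighbors to each other; return the fill edges."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    fill = set()
    for j in range(n):
        higher = sorted(v for v in adj[j] if v > j)
        for i, a in enumerate(higher):
            for b in higher[i + 1:]:
                if b not in adj[a]:       # new edge => fill in the factor
                    adj[a].add(b)
                    adj[b].add(a)
                    fill.add((a, b))
    return fill
```

On the 4-cycle 0–1–2–3–0, eliminating vertex 0 joins its neighbors 1 and 3, producing the single fill edge (1, 3) and a chordal result.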

  6. Fill-reducing matrix permutations • Theory: approx optimal separators => approx optimal fill and flop count • Orderings: nested dissection, minimum degree, hybrids • Graph partitioning: spectral, geometric, multilevel

  7. Directed graph • A is square, unsymmetric, with nonzero diagonal • Edges go from rows to columns • Symmetric permutations PAP^T relabel the vertices • [Figure: matrix A and its directed graph G(A)]

  8. Symbolic Gaussian elimination • Add fill edge a → b if there is a path from a to b through lower-numbered vertices • [Figure: G(A) and the filled graph G+(A), the structure of L+U]

  9. Strongly connected components • Symmetric permutation to block triangular form • Solve Ax = b by block back substitution • Irreducible (strong Hall) diagonal blocks • Row and column partitions are independent of the choice of nonzero diagonal • Find P in linear time by depth-first search • [Figure: G(A) and the permuted matrix PAP^T]
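The linear-time permutation to block triangular form rests on a standard strongly-connected-components computation; a minimal sketch (Kosaraju's two-pass depth-first search, hypothetical names, not the dmperm code):

```python
def strongly_connected_components(n, edges):
    """Kosaraju's algorithm on vertices 0..n-1: DFS finishing order on G,
    then DFS on the reverse graph.  Components come out in an order that
    yields the block (upper) triangular permutation."""
    g = [[] for _ in range(n)]
    gr = [[] for _ in range(n)]
    for a, b in edges:
        g[a].append(b)
        gr[b].append(a)
    order, seen = [], [False] * n
    for s in range(n):                      # pass 1: postorder on G
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(g[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    comps, seen = [], [False] * n
    for s in reversed(order):               # pass 2: sweep reverse graph
        if seen[s]:
            continue
        comp, stack = [], [s]
        seen[s] = True
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in gr[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        comps.append(comp)
    return comps
```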

  10. Sparse-sparse triangular solve Lx = b • Symbolic: predict the structure of x by depth-first search from the nonzeros of b in the graph G(L^T) • Numeric: compute the values of x in topological order • Time = O(flops)
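A minimal sketch of the symbolic-then-numeric solve described above (hypothetical column-list data structure, not an actual solver API):

```python
def sparse_lower_solve(Lcols, diag, b):
    """Solve L x = b with sparse L and sparse b.
    Lcols[j]: list of (i, L[i][j]) with i > j; diag[j] = L[j][j];
    b: dict of nonzeros.  DFS from the nonzeros of b predicts the
    structure of x; values are computed in topological order, so the
    work depends on flops, not on the dimension n."""
    reach, visited = [], set()

    def dfs(j):                          # postorder DFS over G(L^T)
        visited.add(j)
        for i, _ in Lcols.get(j, []):
            if i not in visited:
                dfs(i)
        reach.append(j)

    for j in b:
        if j not in visited:
            dfs(j)
    xval, x = dict(b), {}
    for j in reversed(reach):            # reverse postorder = topological
        xj = xval.get(j, 0.0) / diag[j]
        if xj != 0.0:
            x[j] = xj
            for i, lij in Lcols.get(j, []):
                xval[i] = xval.get(i, 0.0) - lij * xj
    return x
```

With one nonzero in b, only the columns reachable from it are ever touched.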

  11. Column intersection graph • G∩(A) = G(A^TA) if there is no cancellation (⊇ otherwise) • Permuting the rows of A does not change G∩(A) • [Figure: A, A^TA, and G∩(A)]

  12. Filled column intersection graph • G+∩(A) = symbolic Cholesky factor of A^TA • In PA = LU, G(U) ⊆ G+∩(A) and G(L) ⊆ G+∩(A) • Tighter bound on L from symbolic QR • Bounds are best possible if A is strong Hall • [Figure: A, chol(A^TA), and G+∩(A)]

  13. Column elimination tree • Elimination tree of A^TA (if no cancellation) • Depth-first spanning tree of G+∩(A) • Represents column dependencies in various factorizations • [Figure: A, chol(A^TA), and the tree T∩(A)]

  14. That story continues . . . • [Matlab 4.0, 1992] • Left-looking column-by-column LU factorization • Depth-first search to predict structure of each column • Slow search limited speed • BLAS-1 limited cache reuse . . . • SuperLU: supernodal BLAS-2.5 LU • UMFPACK: multifrontal BLAS-3 LU [Matlab 6.5, 2002] • Ordering for nonsymmetric LU is still not well understood

  15. PRESENT

  16. Support graph preconditioning [http://www.cs.sandia.gov/~bahendr/support.html] • Define a preconditioner B for matrix A • Explicitly compute the factorization B = LU • Choose nonzero structure of B to make factoring cheap (using combinatorial tools from direct methods) • Prove bounds on condition number using both algebraic and combinatorial tools

  17. Support graph preconditioner: example [Vaidya] • A is symmetric positive definite with negative off-diagonal entries • B is a maximum-weight spanning tree for A (with diagonal modified to preserve row sums) • Preconditioning costs O(n) time per iteration • Eigenvalue bounds from graph embedding: product of congestion and dilation • Condition number at most n^2, independent of the coefficients • Many improvements exist • [Figure: G(A) and its spanning tree G(B)]
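A sketch of the construction, assuming Kruskal's algorithm for the maximum-weight spanning tree and dense list-of-lists input for clarity (hypothetical helper, not Vaidya's actual code):

```python
def vaidya_tree_preconditioner(A):
    """Build B: keep a maximum-weight spanning tree of G(A) (weights are
    the off-diagonal magnitudes), then set the diagonal of B so that its
    row sums match those of A."""
    n = len(A)
    edges = sorted((-abs(A[i][j]), i, j)
                   for i in range(n) for j in range(i) if A[i][j] != 0)
    parent = list(range(n))                  # union-find for Kruskal

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    B = [[0.0] * n for _ in range(n)]
    for _, i, j in edges:                    # heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            B[i][j] = B[j][i] = A[i][j]
    for i in range(n):                       # preserve the row sums of A
        B[i][i] = sum(A[i]) - sum(B[i][j] for j in range(n) if j != i)
    return B
```

Because B is a tree (plus diagonal), its factorization has no fill, so each preconditioning solve costs O(n).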

  18. Support graph preconditioner: example [Vaidya] • Can improve congestion and dilation by adding a few strategically chosen edges to B • Cost of factor + solve is O(n^1.75), or O(n^1.2) if A is planar • In experiments by Chen & Toledo, often better than drop-tolerance MIC for 2D problems, but not for 3D • [Figure: G(A) and the augmented tree G(B)]

  19. Algebraic framework [Gremban/Miller/Boman/Hendrickson] • The support of B for A is σ(A, B) = min{τ : x^T(tB − A)x ≥ 0 for all x and all t ≥ τ} • In the SPD case, σ(A, B) = max{λ : Ax = λBx} = λmax(A, B) • Theorem 1: If A, B are SPD then κ(B^-1 A) = σ(A, B) · σ(B, A) • Theorem 2: If V·W = U, then σ(U·U^T, V·V^T) ≤ ||W||_2^2

  20. Example of the support bound • Write A = U·U^T and B = V·V^T (edge–vertex incidence factors with edge weights a, b, c) • Find W with V·W = U • Then σ(A, B) ≤ ||W||_2^2 ≤ ||W||_∞ · ||W||_1 = (max row sum) × (max col sum) = (max congestion) × (max dilation) • [Figure: 3×3 worked example showing A, B, U, V, and W]

  21. Open problems I • Other subgraph constructions for better bounds on ||W||_2^2? • For example [Boman], ||W||_2^2 ≤ ||W||_F^2 = sum(w_ij^2) = sum of (weighted) dilations, and [Alon, Karp, Peleg, West] show there exists a spanning tree with average weighted dilation exp(O((log n · loglog n)^1/2)) = o(n^ε); this gives condition number O(n^(1+ε)) and solution time O(n^(1.5+ε)), compared to Vaidya's O(n^1.75) with an augmented spanning tree • Is there a construction that minimizes ||W||_2^2 directly?

  22. Open problems II • Make spanning tree methods more effective in 3D? • Vaidya: O(n^1.75) in general, O(n^1.2) in 2D • Issue: the 2D result uses bounded excluded minors, not just separators • Analyze a multilevel method in general? • Extend to more general finite element matrices?

  23. Link analysis of the world-wide web • Web page = vertex • Link = directed edge • Link matrix: A_ij = 1 if page i links to page j • [Figure: a small web graph and its link matrix]

  24. Web graph: PageRank (Google) [Brin, Page] • An important page is one that many important pages point to • Markov process: follow a random link most of the time; otherwise, go to any page at random • Importance = stationary distribution of the Markov process • Transition matrix is p*A + (1−p)*ones(size(A)), scaled so each column sums to 1 • Importance of page i is the i-th entry in the principal eigenvector of the transition matrix • But the matrix is 2,000,000,000 by 2,000,000,000
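The iteration can be sketched without ever forming the dense 2-billion-square matrix, by applying the transition operator to a vector (toy dictionary input and hypothetical function name; `p` is the link-following probability):

```python
def pagerank(links, p=0.85, iters=100):
    """Power iteration for the stationary distribution of the
    p*A + (1-p)*uniform Markov process.  links: page -> out-links."""
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    idx = {q: k for k, q in enumerate(pages)}
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - p) / n] * n            # teleport: (1-p) * uniform
        for page in pages:
            outs = links.get(page, [])
            r = p * rank[idx[page]]
            if outs:                          # follow a random out-link
                for q in outs:
                    new[idx[q]] += r / len(outs)
            else:                             # dangling page: jump anywhere
                for k in range(n):
                    new[k] += r / n
        rank = new
    return {q: rank[idx[q]] for q in pages}
```

Each pass costs O(number of links), which is what makes the 2-billion-page case feasible.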

  25. Web graph: Hubs and authorities [Kleinberg] • A good hub cites good authorities; a good authority is cited by good hubs • Each page has a hub score x_i and an authority score y_i • Repeat: y ← A·x; x ← A^T·y; normalize • Converges to the principal eigenvectors of AA^T and A^TA (the left and right singular vectors of A) • Example authorities: trekbikes.com, shimano.com, campagnolo.com; example hubs: bikereviews.com, phillybikeclub.org, yahoo.com/cycling
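The two-line iteration on the slide, as a runnable sketch (toy dictionary input and hypothetical function name):

```python
def hits(links, iters=50):
    """Kleinberg's iteration: auth <- A^T * hub, hub <- A * auth,
    normalizing after each step.  links: page -> out-links."""
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    hub = {q: 1.0 for q in pages}
    for _ in range(iters):
        auth = {q: sum(hub[p] for p, outs in links.items() if q in outs)
                for q in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {q: v / norm for q, v in auth.items()}
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth
```

The normalized vectors converge to the principal left and right singular vectors of the link matrix.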

  26. FUTURE

  27. Combinatorial issues in numerical linear algebra • Approximation theory for nonsymmetric LU ordering • Preconditioning • Complexity of matrix multiplication

  28. Biology as an information science “The rate-limiting step of genomics is computational science.” - Eric Lander • Sequence assembly, gene identification, alignment • Large-scale data mining • Protein modeling: discrete and continuous

  29. Linear and sublinear algorithms for huge problems “Can we understand anything interesting about our data when we do not even have time to read all of it?” - Ronitt Rubinfeld

  30. Fast Monte Carlo algorithms for finding low-rank approximations [Frieze, Kannan, Vempala] • Describe a rank-k matrix B0 that is within ε of the best rank-k approximation to A: ||A − B0||_F ≤ min_B ||A − B||_F + ε||A||_F • Correct with probability at least 1 − δ • Time polynomial in k, 1/ε, log(1/δ); independent of the size of A • Idea: using a clever distribution, sample an O(k)-by-O(k) submatrix of A and compute its SVD (one must be able to sample A with the right distribution)

  31. Approximating the MST weight in sublinear time [Chazelle, Rubinfeld, Trevisan] • Key subroutine: estimate the number of connected components of a graph, in time depending on the desired error but not on the size of the graph • Idea: for each vertex v define f(v) = 1/(size of the component containing v) • Then Σ_v f(v) = number of connected components • Estimate Σ_v f(v) by breadth-first search from a few random vertices
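A sketch of the component-counting estimator, assuming uniform vertex sampling and a truncated breadth-first search (hypothetical helper, not the authors' code):

```python
import random

def estimate_components(n, adj, samples=200, cap=100, seed=0):
    """Estimate the number of connected components of a graph on
    vertices 0..n-1.  Since f(v) = 1/|component(v)| sums to the
    component count, E[n * f(v)] over a random v equals that count.
    The search stops at `cap` vertices (a truncated component
    contributes ~0), keeping each sample's cost bounded."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        v = rng.randrange(n)
        seen, frontier = {v}, [v]
        while frontier and len(seen) < cap:   # truncated BFS from v
            w = frontier.pop(0)
            for u in adj.get(w, []):
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
        if len(seen) < cap:
            total += 1.0 / len(seen)
    return n * total / samples
```

The running time depends on `samples` and `cap`, not on the size of the graph, which is the point of the sublinear-time result.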

  32. Modeling and distributed control • Multiresolution modeling for nanomaterials and nanosystems • Distributed control for systems on micro to global scales Imagining a molecular-scale crossbar switch [Heath et al., UCLA]

  33. “Active surface” airjet paper mover [Berlin, Biegelsen et al., PARC] PC-hosted DSP control @ 1 kHz 12” x 12” board sensors: 32,000 gray-level pixels in 25 linear arrays 576 valves (144 per direction)

  34. A hard question How will combinatorial methods be used by people who don’t understand them in detail?

  35. Matrix division in Matlab x = A \ b; • Works for either full or sparse A • Is A square? no => use QR to solve least squares problem • Is A triangular or permuted triangular? yes => sparse triangular solve • Is A symmetric with positive diagonal elements? yes => attempt Cholesky after symmetric minimum degree • Otherwise => use LU on A(:, colamd(A))
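The dispatch above can be mimicked for dense matrices with NumPy (a sketch of the polyalgorithm's logic, not Matlab's actual implementation; the sparse orderings and permuted-triangular test are omitted):

```python
import numpy as np

def backslash(A, b):
    """Sketch of the x = A \\ b decision tree for dense input."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    if m != n:                                      # rectangular: least squares (QR inside)
        return np.linalg.lstsq(A, b, rcond=None)[0]
    if np.allclose(A, np.tril(A)) or np.allclose(A, np.triu(A)):
        return np.linalg.solve(A, b)                # triangular: substitution
    if np.allclose(A, A.T) and np.all(np.diag(A) > 0):
        try:                                        # symmetric, positive diagonal: try Cholesky
            L = np.linalg.cholesky(A)
            y = np.linalg.solve(L, b)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            pass                                    # not positive definite: fall through
    return np.linalg.solve(A, b)                    # general square: LU
```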

  36. Matching and depth-first search in Matlab • dmperm: Dulmage-Mendelsohn decomposition • Bipartite matching followed by strongly connected components • Square, full rank A: • [p, q, r] = dmperm(A); • A(p,q) has nonzero diagonal and is in block upper triangular form • also, strongly connected components of a directed graph • also, connected components of an undirected graph • Arbitrary A: • [p, q, r, s] = dmperm(A); • maximum-size matching in a bipartite graph • minimum-size vertex cover in a bipartite graph • decomposition into strong Hall blocks

  37. A few themes • Paths • Locality • Eigenvectors • Huge data sets • Multiple scales • Usability

  38. Morals • Things are clearer if you look at them from two directions • Combinatorial algorithms are pervasive in scientific computing and will become more so • What are the implications for teaching? • What are the implications for software development?

  39. Thanks Patrick Amestoy, Erik Boman, Iain Duff, Mary Ann Branch Freeman, Bruce Hendrickson, Esmond Ng, Alex Pothen, Padma Raghavan, Ronitt Rubinfeld, Rob Schreiber, Sivan Toledo, Paul Van Dooren, Steve Vavasis, Santosh Vempala, ...
