
Sparse Matrices in Matlab



  1. Sparse Matrices in Matlab
  John R. Gilbert, Xerox Palo Alto Research Center
  with Cleve Moler (MathWorks) and Rob Schreiber (HP Labs)

  2. Design principles
  • All operations should give the same results for sparse and full matrices (almost all)
  • Sparse matrices are never created automatically, but once created they propagate
  • Performance is important -- but usability, simplicity, completeness, and robustness are more important
  • Storage for a sparse matrix should be O(nonzeros)
  • Time for a sparse operation should be O(flops), as nearly as possible
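These principles carried over into later sparse libraries. As a rough illustration (not part of the original slides), SciPy's sparse module follows the same explicit-creation and propagation rules:

```python
import numpy as np
from scipy import sparse

A = np.eye(4)                   # a full (dense) matrix
S = sparse.csc_matrix(A)        # sparse storage is requested explicitly, never automatic
print(sparse.issparse(S))       # True
print(sparse.issparse(S @ S))   # sparsity propagates through arithmetic: True
print(S.nnz)                    # storage is proportional to the nonzero count: 4
```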

  3. Two storage classes for matrices
  • Full:
    • 2-dimensional array of real or complex numbers
    • matrix uses (nrows * ncols) memory
  • Sparse:
    • compressed column storage
    • matrix uses about (2*nonzeros + ncols) memory
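Compressed column storage keeps one pointer per column plus a row index and a value per nonzero. SciPy's `csc_matrix` uses the same layout, so it can serve as a hedged illustration (the matrix below is mine, not from the slides):

```python
import numpy as np
from scipy import sparse

A = np.array([[1, 0, 2],
              [0, 0, 3],
              [4, 5, 6]])
S = sparse.csc_matrix(A)
# One pointer per column, plus a row index and a value per nonzero:
# about 2*nnz + ncols words in total.
print(S.indptr)   # column pointers: [0 2 3 6]
print(S.indices)  # row indices of nonzeros, column by column: [0 2 2 0 1 2]
print(S.data)     # values: [1 4 5 2 3 6]
```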

  4. Matrix arithmetic and indexing
  • A+B, A*B, A.*B, A', ~A, A|B, . . .
  • A(2:n, 2:n) = A(2:n, 2:n) + u * v'
  • A = [B C ; D E];
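As a sketch of the same operations outside Matlab (the vectors and sizes below are my own, illustrative choices), SciPy expresses the rank-1 submatrix update and block concatenation like this:

```python
import numpy as np
from scipy import sparse

n = 5
A = sparse.lil_matrix(np.eye(n))   # LIL format allows cheap indexed assignment
u = np.arange(1.0, n)              # illustrative update vectors of length n-1
v = np.ones(n - 1)
# Matlab's A(2:n, 2:n) = A(2:n, 2:n) + u*v', written 0-based:
A[1:, 1:] = A[1:, 1:].toarray() + np.outer(u, v)
B = sparse.csc_matrix(A)           # back to compressed form for fast arithmetic
C = sparse.bmat([[B, B], [B, B]])  # the analogue of A = [B C ; D E]
print(C.shape)                     # (10, 10)
```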

  5. Matrix factorizations
  • Cholesky: R = chol(A);
    • simple left-looking column algorithm
  • Nonsymmetric LU: [L,U,P] = lu(A);
    • left-looking “GPMOD”, depth-first search, symmetric pruning
  • Orthogonal: [Q,R] = qr(A);
    • George-Heath algorithm: row-wise Givens rotations
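For the nonsymmetric LU case, a hedged SciPy analogue is `splu`, which factors a sparse matrix and exposes a solve method (the test matrix here is mine; SciPy's core has no sparse `chol` or `qr`, so only the LU case is shown):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

A = sparse.csc_matrix(np.array([[4., 1., 0.],
                                [1., 4., 1.],
                                [0., 1., 4.]]))
lu = splu(A)                   # sparse LU factorization with pivoting
b = np.array([1., 2., 3.])
x = lu.solve(b)                # reuse the factors to solve A x = b
print(np.allclose(A @ x, b))   # True
```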

  6. Fill-reducing matrix permutations
  • Nonsymmetric approximate minimum degree: p = colamd(A);
    • column permutation: lu(A(:,p)) often sparser than lu(A)
    • also for qr factorization
    • [Davis, Larimore, Gilbert, Ng]
  • Symmetric approximate minimum degree: p = symamd(A);
    • symmetric permutation: chol(A(p,p)) often sparser than chol(A)
  • Reverse Cuthill-McKee: p = symrcm(A);
    • A(p,p) often has smaller bandwidth than A
    • similar to Sparspak RCM
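Of these orderings, reverse Cuthill-McKee has a direct SciPy counterpart. As an illustrative sketch (the scrambled tridiagonal example is mine): a path-structured matrix has bandwidth 1, a random symmetric permutation hides that, and RCM recovers a small bandwidth.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Largest |row - column| distance over the nonzeros of M."""
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

n = 10
T = sparse.diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)],
                 [-1, 0, 1], format='csr')   # tridiagonal: bandwidth 1
q = np.random.default_rng(0).permutation(n)
A = T[q][:, q]                               # scrambled: large bandwidth
p = reverse_cuthill_mckee(sparse.csr_matrix(A), symmetric_mode=True)
print(bandwidth(A), bandwidth(A[p][:, p]))   # A(p,p) has much smaller bandwidth
```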

  7. Matching and block triangular form
  • Dulmage-Mendelsohn decomposition:
    • bipartite matching followed by strongly connected components
  • Square, full-rank A: [p, q, r] = dmperm(A);
    • A(p,q) has nonzero diagonal and is in block upper triangular form
    • also, strongly connected components of a directed graph
  • Arbitrary A: [p, q, r, s] = dmperm(A);
    • maximum-size matching in a bipartite graph
    • minimum-size vertex cover in a bipartite graph
    • decomposition into strong Hall blocks
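The second stage, strongly connected components, can be sketched with SciPy's graph routines (the 4-node directed graph below is my own example): grouping rows and columns by component label is what yields the block triangular form.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import connected_components

# Directed graph with edges 0->1, 1->0, 0->2, 2->3, 3->2:
# its strongly connected components are {0, 1} and {2, 3}.
G = sparse.csr_matrix(np.array([[0, 1, 1, 0],
                                [1, 0, 0, 0],
                                [0, 0, 0, 1],
                                [0, 0, 1, 0]]))
ncomp, labels = connected_components(G, directed=True, connection='strong')
print(ncomp)    # 2
print(labels)   # nodes 0,1 share one label; nodes 2,3 share the other
```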

  8. Solving linear equations: x = A \ b;
  • Is A square? no => use QR to solve least squares problem
  • Is A triangular or permuted triangular? yes => sparse triangular solve
  • Is A symmetric with positive diagonal elements? yes => attempt Cholesky after symmetric minimum degree
  • Otherwise => use LU on A(:, colamd(A))
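The dispatch above can be sketched as a small Python function (a hypothetical `backslash` helper, not Matlab's actual implementation; SciPy's `spsolve` handles the triangular and general cases, and `lsqr` stands in for sparse QR on rectangular systems):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, lsqr

def backslash(A, b):
    """Rough sketch of the x = A \\ b polyalgorithm for sparse A."""
    m, n = A.shape
    if m != n:                                            # not square:
        return lsqr(A, b)[0]                              # least squares
    if sparse.tril(A, -1).nnz == 0 or sparse.triu(A, 1).nnz == 0:
        return spsolve(A, b)                              # triangular solve
    if (A != A.T).nnz == 0 and np.all(A.diagonal() > 0):
        return spsolve(A, b)                              # Cholesky candidate
    return spsolve(A, b)                                  # general LU

A = sparse.csc_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
b = np.array([1.0, 2.0])
x = backslash(A, b)
print(np.allclose(A @ x, b))   # True
```

Matlab's real polyalgorithm also applies fill-reducing orderings (symmetric minimum degree before Cholesky, colamd before LU); the sketch keeps only the branching structure.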

  9. Extensions (due to various people)
  • Matlab code (“m-files”):
    • iterative linear solvers
    • sparse optimization toolbox
    • . . .
  • C or Fortran code with Matlab interfaces (“mex-files”):
    • sparse eigenvalues (based on ARPACK)
    • graph partitioning (based on METIS or Chaco)
    • supernodal sparse linear solver (based on SuperLU)
    • . . .
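SciPy's `eigs` wraps the same ARPACK library mentioned for the sparse-eigenvalue extension; as a small illustrative sketch (the diagonal test matrix is mine):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigs   # ARPACK-based sparse eigensolver

A = sparse.diags(np.arange(1.0, 7.0))  # diagonal matrix with eigenvalues 1..6
vals, vecs = eigs(A, k=1)              # largest-magnitude eigenvalue
print(np.real(vals[0]))                # close to 6.0
```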
