
Presentation Transcript


  1. Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr09

  2. Outline • Motivation for Automatic Performance Tuning • Results for sparse matrix kernels • Sparse Matrix Vector Multiplication (SpMV) • Sequential and Multicore results • OSKI = Optimized Sparse Kernel Interface • Tuning Entire Sparse Solvers

  3. Berkeley Benchmarking and OPtimization (BeBOP) • Prof. Katherine Yelick • Current members • Kaushik Datta, Mark Hoemmen, Marghoob Mohiyuddin, Shoaib Kamil, Rajesh Nishtala, Vasily Volkov, Sam Williams, … • Previous members • Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, many undergrads, … • Many results here from current and previous students • bebop.cs.berkeley.edu

  4. Automatic Performance Tuning • Goal: Let the machine do the hard work of writing fast code • What does tuning of dense BLAS, FFTs, signal processing, … have in common? • Can do the tuning off-line: once per architecture, algorithm • Can take as much time as necessary (hours, a week…) • At run-time, algorithm choice may depend only on a few parameters (matrix dimensions, size of FFT, etc.) • Can’t always do tuning off-line • Algorithm and implementation may strongly depend on data only known at run-time • Ex: Sparse matrix nonzero pattern determines both the best data structure and the best implementation of sparse-matrix-vector-multiplication (SpMV) • Part of the search for the best algorithm must be done (very quickly!) at run-time

  5. Source: Accelerator Cavity Design Problem (Ko via Husbands)

  6. Linear Programming Matrix

  7. A Sparse Matrix You Encounter Every Day

  8. SpMV with Compressed Sparse Row (CSR) Storage • Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)
  for each row i
    for k = ptr[i] to ptr[i+1]-1 do
      y[i] = y[i] + val[k]*x[ind[k]]
  • Only 2 flops per 2 mem_refs: limited by getting data from memory (a C sketch follows below)
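To make the kernel concrete, here is a minimal C sketch of CSR SpMV (the function name and the assumption that ptr holds n+1 row pointers are mine, not from the slide):

    /* Sketch: y = y + A*x with A stored in CSR format.
     * ptr has n+1 row pointers; ind/val hold column indices and values row by row. */
    void spmv_csr(int n, const int *ptr, const int *ind,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {                 /* for each row i */
            double yi = y[i];
            for (int k = ptr[i]; k < ptr[i+1]; k++)   /* 2 flops per (ind, val) pair read */
                yi += val[k] * x[ind[k]];
            y[i] = yi;
        }
    }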

  9. Example: The Difficulty of Tuning • n = 21200 • nnz = 1.5 M • kernel: SpMV • Source: NASA structural analysis problem

  10. Example: The Difficulty of Tuning • n = 21200 • nnz = 1.5 M • kernel: SpMV • Source: NASA structural analysis problem • 8x8 dense substructure: exploit this to limit #mem_refs

  11. Speedups on Itanium 2: The Need for Search [register-profile figure; Mflop/s for all r x c block sizes, with the reference implementation and the best block size (4x2) marked]

  12. Register Profile: Itanium 2 [figure: Mflop/s over all r x c block sizes, ranging from 190 Mflop/s to 1190 Mflop/s]

  13. Register Profiles: IBM and Intel IA-64 [four register-profile figures; best and worst Mflop/s per machine]
  • Power3 - 17%: 252 Mflop/s (best), 122 Mflop/s (worst)
  • Power4 - 16%: 820 Mflop/s (best), 459 Mflop/s (worst)
  • Itanium 1 - 8%: 247 Mflop/s (best), 107 Mflop/s (worst)
  • Itanium 2 - 33%: 1.2 Gflop/s (best), 190 Mflop/s (worst)

  14. Register Profiles: Sun and Intel x86 [four register-profile figures; best and worst Mflop/s per machine]
  • Ultra 2i - 11%: 72 Mflop/s (best), 35 Mflop/s (worst)
  • Ultra 3 - 5%: 90 Mflop/s (best), 50 Mflop/s (worst)
  • Pentium III - 21%: 108 Mflop/s (best), 42 Mflop/s (worst)
  • Pentium III-M - 15%: 122 Mflop/s (best), 58 Mflop/s (worst)

  15. Another example of tuning challenges • More complicated non-zero structure in general • N = 16614 • NNZ = 1.1M

  16. Zoom in to top corner • More complicated non-zero structure in general • N = 16614 • NNZ = 1.1M

  17. 3x3 blocks look natural, but… • More complicated non-zero structure in general • Example: 3x3 blocking • Logical grid of 3x3 cells • But would lead to lots of “fill-in”

  18. Extra Work Can Improve Efficiency! • More complicated non-zero structure in general • Example: 3x3 blocking • Logical grid of 3x3 cells • Fill in explicit zeros • Unroll 3x3 block multiplies • “Fill ratio” = 1.5 • On Pentium III: 1.5x speedup! • Actual Mflop rate 1.5² = 2.25x higher (a blocked-CSR sketch follows below)
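A hedged sketch of the resulting 3x3 register-blocked (BCSR) multiply is below; the array names b_ptr/b_ind/b_val and the row-major 3x3 tile layout are illustrative assumptions, not OSKI's internal format:

    /* Sketch: y = y + A*x for 3x3 blocked CSR (BCSR).
     * Each stored block is a dense 3x3 tile with explicit zeros filled in;
     * b_ind[b] is the first column of block b, b_val holds 9 values per block (row-major). */
    void spmv_bcsr_3x3(int n_block_rows, const int *b_ptr, const int *b_ind,
                       const double *b_val, const double *x, double *y)
    {
        for (int I = 0; I < n_block_rows; I++) {
            double y0 = 0.0, y1 = 0.0, y2 = 0.0;      /* keep the 3 partial sums in registers */
            for (int b = b_ptr[I]; b < b_ptr[I+1]; b++) {
                const double *v  = &b_val[9*b];
                const double *xp = &x[b_ind[b]];
                /* fully unrolled 3x3 block multiply */
                y0 += v[0]*xp[0] + v[1]*xp[1] + v[2]*xp[2];
                y1 += v[3]*xp[0] + v[4]*xp[1] + v[5]*xp[2];
                y2 += v[6]*xp[0] + v[7]*xp[1] + v[8]*xp[2];
            }
            y[3*I+0] += y0;  y[3*I+1] += y1;  y[3*I+2] += y2;
        }
    }

One column index per block instead of one per nonzero is what cuts the memory traffic, at the price of the extra flops counted by the fill ratio.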

  19. Automatic Register Block Size Selection • Selecting the r x c block size • Off-line benchmark • Precompute Mflops(r,c) using dense A for each r x c • Once per machine/architecture • Run-time “search” • Sample A to estimate Fill(r,c) for each r x c • Run-time heuristic model • Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c)
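A hedged C sketch of this selection step follows; the mflops table from the off-line benchmark and the estimate_fill_rc sampling routine (sketched after the next slide) are assumed names, not OSKI's API:

    /* Sketch: pick (r,c) minimizing estimated time ~ Fill(r,c) / Mflops(r,c).
     * mflops[r][c]: off-line benchmark of a dense matrix stored in r x c blocked format (assumed table).
     * estimate_fill_rc(): run-time sampled fill-ratio estimate (see the next sketch). */
    #define RMAX 8
    #define CMAX 8

    double estimate_fill_rc(int n, const int *ptr, const int *ind, int r, int c, double s);

    void choose_block_size(int n, const int *ptr, const int *ind,
                           double mflops[RMAX+1][CMAX+1],
                           int *best_r, int *best_c)
    {
        double best_time = 1e300;
        *best_r = *best_c = 1;
        for (int r = 1; r <= RMAX; r++)
            for (int c = 1; c <= CMAX; c++) {
                /* 0.02 = sample ~2% of the block rows; illustrative value */
                double t = estimate_fill_rc(n, ptr, ind, r, c, 0.02) / mflops[r][c];
                if (t < best_time) { best_time = t; *best_r = r; *best_c = c; }
            }
    }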

  20. Accurate and Efficient Adaptive Fill Estimation • Idea: Sample the matrix • Fraction of matrix to sample: s ∈ [0,1] • Control cost = O(s * nnz) by controlling s • Search at run-time: the constant matters! • Control s automatically by computing statistical confidence intervals, i.e., by monitoring variance • Cost of tuning • Lower bound: converting the matrix costs the equivalent of 5 to 40 unblocked SpMVs • Heuristic: 1 to 11 SpMVs • Tuning is only useful when we do many SpMVs • Common case, e.g., in sparse solvers
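The sampling itself might look like the following sketch (sampling whole block rows and using a fixed fraction s are my simplifications; as the slide notes, the real estimator also monitors variance to choose s automatically):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch: estimate Fill(r,c) = (entries stored after r x c blocking) / nnz
     * by examining only a random fraction s of the block rows of a CSR matrix. */
    double estimate_fill_rc(int n, const int *ptr, const int *ind,
                            int r, int c, double s)
    {
        long nnz_seen = 0, blocks_seen = 0;
        int n_block_rows = (n + r - 1) / r;
        int n_block_cols = (n + c - 1) / c;
        char *block_used = calloc(n_block_cols, 1);

        for (int I = 0; I < n_block_rows; I++) {
            if ((double)rand() / RAND_MAX > s) continue;      /* skip ~(1-s) of block rows */
            memset(block_used, 0, n_block_cols);
            for (int i = r*I; i < r*(I+1) && i < n; i++)
                for (int k = ptr[i]; k < ptr[i+1]; k++) {
                    nnz_seen++;
                    int J = ind[k] / c;                        /* block column touched */
                    if (!block_used[J]) { block_used[J] = 1; blocks_seen++; }
                }
        }
        free(block_used);
        /* each touched block stores r*c values; fill = stored / true nonzeros */
        return nnz_seen ? (double)(blocks_seen * r * c) / nnz_seen : 1.0;
    }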

  21. Accuracy of the Tuning Heuristics (1/4) See p. 375 of Vuduc’s thesis for matrices NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)

  22. Accuracy of the Tuning Heuristics (2/4)

  23. Accuracy of the Tuning Heuristics (3/4) DGEMV

  24. Upper Bounds on Performance for Blocked SpMV • P = (flops) / (time) • Flops = 2 * nnz(A) • Upper bound on speed: two main assumptions • 1. Count memory ops only (streaming) • 2. Count only compulsory and capacity misses: ignore conflicts • Account for line sizes • Account for matrix size and nnz • Charge minimum access “latency” α_i at cache level L_i and α_mem at memory • e.g., Saavedra-Barrera and PMaC MAPS benchmarks • Upper bound on time: assume all references to x( ) miss
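Combining the slide's assumptions, one hedged way to write the bound (the hit/miss notation H_i, M is mine, roughly following Vuduc's model, and is not spelled out on the slide):

    T_{\mathrm{lower}} \;=\; \sum_{i=1}^{\kappa} \alpha_i \, H_i \;+\; \alpha_{\mathrm{mem}} \, M ,
    \qquad
    P_{\mathrm{upper}} \;=\; \frac{2\,\mathrm{nnz}(A)}{T_{\mathrm{lower}}} ,

where H_i counts loads that hit in cache level L_i, M counts the compulsory and capacity misses that go to memory, and α_i, α_mem are the minimum access latencies from the slide.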

  25. Example: Bounds on Itanium 2

  26. Example: Bounds on Itanium 2

  27. Example: Bounds on Itanium 2

  28. Summary of Other Performance Optimizations • Optimizations for SpMV • Register blocking (RB): up to 4x over CSR • Variable block splitting: 2.1x over CSR, 1.8x over RB • Diagonals: 2x over CSR • Reordering to create dense structure + splitting: 2x over CSR • Symmetry: 2.8x over CSR, 2.6x over RB • Cache blocking: 2.8x over CSR • Multiple vectors (SpMM): 7x over CSR (a sketch follows below) • And combinations… • Sparse triangular solve • Hybrid sparse/dense data structure: 1.8x over CSR • Higher-level kernels • A·Aᵀ·x, Aᵀ·A·x: 4x over CSR, 1.8x over RB • A²·x: 2x over CSR, 1.5x over RB • [A·x, A²·x, A³·x, …, Aᵏ·x] … more to say later
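As one concrete instance from this list, multiplying by several vectors at once (SpMM) reuses each matrix entry across all right-hand sides; a hedged C sketch, assuming a square n x n matrix and column-major storage of the nvec vectors in X and Y (layout and names are mine):

    /* Sketch: CSR SpMM, Y = Y + A*X for nvec right-hand sides.
     * Column v of X (and Y) starts at offset v*n.
     * Each val[k]/ind[k] pair is loaded once and reused for all nvec vectors. */
    void spmm_csr(int n, int nvec, const int *ptr, const int *ind,
                  const double *val, const double *X, double *Y)
    {
        for (int i = 0; i < n; i++)
            for (int k = ptr[i]; k < ptr[i+1]; k++) {
                double a = val[k];
                int j = ind[k];
                for (int v = 0; v < nvec; v++)        /* amortize the load of a and j */
                    Y[(long)v*n + i] += a * X[(long)v*n + j];
            }
    }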

  29. Potential Impact on Applications: Omega3P • Application: accelerator cavity design [Ko] • Relevant optimization techniques • Symmetric storage • Register blocking • Reordering, to create more dense blocks • Reverse Cuthill-McKee ordering to reduce bandwidth • Do Breadth-First-Search, number nodes in reverse order visited (sketched below) • Traveling Salesman Problem-based ordering to create blocks • Nodes = columns of A • Weights(u, v) = number of nonzeros u and v have in common • Tour = ordering of columns • Choose maximum weight tour • See [Pinar & Heath ’97] • 2.1x speedup on IBM Power 4
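A much-simplified C sketch of the RCM step (plain BFS from node 0, neighbors taken in stored order; a full Cuthill-McKee implementation would also start from a pseudo-peripheral node and visit neighbors in order of increasing degree):

    #include <stdlib.h>

    /* Sketch: Reverse Cuthill-McKee-style ordering of a symmetric sparse graph
     * given in CSR form (ptr/ind). perm[k] = original index of the node placed k-th. */
    void rcm_order(int n, const int *ptr, const int *ind, int *perm)
    {
        int *visited = calloc(n, sizeof(int));
        int *queue   = malloc(n * sizeof(int));
        int head = 0, tail = 0;

        for (int start = 0; start < n; start++) {      /* handle disconnected components */
            if (visited[start]) continue;
            visited[start] = 1;
            queue[tail++] = start;
            while (head < tail) {                      /* standard BFS */
                int u = queue[head++];
                for (int k = ptr[u]; k < ptr[u+1]; k++) {
                    int v = ind[k];
                    if (!visited[v]) { visited[v] = 1; queue[tail++] = v; }
                }
            }
        }
        for (int k = 0; k < n; k++)                    /* number nodes in reverse visit order */
            perm[k] = queue[n - 1 - k];

        free(visited); free(queue);
    }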

  30. Source: Accelerator Cavity Design Problem (Ko via Husbands)

  31. Post-RCM Reordering

  32. 100x100 Submatrix Along Diagonal

  33. “Microscopic” Effect of RCM Reordering Before: Green + Red After: Green + Blue

  34. “Microscopic” Effect of Combined RCM+TSP Reordering Before: Green + Red After: Green + Blue

  35. (Omega3P)

  36. Optimized Sparse Kernel Interface - OSKI • Provides sparse kernels automatically tuned for user’s matrix & machine • BLAS-style functionality: SpMV, A·x & Aᵀ·y, TrSV • Hides complexity of run-time tuning • Includes new, faster locality-aware kernels: AᵀA·x, Aᵏ·x • Faster than standard implementations • Up to 4x faster matvec, 1.8x trisolve, 4x AᵀA·x • For “advanced” users & solver library writers • Available as stand-alone library (OSKI 1.0.1h, 6/07) • Available as PETSc extension (OSKI-PETSc .1d, 3/06) • bebop.cs.berkeley.edu/oski

  37. How the OSKI Tunes (Overview)
  • Install-time (off-line): 1. Build for target architecture; 2. Benchmark → generated code variants + benchmark data
  • Run-time (library, inside the application): workload from program monitoring, history, and the user's matrix feed 1. Evaluate heuristic models and 2. Select data structure & code → to user: matrix handle for kernel calls
  • Extensibility: advanced users may write & dynamically add “code variants” and “heuristic models” to the system

  38. How to Call OSKI: Basic Usage • May gradually migrate existing apps • Step 1: “Wrap” existing data structures • Step 2: Make BLAS-like kernel calls
  int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
  double* x = …, *y = …; /* Let x and y be two dense vectors */
  /* Compute y = β·y + α·A·x, 500 times */
  for( i = 0; i < 500; i++ )
    my_matmult( ptr, ind, val, α, x, β, y );

  39. How to Call OSKI: Basic Usage • May gradually migrate existing apps • Step 1: “Wrap” existing data structures • Step 2: Make BLAS-like kernel calls
  int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
  double* x = …, *y = …; /* Let x and y be two dense vectors */
  /* Step 1: Create OSKI wrappers around this data */
  oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
  oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
  oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);
  /* Compute y = β·y + α·A·x, 500 times */
  for( i = 0; i < 500; i++ )
    my_matmult( ptr, ind, val, α, x, β, y );

  40. How to Call OSKI: Basic Usage • May gradually migrate existing apps • Step 1: “Wrap” existing data structures • Step 2: Make BLAS-like kernel calls
  int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
  double* x = …, *y = …; /* Let x and y be two dense vectors */
  /* Step 1: Create OSKI wrappers around this data */
  oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
  oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
  oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);
  /* Compute y = β·y + α·A·x, 500 times */
  for( i = 0; i < 500; i++ )
    oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); /* Step 2 */

  41. How to Call OSKI: Tune with Explicit Hints • User calls “tune” routine • May provide explicit tuning hints (OPTIONAL)
  oski_matrix_t A_tunable = oski_CreateMatCSR( … );
  /* … */
  /* Tell OSKI we will call SpMV 500 times (workload hint) */
  oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
  /* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
  oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);
  oski_TuneMat(A_tunable); /* Ask OSKI to tune */
  for( i = 0; i < 500; i++ )
    oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

  42. How the User Calls OSKI: Implicit Tuning • Ask library to infer workload • Library profiles all kernel calls • May periodically re-tune
  oski_matrix_t A_tunable = oski_CreateMatCSR( … );
  /* … */
  for( i = 0; i < 500; i++ ) {
    oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
    oski_TuneMat(A_tunable); /* Ask OSKI to tune */
  }

  43. Multicore SMPs Used for Tuning SpMV Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade

  44. Multicore SMPs with Conventional cache-based memory hierarchy Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade

  45. Multicore SMPs with local store-based memory hierarchy Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade

  46. Multicore SMPs with CMT = Chip-MultiThreading Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) IBM QS20 Cell Blade Sun T2+ T5140 (Victoria Falls)

  47. Multicore SMPs: Number of threads
  • Intel Xeon E5345 (Clovertown): 8 threads
  • AMD Opteron 2356 (Barcelona): 8 threads
  • Sun T2+ T5140 (Victoria Falls): 128 threads
  • IBM QS20 Cell Blade: 16 threads (*SPEs only)

  48. Multicore SMPs: peak double precision flops
  • Intel Xeon E5345 (Clovertown): 75 GFlop/s
  • AMD Opteron 2356 (Barcelona): 74 GFlop/s
  • Sun T2+ T5140 (Victoria Falls): 19 GFlop/s
  • IBM QS20 Cell Blade: 29 GFlop/s (*SPEs only)

  49. Multicore SMPs: total DRAM bandwidth
  • Intel Xeon E5345 (Clovertown): 21 GB/s (read), 10 GB/s (write)
  • AMD Opteron 2356 (Barcelona): 21 GB/s
  • Sun T2+ T5140 (Victoria Falls): 42 GB/s (read), 21 GB/s (write)
  • IBM QS20 Cell Blade: 51 GB/s

  50. Multicore SMPs with Non-Uniform Memory Access (NUMA) Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade
