
Runtime Data Flow Graph Scheduling of Matrix Computations






Presentation Transcript


  1. Runtime Data Flow Graph Scheduling of Matrix Computations Ernie Chan

  2. Teaser • [performance plot: achieved performance against the theoretical peak, with an arrow marking "Better"]

  3. Goals • Programmability • Use tools provided by FLAME • Parallelism • Directed acyclic graph (DAG) scheduling

  4. Outline • Introduction • SuperMatrix • Scheduling • Performance • Conclusion

  5. SuperMatrix • Formal Linear Algebra Method Environment (FLAME) • High-level abstractions for expressing linear algebra algorithms • Cholesky Factorization

  6. SuperMatrix • Cholesky Factorization • Iteration 1 • CHOL0: A_{0,0} := Chol(A_{0,0})

  7. SuperMatrix • Cholesky Factorization • Iteration 1 • CHOL0: A_{0,0} := Chol(A_{0,0}) • TRSM1: A_{1,0} := A_{1,0} A_{0,0}^{-T} • TRSM2: A_{2,0} := A_{2,0} A_{0,0}^{-T}

  8. SuperMatrix • Cholesky Factorization • Iteration 1 • CHOL0: A_{0,0} := Chol(A_{0,0}) • TRSM1: A_{1,0} := A_{1,0} A_{0,0}^{-T} • TRSM2: A_{2,0} := A_{2,0} A_{0,0}^{-T} • SYRK3: A_{1,1} := A_{1,1} - A_{1,0} A_{1,0}^{T} • GEMM4: A_{2,1} := A_{2,1} - A_{2,0} A_{1,0}^{T} • SYRK5: A_{2,2} := A_{2,2} - A_{2,0} A_{2,0}^{T}

  9. SuperMatrix • Cholesky Factorization • Iteration 2 • CHOL6: A_{1,1} := Chol(A_{1,1}) • TRSM7: A_{2,1} := A_{2,1} A_{1,1}^{-T} • SYRK8: A_{2,2} := A_{2,2} - A_{2,1} A_{2,1}^{T}

  10. SuperMatrix • Cholesky Factorization • Iteration 3 • CHOL9: A_{2,2} := Chol(A_{2,2})

  11. SuperMatrix • Cholesky Factorization • [figure: the matrix viewed as a 3×3 grid of blocks, one DAG node per block operation]
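The task DAG above falls out of the standard blocked right-looking Cholesky algorithm. As a minimal sketch (not the FLAME/SuperMatrix API itself), the loop nest below enumerates the CHOL/TRSM/SYRK/GEMM tasks for an n×n grid of blocks; with n = 3 it reproduces exactly the numbering CHOL0 … CHOL9 from the slides. The printing format is illustrative.

    #include <cstdio>

    // Enumerate the tasks of a blocked right-looking Cholesky
    // factorization on an n x n grid of blocks. Each printed line
    // corresponds to one node of the DAG on the preceding slides.
    int main() {
        const int n = 3;   // number of blocks per dimension
        int id = 0;        // running task number
        for (int k = 0; k < n; ++k) {
            // Factor the diagonal block: A[k][k] := Chol(A[k][k])
            std::printf("CHOL%d: A%d,%d := Chol(A%d,%d)\n", id++, k, k, k, k);
            // Triangular solves below the diagonal
            for (int i = k + 1; i < n; ++i)
                std::printf("TRSM%d: A%d,%d := A%d,%d A%d,%d^-T\n",
                            id++, i, k, i, k, k, k);
            // Update the trailing submatrix
            for (int j = k + 1; j < n; ++j)
                for (int i = j; i < n; ++i) {
                    if (i == j)   // diagonal block: symmetric rank-k update
                        std::printf("SYRK%d: A%d,%d := A%d,%d - A%d,%d A%d,%d^T\n",
                                    id++, j, j, j, j, j, k, j, k);
                    else          // off-diagonal block: general matrix multiply
                        std::printf("GEMM%d: A%d,%d := A%d,%d - A%d,%d A%d,%d^T\n",
                                    id++, i, j, i, j, i, k, j, k);
                }
        }
    }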

  12. SuperMatrix • Separation of Concerns • Analyzer • Decomposes subproblems into component tasks • Stores tasks sequentially in a global task queue • Internally calculates all dependencies between tasks, which form a DAG, using only the input and output parameters of each task • Dispatcher • Spawns threads • Schedules and dispatches tasks to threads in parallel
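To make the analyzer's dependence computation concrete, here is a hypothetical sketch, not SuperMatrix's internal representation: each task carries a read set and a write set of block coordinates, and an edge is added for every flow (read-after-write), anti (write-after-read), and output (write-after-write) conflict with an earlier task in program order. The Task struct and Block encoding are illustrative.

    #include <set>
    #include <utility>
    #include <vector>

    using Block = std::pair<int, int>;   // (row, col) coordinates of a block

    struct Task {
        std::set<Block> in;              // blocks this task reads
        std::set<Block> out;             // blocks this task writes
        std::vector<int> successors;     // tasks that must wait for this one
        int unsatisfied = 0;             // incoming dependence count
    };

    // Scan tasks in program order; any overlap between the later task's
    // reads/writes and an earlier task's writes/reads creates a DAG edge.
    void analyze(std::vector<Task>& tasks) {
        auto overlaps = [](const std::set<Block>& a, const std::set<Block>& b) {
            for (const Block& x : a)
                if (b.count(x)) return true;
            return false;
        };
        for (size_t j = 0; j < tasks.size(); ++j)
            for (size_t i = 0; i < j; ++i) {
                bool flow   = overlaps(tasks[i].out, tasks[j].in);   // RAW
                bool anti   = overlaps(tasks[i].in,  tasks[j].out);  // WAR
                bool output = overlaps(tasks[i].out, tasks[j].out);  // WAW
                if (flow || anti || output) {
                    tasks[i].successors.push_back(static_cast<int>(j));
                    ++tasks[j].unsatisfied;  // task j is ready at count zero
                }
            }
    }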

  13. Outline • Introduction • SuperMatrix • Scheduling • Performance • Conclusion

  14. Scheduling • Dispatcher (a runnable sketch follows below)
      foreach task in DAG do
        if task is ready then
          Enqueue task
        end
      end
      while tasks are available do
        Dequeue task
        Execute task
        foreach dependent task do
          Update dependent task
          if dependent task is ready then
            Enqueue dependent task
          end
        end
      end

  15. Scheduling • Dispatcher (same pseudocode as slide 14)
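A minimal thread-based rendering of this dispatcher loop, assuming the Task representation from the analyzer sketch above; the single shared FIFO queue, global variables, and names are illustrative, not the SuperMatrix implementation.

    #include <atomic>
    #include <deque>
    #include <mutex>
    #include <vector>

    struct Task {
        std::vector<Task*> successors;
        std::atomic<int> unsatisfied{0};
        void execute() { /* invoke the BLAS kernel for this task */ }
    };

    std::deque<Task*> ready_queue;       // single shared ready queue (FIFO)
    std::mutex        queue_mutex;
    std::atomic<int>  tasks_left{0};     // total tasks not yet executed

    void dispatcher() {
        while (tasks_left.load() > 0) {
            Task* task = nullptr;
            {   // Dequeue a ready task, if any
                std::lock_guard<std::mutex> lock(queue_mutex);
                if (!ready_queue.empty()) {
                    task = ready_queue.front();
                    ready_queue.pop_front();
                }
            }
            if (!task) continue;         // nothing ready yet; retry
            task->execute();
            --tasks_left;
            // Update dependent tasks; enqueue those that become ready
            for (Task* dep : task->successors)
                if (--dep->unsatisfied == 0) {
                    std::lock_guard<std::mutex> lock(queue_mutex);
                    ready_queue.push_back(dep);
                }
        }
    }

After the analyzer seeds ready_queue with the tasks whose unsatisfied count is zero and sets tasks_left, each of the p spawned threads simply runs dispatcher().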

  16. Scheduling • Supermarket • One line per cashier • Efficient enqueue and dequeue • Schedule depends on the task-to-thread assignment • Bank • One line for all tellers • Enqueue and dequeue become bottlenecks • Dynamic dispatching of tasks to threads

  17. Scheduling • Single Queue • Set of all ready and available tasks • FIFO, priority • [figure: processing elements PE0 … PE{p-1} enqueueing to and dequeueing from one shared queue]

  18. Scheduling • Multiple Queues • Work stealing, data affinity • [figure: one queue per processing element PE0 … PE{p-1}]

  19. Scheduling • Work Stealing
      foreach task in DAG do
        if task is ready then
          Enqueue task
        end
      end
      while tasks are available do
        Dequeue task
        if task ≠ Ø then
          Execute task
          Update dependent tasks …
        else
          Steal task
        end
      end
  • Enqueue • Place all dependent tasks on the queue of the same thread that executed the task • Steal • Select a random thread and remove a task from the tail of its queue
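A hedged sketch of the two queue operations with per-thread deques: the owner dequeues from the head of its own queue, and a thief removes from the tail of a random victim's queue. The locking scheme is illustrative; production work-stealing deques typically use lock-free algorithms.

    #include <deque>
    #include <mutex>
    #include <random>
    #include <vector>

    struct Task;                          // as in the earlier sketches

    struct WorkQueue {
        std::deque<Task*> tasks;
        std::mutex mutex;
    };

    std::vector<WorkQueue> queues(8);     // one queue per thread (p = 8 here)

    // Owner dequeues from the head of its own queue...
    Task* dequeue_own(int me) {
        std::lock_guard<std::mutex> lock(queues[me].mutex);
        if (queues[me].tasks.empty()) return nullptr;
        Task* t = queues[me].tasks.front();
        queues[me].tasks.pop_front();
        return t;
    }

    // ...and a thief removes a task from the tail of a random victim.
    Task* steal(int me, int p) {
        static thread_local std::mt19937 rng(std::random_device{}());
        int victim = std::uniform_int_distribution<int>(0, p - 1)(rng);
        if (victim == me) return nullptr;
        std::lock_guard<std::mutex> lock(queues[victim].mutex);
        if (queues[victim].tasks.empty()) return nullptr;
        Task* t = queues[victim].tasks.back();
        queues[victim].tasks.pop_back();
        return t;
    }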

  20. Scheduling • Data Affinity • Assign all tasks that write to a particular block to the same thread • Owner computes rule • 2D block cyclic distribution • [figure: block owner map for the 2D block cyclic distribution] • Execution Trace • Cholesky factorization: • Total time: 2D data affinity ≈ FIFO queue • Idle threads: 2D ≈ 27% and FIFO ≈ 17%
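Under the owner-computes rule, a 2D block cyclic distribution reduces to a simple index mapping; the function below is a standard formulation, not necessarily the exact assignment in the talk's figure.

    // 2D block cyclic owner mapping: threads form an r x c grid, and all
    // tasks that write block (i, j) run on thread (i mod r) * c + (j mod c).
    int owner(int i, int j, int r, int c) {
        return (i % r) * c + (j % c);
    }

For example, with a 2 × 2 thread grid (r = c = 2), the 3 × 3 block grid gets owners 0 1 0 / 2 3 2 / 0 1 0.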

  21. Scheduling • Data Granularity • Cost of a task >> cost of enqueue and dequeue • Single vs. Multiple Queues • FIFO queue improves load balance • 2D data affinity reduces data communication • Combine the best aspects of both!

  22. Scheduling • Cache Affinity • Single priority queue sorted by task height • Software cache • LRU • Line = block • Fully associative • [figure: processing elements PE0 … PE{p-1} sharing one queue, each with its own software cache $0 … ${p-1}]

  23. Scheduling • Cache Affinity • Dequeue • Search the queue for a task whose output block is in the software cache • If found, return that task • Otherwise return the head task • Enqueue • Insert task • Sort queue by task heights • Dispatcher • Updates the software caches via a cache coherency protocol with write invalidation
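A simplified rendering of the affinity-aware dequeue: each thread keeps a small, fully associative LRU "software cache" of block coordinates, and dequeue prefers a task whose output block is already cached. Data structures and the capacity are illustrative.

    #include <algorithm>
    #include <deque>
    #include <list>
    #include <utility>

    using Block = std::pair<int, int>;

    // Fully associative software cache with LRU replacement; one per
    // thread. A "line" holds the coordinates of one block.
    struct SoftwareCache {
        std::list<Block> lines;    // front = most recently used
        std::size_t capacity = 8;  // illustrative: blocks that fit in L3

        bool contains(const Block& b) const {
            return std::find(lines.begin(), lines.end(), b) != lines.end();
        }
        void touch(const Block& b) {   // insert or move to MRU position
            lines.remove(b);
            lines.push_front(b);
            if (lines.size() > capacity) lines.pop_back();  // evict LRU
        }
    };

    struct Task { Block output; /* ... */ };

    // Dequeue: scan the (height-sorted) queue for a task whose output
    // block is cached; fall back to the head task.
    Task* dequeue(std::deque<Task*>& queue, const SoftwareCache& cache) {
        if (queue.empty()) return nullptr;
        for (auto it = queue.begin(); it != queue.end(); ++it)
            if (cache.contains((*it)->output)) {
                Task* t = *it;
                queue.erase(it);
                return t;
            }
        Task* t = queue.front();   // no cached block found: take the head
        queue.pop_front();
        return t;
    }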

  24. Scheduling • Multiple Graphics Processing Units • View a GPU as a single accelerator rather than as hundreds of streaming processors • Must explicitly transfer data from main memory to the GPU • No hardware cache coherency is provided • Hybrid Execution Model • Execute tasks on both CPU and GPU

  25. Scheduling • Software Managed Cache Coherency • Use the software caches developed for cache affinity to handle data transfers! • Allow blocks to remain dirty on a GPU until they are requested by another GPU • Apply any scheduling algorithm when utilizing GPUs, particularly cache affinity
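A hedged sketch of the bookkeeping this implies: each block records which GPU, if any, holds the only up-to-date (dirty) copy, and before a task touches the block elsewhere, the runtime flushes that copy back to host memory. The state record and hooks are illustrative; a real runtime would perform the transfers with cudaMemcpy.

    #include <vector>

    // Per-block record for software managed cache coherency: a block is
    // either clean in host memory or dirty on exactly one GPU.
    struct BlockState {
        int dirty_on = -1;   // GPU id holding the only valid copy, or -1
    };

    // Called before device `dev` (CPU = -1) touches block `b`.
    void acquire(std::vector<BlockState>& state, int b, int dev, bool writes) {
        BlockState& s = state[b];
        if (s.dirty_on != -1 && s.dirty_on != dev) {
            // flush_to_host(s.dirty_on, b);  // device-to-host transfer
            // copy_to_device(dev, b);        // host-to-device transfer
            s.dirty_on = -1;                  // host copy is now current
        }
        if (writes) s.dirty_on = dev;  // write invalidation: dev owns block
    }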

  26. Outline • Introduction • SuperMatrix • Scheduling • Performance • Conclusion

  27. Performance • CPU Target Architecture • 4-socket 2.66 GHz Intel Dunnington • 24 cores • Linux and Windows • 16 MB shared L3 cache per socket • OpenMP • Intel compiler 11.1 • BLAS • Intel MKL 10.2

  28. Performance • Implementations • SuperMatrix + serial MKL • FIFO queue, cache affinity • FLAME + multithreaded MKL • Multithreaded MKL • PLASMA + serial MKL • Double-precision real floating-point arithmetic • Tuned block size

  29–30. Performance • [performance plots]

  31. Performance • Inversion of a Symmetric Positive Definite (SPD) Matrix • Cholesky factorization (CHOL) • Inversion of a triangular matrix (TRINV) • Triangular matrix multiplication by its transpose (TTMM)
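In formulas, the three stages compose as follows (standard identities, spelled out here for clarity):

    A = L L^{T} \quad (\text{CHOL}), \qquad
    R = L^{-1} \quad (\text{TRINV}), \qquad
    A^{-1} = (L L^{T})^{-1} = L^{-T} L^{-1} = R^{T} R \quad (\text{TTMM}).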

  32. Performance • Inversion of an SPD Matrix • [performance plot]

  33–38. Performance • [performance plots]

  39. Performance • Generalized Eigenproblem A x = λ B x, where A is symmetric and B is symmetric positive definite • Cholesky Factorization B = L L^{T}, where L is a lower triangular matrix, so that A x = λ L L^{T} x

  40. Performance then multiply the equation by • Standard Form where and • Reduction from Symmetric Definite Generalized Eigenproblem to Standard Form Intel MKL talk

  41. Performance • Reduction from …

  42. Performance • [performance plot]

  43. Performance • GPU Target Architecture • 2-socket 2.82 GHz Intel Harpertown with NVIDIA Tesla S1070 • 4 × 602 MHz Tesla C1060 GPUs • 4 GB DDR memory per GPU • Linux • CUDA • CUBLAS 3.0 • Single-precision real floating-point arithmetic

  44–46. Performance • [performance plots]

  47. Performance • Results • Cache affinity vs. FIFO queue • SuperMatrix out-of-order vs. PLASMA in-order • High variability of work stealing vs. predictable cache affinity performance • Strong scalability on CPU and GPU • Representative of the performance of other dense linear algebra operations

  48. Outline • Introduction • SuperMatrix • Scheduling • Performance • Conclusion

  49. Conclusion • Separation of Concerns • Allows us to experiment with different scheduling algorithms • Port the runtime system to multiple GPUs • Locality, Locality, Locality • Data communication is as important as load balance for scheduling matrix computations

  50. Current Work • Intel Single-chip Cloud Computer • 48 cores on a single die • Cores communicate via a message passing buffer • RCCE_send • RCCE_recv • Software managed cache coherency for off-chip shared memory • RCCE_shmalloc
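As a hedged illustration of how those RCCE calls fit together (the function names appear on the slide; the exact signatures below are an assumption based on my recollection of RCCE's documented C API and may differ):

    #include "RCCE.h"

    // Hypothetical ping between two SCC cores over the message passing
    // buffer; signatures are assumed, not verified against RCCE headers.
    int main(int argc, char** argv) {
        RCCE_init(&argc, &argv);
        int me = RCCE_ue();            // this core's id ("unit of execution")
        char buf[32] = "hello";
        if (me == 0)      RCCE_send(buf, sizeof buf, 1);  // core 0 -> core 1
        else if (me == 1) RCCE_recv(buf, sizeof buf, 0);
        // Off-chip shared memory, kept coherent in software:
        //   volatile unsigned char* shared = RCCE_shmalloc(4096);
        RCCE_finalize();
        return 0;
    }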
