
Divergence-Aware Warp Scheduling
Timothy G. Rogers¹, Mike O'Connor², Tor M. Aamodt¹
¹The University of British Columbia, ²NVIDIA Research
MICRO 2013, Davis, CA





Presentation Transcript


  1. Divergence-Aware Warp Scheduling — Timothy G. Rogers¹, Mike O'Connor², Tor M. Aamodt¹. ¹The University of British Columbia, ²NVIDIA Research. MICRO 2013, Davis, CA

  2. GPU — 10,000s of concurrent threads, grouped into warps; the scheduler picks a warp to issue each cycle. [Diagram: threads grouped into warps (W1, W2, …) inside streaming multiprocessors, each with a warp scheduler, memory unit, and L1D, connected through a shared L2 cache to main memory.]
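
The thread-to-warp grouping and per-cycle warp selection described above can be sketched in plain C. The 32-thread warp size matches NVIDIA hardware; the round-robin `pick_warp` policy is an illustrative stand-in for a real scheduler, not the paper's mechanism:

```c
#include <assert.h>

#define WARP_SIZE 32  /* 32 threads per warp, as on NVIDIA GPUs */

/* Map a thread id to the warp that contains it and its lane within the warp. */
static int warp_of(int tid) { return tid / WARP_SIZE; }
static int lane_of(int tid) { return tid % WARP_SIZE; }

/* One scheduler decision: pick the next ready warp in loose round-robin
 * order. `ready` flags which warps can issue this cycle; returns -1 if none. */
int pick_warp(const int *ready, int num_warps, int last_issued)
{
    for (int i = 1; i <= num_warps; ++i) {
        int w = (last_issued + i) % num_warps;
        if (ready[w])
            return w;
    }
    return -1;
}
```

For example, with 1024 threads the scheduler sees 32 warps, and thread 33 is lane 1 of warp 1.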

  3. Two Types of Divergence
  Branch divergence — if(…) { … }: threads in a warp take different paths (active mask e.g. 1 0 0 1), which affects functional unit utilization.
  Memory divergence — a warp's threads access different cache lines, which can waste memory bandwidth.
  This work is aware of both branch divergence and memory divergence; the focus is improving the performance of programs with branch AND memory divergence.
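
The two divergence types can be made concrete with a small C sketch; the 128-byte cache line and the 4-lane warp are illustrative assumptions. `active_mask` captures branch divergence from a per-lane predicate, and `lines_touched` measures memory divergence as the number of distinct cache lines a warp's active lanes touch:

```c
#include <assert.h>
#include <stdint.h>

#define LINE_BYTES 128  /* assumed L1D cache line size */

/* Branch divergence: build the active mask of a warp from a per-lane
 * predicate, e.g. the outcome of `if (...)`. */
uint32_t active_mask(const int *pred, int lanes)
{
    uint32_t mask = 0;
    for (int l = 0; l < lanes; ++l)
        if (pred[l])
            mask |= 1u << l;
    return mask;
}

/* Memory divergence: count the distinct cache lines the active lanes of a
 * warp touch. 1 line = fully coalesced; more lines = divergent access. */
int lines_touched(const uint64_t *addr, uint32_t mask, int lanes)
{
    uint64_t seen[32];
    int n = 0;
    for (int l = 0; l < lanes; ++l) {
        if (!(mask & (1u << l)))
            continue;
        uint64_t line = addr[l] / LINE_BYTES;
        int dup = 0;
        for (int i = 0; i < n; ++i)
            if (seen[i] == line)
                dup = 1;
        if (!dup)
            seen[n++] = line;
    }
    return n;
}
```

Four lanes reading consecutive 4-byte words touch one line; four lanes reading addresses 128 bytes apart touch four lines, quadrupling the bandwidth used.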

  4. Motivation — Transfer locality management from software to hardware. Software solutions complicate programming, are not always performance portable, are not guaranteed to improve performance, and are sometimes impossible. Goal: improve the performance of programs with memory divergence — parallel irregular applications, which are economically important (server computing, big data).

  5. Programmability Case Study: Sparse Vector-Matrix Multiply — two versions from SHOC. Divergent version: each thread has locality; suffers divergence. GPU-optimized version: explicit scratchpad use and a parallel reduction — an added complication, and dependent on the warp size.

  6. Previous Work — scheduling used to capture intra-thread locality (CCWS, MICRO 2012). [Diagram: the warp scheduler issues Go/Stop decisions for warps W1…WN into the memory unit; lost locality detected at the L1D, tagged with the active mask, feeds back to the scheduler.]
  Previous work (CCWS): reactive — detects interference, then throttles; unaware of branch divergence — all warps treated equally. Case study on divergent code: 50% slowdown, outperformed by profiled static throttling.
  Divergence-Aware Warp Scheduling: proactive — predicts and acts in advance; branch divergence aware — adapts to branch divergence. Case study on divergent code: <4% slowdown, outperforming the static solution.
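
A loose C sketch of the reactive idea for contrast: the real CCWS mechanism (victim-tag-based lost-locality scoring) differs in detail, and `SCORE_CAP`, the decay, and the prefix-sum Go/Stop rule here are illustrative assumptions:

```c
#include <assert.h>

#define SCORE_CAP 100  /* illustrative throttling budget */

/* Each time the L1D detects lost intra-warp locality, the affected warp's
 * score rises; scores decay over time. */
void lost_locality_event(int *score, int w, int bump) { score[w] += bump; }

void decay(int *score, int num_warps)
{
    for (int w = 0; w < num_warps; ++w)
        if (score[w] > 0)
            score[w]--;
}

/* Go/Stop decision: warp w may issue only while the running score total up
 * to and including w stays under the cap. Note this is purely reactive and
 * treats every warp's score the same regardless of its active mask -- the
 * branch-divergence blindness that DAWS addresses. */
int may_issue(const int *score, int num_warps, int w)
{
    int total = 0;
    for (int i = 0; i < num_warps; ++i) {
        total += score[i];
        if (i == w)
            return total <= SCORE_CAP;
    }
    return 0;
}
```

With equal scores of 40, the first two warps issue and the third is throttled until decay frees budget.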

  7. Divergence-Aware Warp Scheduling
  How to be proactive: • Identify where locality exists. • Limit the number of warps executing in high-locality regions.
  Adapt to branch divergence: • Create a cache footprint prediction in high-locality regions. • Account for the number of active lanes to create a per-warp footprint prediction. • Change the prediction as branch divergence occurs.

  8. Where is the locality? Examine every load instruction in the program. [Chart: hits/misses PKI for the 13 static load instructions in the GC workload, with the loads inside loops highlighted.] Locality is concentrated in loops.

  9. Locality in Loops — Limit Study: how much data should we keep around? Cache hits in loops are mostly on data accessed in the immediately previous loop trip. [Chart: fraction of cache hits in loops — line accessed last iteration vs. line accessed this iteration vs. other.]

  10. DAWS Objectives Predict the amount of data accessed by each warp in a loop iteration. Schedule warps in loops so that aggregate predicted footprint does not exceed L1D.
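
These two objectives can be sketched as a simple admission check; the footprint values and L1D size in the test below are illustrative parameters, not the paper's numbers:

```c
#include <assert.h>

/* Sketch of the scheduling rule stated above: warps are admitted into a
 * high-locality loop in order, only while the sum of their predicted cache
 * footprints (in cache lines) still fits in the L1D. Returns how many warps
 * may run; the rest wait. */
int admit_warps(const int *footprint, int num_warps, int l1d_lines)
{
    int used = 0, admitted = 0;
    for (int w = 0; w < num_warps; ++w) {
        if (used + footprint[w] > l1d_lines)
            break;              /* this warp (and later ones) must wait */
        used += footprint[w];
        admitted++;
    }
    return admitted;
}
```

With footprints {4, 3, 4, 4} and an 8-line cache, only the first two warps run; a 16-line cache admits all four.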

  11. Observations that enable prediction — Memory divergence in static load instructions is predictable: the same load tends to diverge each time it executes. The data touched by a divergent load depends on the active mask: with mask 1 1 1 1 the load makes 4 accesses, with mask 1 0 0 1 only 2. Both observations are used to create the cache footprint prediction.

  12. Online characterization to create the cache footprint prediction — Detect loops with locality (some loops have locality, some don't; limit multithreading only where it exists). Classify each load in the loop as diverged or not diverged. Compute the footprint from the active mask: e.g., for a loop with locality containing load 1 (diverged) and load 2 (not diverged), warp 0 with active mask 1 1 1 1 is predicted to make 4 accesses for load 1 plus 1 access for load 2 — a footprint of 5 cache lines.
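
The footprint computation in the slide's example can be sketched directly; the function names and the 4-lane warp are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Count the active lanes of a 4-lane warp. */
int popcount4(uint32_t mask)
{
    int n = 0;
    for (int l = 0; l < 4; ++l)
        n += (mask >> l) & 1u;
    return n;
}

/* Per-warp footprint prediction for a loop with locality: a diverged load
 * is predicted to touch one cache line per active lane, a non-diverged
 * (coalesced) load one line total. */
int warp_footprint(const int *load_diverged, int num_loads, uint32_t mask)
{
    int lines = 0;
    for (int i = 0; i < num_loads; ++i)
        lines += load_diverged[i] ? popcount4(mask) : 1;
    return lines;
}
```

The slide's case — one diverged load, one non-diverged load, mask 1 1 1 1 — predicts 4 + 1 = 5 cache lines, and the prediction shrinks as branch divergence reduces the active mask.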

  13. DAWS Operation Example — Compressed Sparse Row Kernel

    int C[] = {0, 64, 96, 128, 160, 160, 192, 224, 256};
    void sum_row_csr(float* A, …) {
      float sum = 0;
      int i = C[tid];
      while (i < C[tid+1]) {
        sum += A[i];
        ++i;
      }
      …
    }

  The loop contains a divergent branch and a memory-divergent load. Time 0: warp 0 enters its 1st iteration; no locality has been detected yet, so there is no footprint — early warps profile the loop for later warps, and the goal is to capture the spatial locality in A. Time 1: warp 0 reaches its 33rd iteration with locality detected and a diverged load detected; with all 4 threads active (mask 1 1 1 1) its footprint is 4×1. Warp 1 starts its 1st iteration with active mask 0 1 1 1 and footprint 3×1; the combined footprint exceeds the cache, so warp 1 is stopped. Time 2: warp 0 develops branch divergence (active mask 1 0 0 0), its footprint decreases, and warp 1 goes — both warps then capture the spatial locality together. [Cache diagram: hits on A[0], A[64], A[96], A[128]; hit ×30 streaks on A[32], A[160], A[192], A[224].]
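
The kernel above can be exercised on the host with one call per "thread": this sketch replaces the implicit `tid` with an explicit argument and uses small hypothetical data (a 3-row matrix with A filled with ones), not the slide's values:

```c
#include <assert.h>

/* Hypothetical CSR data: 3 rows, 8 nonzeros, all values 1.0f.
 * Row 1 is empty (C[1] == C[2]), mirroring the empty row in the slide's
 * row-pointer array (C[4] == C[5] == 160). */
float A[8] = {1, 1, 1, 1, 1, 1, 1, 1};
int   C[4] = {0, 3, 3, 8};

/* Host-side version of the kernel: sum the nonzeros of one row. */
float sum_row_csr(int tid)
{
    float sum = 0.0f;
    for (int i = C[tid]; i < C[tid + 1]; ++i)
        sum += A[i];
    return sum;
}
```

Rows of different lengths are exactly what makes the GPU loop trip counts diverge between lanes: here row 0 runs 3 iterations, row 1 none, and row 2 five.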

  14. Methodology — GPGPU-Sim (version 3.1.0): 30 streaming multiprocessors; 32 warp contexts (1024 threads total); 32 kB L1D per streaming multiprocessor; 1 MB unified L2 cache. Compared schedulers: Cache-Conscious Wavefront Scheduling (CCWS); profile-based Best-SWL; Divergence-Aware Warp Scheduling (DAWS). More schedulers in the paper.

  15. Sparse MM Case Study Results — execution time, normalized to the optimized version. [Chart: divergent-code execution time under each scheduler.] The divergent code comes within 4% of the optimized version with no programmer input.

  16. Sparse MM Case Study Results — properties, normalized to the optimized version. [Chart: divergent-code off-chip accesses and instructions issued; one bar reaches 13.3.] The divergent code makes <20% more off-chip accesses and issues 2.8× fewer instructions, so the divergent version now has potential energy advantages.

  17. Cache-Sensitive Applications Breadth First Search (BFS) Memcached-GPU (MEMC) Sparse Matrix-Vector Multiply (SPMV-Scalar) Garbage Collector (GC) K-Means Clustering (KMN) Cache-Insensitive Applications in paper

  18. Results — normalized speedup on GC, BFS, MEMC, SPMV-Scalar, and KMN, plus their harmonic mean (HMean). Overall 26% improvement over CCWS; DAWS also outperforms Best-SWL on the highly branch-divergent workloads.

  19. Summary — Questions?
  Problem: divergent loads in GPU programs; software solutions complicate programming.
  Solution: DAWS captures the opportunity by accounting for divergence.
  Result: overall 26% performance improvement over CCWS; in the case study, divergent code performs within 4% of code optimized to minimize divergence.
