
Exploiting Iterative Methods on Many-Core GPUs

This study explores the use of iterative methods on many-core GPUs, focusing on weak execution ordering and on optimizing inter-block communication for better performance. The CUDA programming model is reviewed, applications with iterative PDE solvers are discussed, and optimizations and performance results are presented. The study concludes with suggestions for reducing host-synchronization overhead, improving inter-block communication, and optimizing block scheduling.



Presentation Transcript


  1. Weak Execution Ordering - Exploiting Iterative Methods on Many-Core GPUs. Jianmin Chen, Zhuo Huang, Feiqi Su, Jih-Kwon Peir and Jeff Ho, University of Florida; Lu Peng, Louisiana State University

  2. Outline: CUDA review and inter-block communication/synchronization; host synchronization overhead; applications with iterative PDE solvers; optimizations on inter-block communication; performance results; conclusion.

  3. CUDA Programming Model: The host invokes kernels (grids) that execute on the GPU; each kernel/grid consists of blocks, and each block consists of threads. [Diagram: application timeline alternating between host execution and the launches of kernel 0 and kernel 1, each made up of Block 0 through Block N.]
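
The kernel/block/thread hierarchy above maps directly onto host code. A minimal sketch, not taken from the slides (the kernel name, data, and launch sizes are illustrative assumptions):

    #include <cuda_runtime.h>

    // Each thread handles one element; blockIdx/threadIdx locate it in the grid.
    __global__ void scaleKernel(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* d_data;
        cudaMalloc(&d_data, n * sizeof(float));

        // The host launches a kernel (grid) of blocks, each with 256 threads.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        scaleKernel<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);   // kernel 0

        cudaDeviceSynchronize();   // host execution resumes after the kernel
        scaleKernel<<<blocks, threadsPerBlock>>>(d_data, 0.5f, n);   // kernel 1
        cudaDeviceSynchronize();

        cudaFree(d_data);
        return 0;
    }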

  4. CUDA GPU Architecture: Blocks are assigned to Streaming Multiprocessors (SMs), each composed of 8 Stream Processors (SPs) and a shared (local) memory. Block synchronization must go through the host; there is no synchronization among blocks. The number of resident blocks is limited by resources, and waiting blocks are held by the scheduler. Blocks can communicate through global memory (GM), but shared-memory data is lost when control returns to the host. [Diagram: GPU with SM 0 through SM 29, each containing SPs and shared memory, connected by an interconnect network to global memory; the scheduler queues the waiting blocks.]

  5. Example: Breadth-First Search (BFS): Given a graph G(V,E) and a source node S, compute the number of steps needed to reach every other node. Each thread computes one node. Initially all nodes are inactive except the source; when a node is activated, it is visited and its unvisited neighbors are activated, so nodes visited in the nth iteration are n-1 steps from the source. Iteration continues until no node is active, and synchronization is needed after each iteration. [Diagram: example graph with nodes S, A, B, C, D, E shown as inactive, active, or visited across three iterations.]
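
A sketch of this level-synchronous scheme, assuming a CSR adjacency layout (rowPtr, colIdx) and a level array initialized to -1 everywhere except level[source] = 0; the names and layout are assumptions for illustration, not the authors' code. One kernel launch per iteration is exactly the host synchronization the following slides measure:

    #include <cuda_runtime.h>

    // One thread per node. Nodes on the current frontier (level == curLevel)
    // activate their unvisited neighbors; the host relaunches the kernel once
    // per BFS level, acting as the global (inter-block) synchronization point.
    __global__ void bfsStep(const int* rowPtr, const int* colIdx,
                            int* level, int* done, int curLevel, int numNodes) {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= numNodes || level[v] != curLevel) return;
        for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
            int u = colIdx[e];
            if (level[u] < 0) {            // unvisited neighbor
                level[u] = curLevel + 1;   // benign race: all writers store the same value
                *done = 0;                 // frontier is not empty yet
            }
        }
    }

    // Host-side driver: one kernel launch plus a flag copy per iteration.
    void bfs(const int* d_rowPtr, const int* d_colIdx, int* d_level, int numNodes) {
        int* d_done;
        cudaMalloc(&d_done, sizeof(int));
        int threads = 256, blocks = (numNodes + threads - 1) / threads;
        for (int lvl = 0, done = 0; !done; ++lvl) {
            done = 1;
            cudaMemcpy(d_done, &done, sizeof(int), cudaMemcpyHostToDevice);
            bfsStep<<<blocks, threads>>>(d_rowPtr, d_colIdx, d_level, d_done,
                                         lvl, numNodes);
            // Copying the termination flag back is part of the host-sync overhead.
            cudaMemcpy(&done, d_done, sizeof(int), cudaMemcpyDeviceToHost);
        }
        cudaFree(d_done);
    }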

  6. No-Host vs. Host Synchronization: Limit the number of nodes so the graph fits in one block, avoiding host synchronization; host synchronization can then be replaced by __syncthreads(). This avoids the overhead of repeated kernel launches, lets data stay in shared memory to reduce global-memory accesses for save/restore, and reduces the transfer of intermediate partial data and termination flags to the host during synchronization.
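
A sketch of the no-host variant under the same CSR assumptions as above, launched as a single block (for example bfsOneBlock<<<1, numNodes>>> with numNodes no larger than the block-size limit); __syncthreads() replaces every host synchronization, and the termination flag lives in shared memory instead of being copied to the host. Staging the graph data itself in shared memory, as the slide suggests, is omitted here for brevity:

    // Single-block BFS: the host loop collapses into one kernel launch.
    __global__ void bfsOneBlock(const int* rowPtr, const int* colIdx,
                                int* level, int numNodes) {
        __shared__ int frontierNotEmpty;    // replaces the flag sent to the host
        int v = threadIdx.x;                // one thread per node, one block
        for (int curLevel = 0; ; ++curLevel) {
            if (threadIdx.x == 0) frontierNotEmpty = 0;
            __syncthreads();
            if (v < numNodes && level[v] == curLevel) {
                for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
                    int u = colIdx[e];
                    if (level[u] < 0) {
                        level[u] = curLevel + 1;
                        frontierNotEmpty = 1;
                    }
                }
            }
            __syncthreads();                // barrier instead of a kernel relaunch
            if (!frontierNotEmpty) break;   // uniform decision across the block
        }
    }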

  7. No-Host vs. Host Result: Graph generated by GTgraph with 3K nodes; the no-host version uses __syncthreads() in each iteration. Host synchronization accounts for 67% of the execution time.

  8. Applications with Iterative PDE Solvers: Partial differential equation (PDE) solvers are widely used. Weak execution ordering applies to chaotic PDE solvers that use iterative methods, where strict accuracy of the solver is NOT critical. Examples: Poisson image editing and 3D shape from shading.

  9. Basic 3D-Shape in CUDA: New(x,y) = f(Old(x-1,y), Old(x,y-1), Old(x,y+1), Old(x+1,y)). Each block computes a sub-grid, and nodes from neighboring blocks are needed to compute boundary nodes. Host synchronization: go back to the host after each iteration, even though no exact ordering is needed. [Diagram: grid in global memory partitioned into sub-grids for Block 0 through Block 5, with the shared memory of Block 2 and Block 5 shown.]
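
A sketch of the per-iteration structure this slide implies, with a plain four-neighbor average standing in for the actual update function f and with illustrative block/grid sizes; the host relaunches the kernel and swaps buffers every iteration, which is the fine-grain host synchronization:

    #include <cuda_runtime.h>

    // One Jacobi-style sweep: each thread updates one interior node from its
    // four neighbors in the old grid (boundary handling is simplified).
    __global__ void sweep(const float* oldGrid, float* newGrid, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;
        newGrid[y * width + x] = 0.25f * (oldGrid[y * width + (x - 1)] +
                                          oldGrid[(y - 1) * width + x] +
                                          oldGrid[(y + 1) * width + x] +
                                          oldGrid[y * width + (x + 1)]);
    }

    // Host loop: one kernel launch per iteration, i.e. fine-grain host sync.
    void solve(float* d_a, float* d_b, int width, int height, int iters) {
        dim3 block(16, 16);
        dim3 grid((width + 15) / 16, (height + 15) / 16);
        for (int it = 0; it < iters; ++it) {
            sweep<<<grid, block>>>(d_a, d_b, width, height);
            cudaDeviceSynchronize();                  // return to the host every iteration
            float* tmp = d_a; d_a = d_b; d_b = tmp;   // swap old/new grids
        }
    }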

  10. Coarse Synchronization: Host synchronization happens only every n iterations; in between, blocks communicate through global memory with their neighboring blocks to exchange updated boundary nodes. [Diagram: shared-memory sub-grids of Block 2 and Block 5 exchanging boundary nodes through the grid in global memory.]
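
A sketch of coarse synchronization under assumed sizes (TILE_ROWS rows per block, one thread per column, grid height a multiple of TILE_ROWS) and the same placeholder stencil as above, launched for example as coarseSweep<<<height / TILE_ROWS, WIDTH>>>(d_grid, height, n). The boundary exchange through global memory is deliberately unordered; a block may read a neighbor's values from the current inner iteration or an earlier one, which is the weak execution ordering the chaotic solver tolerates. This is an illustration, not the authors' kernel:

    #define TILE_ROWS 16     // rows owned by each block (assumed)
    #define WIDTH     256    // grid width = threads per block (assumed)

    __global__ void coarseSweep(float* grid, int height, int innerIters) {
        __shared__ float tile[TILE_ROWS + 2][WIDTH];   // owned rows plus two halo rows
        int x = threadIdx.x;                           // one thread per column
        int rowBase = blockIdx.x * TILE_ROWS;          // first row owned by this block

        // Load owned rows and halo rows from global memory (edges padded with 0).
        for (int r = 0; r < TILE_ROWS + 2; ++r) {
            int gy = rowBase + r - 1;
            tile[r][x] = (gy >= 0 && gy < height) ? grid[gy * WIDTH + x] : 0.0f;
        }
        __syncthreads();

        for (int it = 0; it < innerIters; ++it) {
            // Local sweep over the owned rows using possibly stale halo values.
            for (int r = 1; r <= TILE_ROWS; ++r) {
                float up = tile[r - 1][x], down = tile[r + 1][x];
                float left  = (x > 0) ? tile[r][x - 1] : tile[r][x];
                float right = (x < WIDTH - 1) ? tile[r][x + 1] : tile[r][x];
                __syncthreads();                       // all reads of row r finish before its writes
                tile[r][x] = 0.25f * (up + down + left + right);
            }
            __syncthreads();

            // Publish this block's boundary rows and refresh halos from neighbors
            // through global memory; there is no ordering guarantee on these reads.
            grid[rowBase * WIDTH + x] = tile[1][x];
            grid[(rowBase + TILE_ROWS - 1) * WIDTH + x] = tile[TILE_ROWS][x];
            if (rowBase > 0)
                tile[0][x] = grid[(rowBase - 1) * WIDTH + x];
            if (rowBase + TILE_ROWS < height)
                tile[TILE_ROWS + 1][x] = grid[(rowBase + TILE_ROWS) * WIDTH + x];
            __syncthreads();
        }

        // Write the whole band back before returning to the host.
        for (int r = 1; r <= TILE_ROWS; ++r)
            grid[(rowBase + r - 1) * WIDTH + x] = tile[r][x];
    }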

  11. Coarse vs. Fine Host Synchronization: Coarse synchronization has less synchronization overhead but needs more iterations to converge, because boundary updates through inter-block communication are imprecise. To reduce the inter-block communication overhead: overlap communication with computation, choose the neighbor-communication pattern (upper/lower only vs. all four neighbors), and choose the block-scheduling strategy (square vs. stripe).

  12. Overlap Communication with Computation: Use separate communication threads that overlap with the computation; no precise ordering is needed.
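
One way to realize separate communication threads, sketched under the same assumptions as the coarse-synchronization example: one extra warp per block publishes and refreshes boundary rows through global memory while the remaining threads sweep the tile. The unsynchronized halo and tile updates are intentional; because the solver is chaotic, a value read may come from the current sweep or the previous one, and no precise order is required. Launch configuration, names, and sizes are assumptions (for example overlappedSweep<<<height / TILE_ROWS, WIDTH + COMM_THREADS>>>(d_grid, height, n)):

    #define TILE_ROWS    16    // rows owned by each block (assumed)
    #define WIDTH        256   // grid width = number of compute threads (assumed)
    #define COMM_THREADS 32    // one extra warp dedicated to communication

    __global__ void overlappedSweep(float* grid, int height, int innerIters) {
        __shared__ float tile[TILE_ROWS + 2][WIDTH];
        int tid = threadIdx.x;
        bool isComm = (tid >= WIDTH);                  // last warp: communication only
        int rowBase = blockIdx.x * TILE_ROWS;

        // Compute threads load owned rows plus halo rows (edges padded with 0).
        if (!isComm)
            for (int r = 0; r < TILE_ROWS + 2; ++r) {
                int gy = rowBase + r - 1;
                tile[r][tid] = (gy >= 0 && gy < height) ? grid[gy * WIDTH + tid] : 0.0f;
            }
        __syncthreads();

        for (int it = 0; it < innerIters; ++it) {
            if (isComm) {
                // Communication warp: publish this block's boundary rows and pull the
                // neighbors' boundary rows while the other threads keep computing.
                for (int x = tid - WIDTH; x < WIDTH; x += COMM_THREADS) {
                    grid[rowBase * WIDTH + x] = tile[1][x];
                    grid[(rowBase + TILE_ROWS - 1) * WIDTH + x] = tile[TILE_ROWS][x];
                    if (rowBase > 0)
                        tile[0][x] = grid[(rowBase - 1) * WIDTH + x];
                    if (rowBase + TILE_ROWS < height)
                        tile[TILE_ROWS + 1][x] = grid[(rowBase + TILE_ROWS) * WIDTH + x];
                }
            } else {
                // Compute threads: one in-place (chaotic) sweep over the owned rows;
                // a neighbor value may be old or freshly updated, which is tolerated.
                for (int r = 1; r <= TILE_ROWS; ++r)
                    tile[r][tid] = 0.25f * (tile[r - 1][tid] + tile[r + 1][tid] +
                                            tile[r][tid > 0 ? tid - 1 : tid] +
                                            tile[r][tid < WIDTH - 1 ? tid + 1 : tid]);
            }
            __syncthreads();               // keep the two roles loosely in step
        }

        // Compute threads write the band back before returning to the host.
        if (!isComm)
            for (int r = 1; r <= TILE_ROWS; ++r)
                grid[(rowBase + r - 1) * WIDTH + tid] = tile[r][tid];
    }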

  13. Overlap Communication with Computation: Effect of communication frequency, where Execution Time = Time per Iteration × Number of Iterations.

  14. Neighbor Communication: Communicating with only the upper and lower neighbors moves less data through global memory and the moves are coalesced, but the incomplete exchange slows convergence. Communicating with all four neighbors requires more, uncoalesced data moves, but may converge faster.
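
A small illustration of why the upper/lower exchange coalesces while the extra left/right (column) exchange does not, assuming a row-major grid; the kernel and buffer names are hypothetical:

    // A boundary *row* is contiguous in a row-major grid, so consecutive threads
    // touch consecutive addresses and the accesses coalesce.
    __global__ void copyBoundaryRow(const float* grid, float* buf, int rowIdx, int width) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        if (x < width)
            buf[x] = grid[rowIdx * width + x];          // stride 1: coalesced
    }

    // A boundary *column* is strided by the row width, so moving the same amount
    // of data takes many more memory transactions.
    __global__ void copyBoundaryColumn(const float* grid, float* buf,
                                       int colIdx, int width, int height) {
        int y = blockIdx.x * blockDim.x + threadIdx.x;
        if (y < height)
            buf[y] = grid[y * width + colIdx];          // stride = width: uncoalesced
    }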

  15. Block Scheduling: Blocks are scheduled in groups because of limited resources, and inactive blocks provide no updated data, so the goal is to minimize the boundary nodes of the whole active group. Two strategies are compared: stripe scheduling and square scheduling.

  16. Base: 95.35 s

  17. Conclusion: Inter-block synchronization is not supported on the GPU, which has a significant impact on asynchronous PDE solvers. Coarse synchronization and the accompanying optimizations improve overall performance: separate communication threads to overlap communication with computation, block scheduling, and tuned inter-block communication. The result is a speedup of 4-5x compared with fine-granularity host synchronization.

  18. Thank You!! Questions?
