
CuMAPz: A Tool to Analyze Memory Access Patterns in CUDA


Presentation Transcript


  1. CuMAPz: A Tool to Analyze Memory Access Patterns in CUDA. Yooseong Kim and Aviral Shrivastava, Compiler and Microarchitecture Laboratory, Arizona State University, DAC 2011

  2. Outline • Introduction • Preliminaries • Motivating examples • CuMAPz approach • Experimental results and Conclusions

  3. Introduction • Currently, the computational power of Graphics Processing Units (GPUs) has reached the teraFLOP scale. • NVIDIA CUDA and OpenCL make GPGPU (General-Purpose computation on GPUs) programming much easier. • Because of the large data sizes involved, overall performance is heavily affected by memory performance.

  4. Introduction (cont.) • Shared memory is as fast as registers, and is the only fast memory where both reads and writes are enabled. • Many factors affect performance: data reuse, global memory access coalescing, shared memory bank conflicts, and channel skew. • We develop CuMAPz (CUDA Memory Access Pattern analyZer) to analyze the memory performance of CUDA programs.

  5. Preliminaries • NVIDIA GPU architecture. • Comparisons between CPU and GPU. • CUDA programming. • Memory coalescing. • Execution of GPU thread

  6. Architecture of NVIDIA GTX 280 • A collection of 30 multiprocessors, with 8 streaming processors each. • The 30 multiprocessors share one off-chip global memory. • Access time: about 300 clock cycles • Each multiprocessor has an on-chip shared memory used by its 8 streaming processors. • Access time: 2 clock cycles

  7. Architecture diagram

  8. Some differences between GPU and CPU

  9. Abstract comparisons of memory between GPU and CPU (cont.)

  10. Memory coalescing • Several memory transactions can be coalesced into one transaction when consecutive threads access consecutive memory locations. • Because the access time of global memory is relatively large, achieving coalescing is important (see the sketch below).
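  A minimal sketch of the idea (hypothetical kernels, not taken from the paper): in the first kernel, consecutive threads read consecutive addresses, so a warp's 32 loads coalesce into a few transactions; in the second, a stride of 32 floats scatters the same loads across many transactions.

  // Coalesced: thread i reads in[i]; a warp's loads fall on consecutive addresses.
  __global__ void copy_coalesced(const float *in, float *out, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) out[i] = in[i];
  }

  // Uncoalesced: thread i reads in[i * 32]; each load lands in a different memory segment.
  __global__ void copy_strided(const float *in, float *out, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i * 32 < n) out[i] = in[i * 32];
  }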

  11. CUDA programming • Compute Unified Device Architecture • The CPU code does the sequential part. • The highly parallel part is usually implemented in GPU code, called a kernel. • Calling a GPU function from CPU code is called a kernel launch (a minimal example is sketched below).
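  A minimal sketch of the host/kernel split described above (hypothetical names, error checking omitted): the sequential setup runs on the CPU, the parallel work becomes a kernel, and the triple-angle-bracket syntax performs the kernel launch.

  __global__ void add_one(float *data, int n) {        // GPU code: the kernel
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) data[i] += 1.0f;
  }

  void run_on_gpu(float *d_data, int n) {              // CPU code: sequential part
      int threads = 256;
      int blocks = (n + threads - 1) / threads;
      add_one<<<blocks, threads>>>(d_data, n);          // kernel launch
      cudaDeviceSynchronize();
  }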

  12. Execution of GPU thread • Threads are grouped into thread blocks. • Each thread block is assigned to a streaming multiprocessor (SM), which contains multiple scalar processors (SPs), to be executed. • The actual execution of threads on SPs is done in groups of 32 threads, called warps. • SPs execute one warp at a time.

  13. Motivating examples • What to fetch into shared memory? • How to access shared memory? • How to access global memory?

  14. What to fetch into shared memory? • A simple program that does not use shared memory.

  15. What to fetch into shared memory? (cont.) • If we fetch row*MAX+col+1 into shared memory (a sketch of such a fetch follows below)…
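  A hedged sketch of the kind of fetch the slide describes (the actual Figure 2 code is not reproduced here; the kernel name is hypothetical, and BLKDIM and MAX are assumed compile-time constants): each thread loads the element one column to the right of its own position into a shared-memory tile before the block uses it.

  __global__ void shift_fetch(const float *in) {
      __shared__ float s_in[BLKDIM][BLKDIM];
      int row = blockIdx.y * BLKDIM + threadIdx.y;
      int col = blockIdx.x * BLKDIM + threadIdx.x;
      // Each thread fetches the element one column to its right: in[row*MAX + col + 1].
      if (row < MAX && col + 1 < MAX)
          s_in[threadIdx.y][threadIdx.x] = in[row * MAX + col + 1];
      __syncthreads();
      // ... the rest of the block's computation reads s_in instead of global memory ...
  }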

  16. What to fetch into shared memory? (cont.) • Generally, higher data reuse should imply better performance, but that may not be true here. • This counter-intuitive result is mainly caused by global memory access coalescing.

  17. How to access shared memory? • In Figure 2, data is accessed in a column-wise manner, as shown at Lines 4, 9, 11, and 16. • What if we change to a row-wise manner (i.e., s_in[tIdx.y][tIdx.x]) or skew the access pattern by padding (i.e., __shared__ float s_in[BLKDIM][BLKDIM+1])?

  18. How to access shared memory? (cont.) • Shared memory bank conflicts occur if there are multiple requests to different addresses in the same bank. In this case, the requests are serialized.
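  The padding mentioned on the previous slide can be sketched as follows (a minimal illustration, not the paper's code; BLKDIM is assumed to be a multiple of the bank count, e.g. 16): adding one unused column changes the stride between rows so that a column-wise read touches different banks.

  // Inside a kernel: without padding, a column-wise read such as s_in[threadIdx.x][c]
  // makes consecutive threads access addresses BLKDIM floats apart; if BLKDIM is a
  // multiple of the number of banks, they all hit the same bank and are serialized.
  __shared__ float s_in[BLKDIM][BLKDIM];

  // With one extra (unused) column, consecutive rows start in different banks, so the
  // same column-wise read is spread over BLKDIM different banks with no conflict.
  __shared__ float s_in_padded[BLKDIM][BLKDIM + 1];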

  19. How to access global memory? • A programmer might have designed the global memory write reference at Line 18 in Figure 2 to be in a column-wise manner, as in out[col*MAX+row]. • The resulting unexpected slowdown is caused by channel skew. Channel skew is the ratio of the number of concurrent accesses to the most used channel to that of the least used channel.

  20. Previous works • [20] modeled the amount of parallelism employed in a program and the efficiency of a single kernel execution in a thread. • It did not consider memory performance, and its analysis is only for compute-intensive benchmarks. • [8] includes the effect of parallelism to hide global memory access latency. • It does not take branch divergence into account. • [14][15][16][17][18] automate optimization of GPGPU applications. • None of the above works provides a comprehensive performance metric to estimate the efficiency of memory access patterns.

  21. CuMAPz overview

  22. Data Reuse Profit Estimation • CuMAPz maintains a counter to count the number of times shared memory buffers are accessed. The degree of data reuse is represented in a term, data reuse, as follows:
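  The formula on the slide is an image and is not reproduced here; as a hedged reconstruction consistent with the description above (a counter of accesses to shared-memory buffers), data reuse can be expressed as the average number of accesses per buffered element:

  \mathrm{DataReuse} = \frac{\sum_{b \in B} \mathrm{accesses}(b)}{\sum_{b \in B} \mathrm{size}(b)}

  where B is the set of shared memory buffers.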

  23. Coalesced Access Profit Estimation • Due to coalescing, the actual transfer size that will consume bus width can be different from the size of data requested from threads. CuMAPz calculates the bandwidth utilization as the following:
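  The slide's formula is likewise an image; a plausible reconstruction from the sentence above (requested size versus actually transferred size) is:

  \mathrm{BW_{util}} = \frac{\text{size of data requested by the threads}}{\text{size of data actually transferred over the bus}}

  so fully coalesced accesses give a utilization of 1, and uncoalesced accesses give a smaller value.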

  24. Channel Skew Cost Estimation • Channel skew refers to the case where the concurrent memory accesses are not evenly distributed to all the channels but focused on only a few channels. • When a kernel is launched, thread blocks are assigned to SMs in a sequential order so that adjacent blocks are executed on adjacent SMs. Then, it becomes unpredictable after the first round of scheduling since the order in which thread blocks finish their execution cannot be determined [13].

  25. Channel Skew Cost Estimation (cont.) • The impact of channel skew can be quantified as the skewness of the mapping of accesses to channels, which can be calculated as follows:
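  Following the definition on slide 19 (the ratio of concurrent accesses to the most used channel to those of the least used channel), the skewness can be written as:

  \mathrm{ChannelSkew} = \frac{\max_{c} A_c}{\min_{c} A_c}

  where A_c is the number of concurrent accesses mapped to channel c (a reconstruction from the prose; the slide's original formula is an image).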

  26. Bank Conflict Cost Estimation • Similarly to global memory channels, shared memory space is divided into multiple banks. Each bank can serve one address at a time. • Efficiency of shared memory access is modeled as follows:
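  The slide's formula is an image and is not reproduced; as a hedged sketch, since each bank serves one address at a time, a warp's shared memory request is serialized into as many steps as the most heavily used bank must serve, so efficiency could be modeled as:

  \mathrm{BankEfficiency} = \frac{1}{\max_{k} \left(\text{distinct addresses requested in bank } k\right)}

  which equals 1 for conflict-free accesses and decreases as conflicts grow.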

  27. Branch Divergence Cost Estimation • Branches are introduced when there is an uncovered region that is not buffered into shared memory, as shown at Lines 6 and 13 in Figure 2. • When threads in a warp take different execution paths, all paths are serialized. • We simply model the impact of branch divergence as follows:
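  The formula itself is an image on the slide; a simple hedged formulation consistent with "all paths are serialized" is the fraction of warps whose threads take different paths:

  \mathrm{BranchDivergence} = \frac{\text{number of divergent warps}}{\text{total number of warps}}

  (an assumed reconstruction, not necessarily the paper's exact expression).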

  28. Overall Memory Performance Estimation • Memory performance estimation is calculated by the following formula.
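  The combining formula is an image on the slide; as a loose sketch only (an assumption, not the paper's exact expression), the profit terms raise the estimate while the cost terms lower it, for example:

  \mathrm{MemPerf} \propto \frac{\mathrm{DataReuse} \times \mathrm{BW_{util}} \times \mathrm{BankEfficiency}}{\mathrm{ChannelSkew} \times (1 + \mathrm{BranchDivergence})}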

  29. Experimental results • Environment • Implemented in the C language. • CUDA driver version 3.2 on an NVIDIA Tesla C1060. • Benchmarks are from the benchmark suites in [6] and the CUDA SDK.

  30. Two experiments • Validation: studying the correlation between our memory performance estimation and the performance of the benchmarks for different memory access patterns. • Performance Optimization: trying to find the best way to access shared and global memory using CuMAPz and the previous technique [8].

  31. Validation

  32. Performance optimization

  33. Runtime Considerations • The time complexity of the CuMAPz analysis is O(|W|*|R|*|B|), where W, R, and B are the sets of all warps, global memory references, and shared memory buffers, respectively.

  34. Limitations • Compile-time analysis • It cannot handle any information that can only be determined at run time. • It assumes adequate occupancy • Occupancy is a measure of how many thread blocks can be scheduled on one SM so that the hardware is kept busy.

  35. Conclusions • The GPU is a new platform for high-performance computing. • We developed CuMAPz to analyze the memory performance of CUDA programs. • It considers many aspects, such as data reuse, coalescing, bank conflicts, and channel skew. • Experimental results show a very high correlation between the actual execution times and the CuMAPz estimates.
