
ITCS 5/4145 Parallel computing, B. Wilkinson, April 17, 2014. CUDAMultiDimBlocks


Presentation Transcript


1. CUDA Grids, Blocks, and Threads
These notes will introduce:
• One-dimensional and multidimensional grids and blocks
• How the grid and block structures are defined in CUDA
• Predefined CUDA variables
• Adding vectors using one-dimensional structures
• Adding/multiplying arrays using two-dimensional structures
ITCS 5/4145 Parallel computing, B. Wilkinson, April 17, 2014. CUDAMultiDimBlocks.ppt

2. Grids, Blocks, and Threads
NVIDIA GPUs consist of an array of execution cores, each of which can support a large number of threads, many more than the number of cores.
• Threads are grouped into "blocks"
• Blocks can be 1, 2, or 3 dimensional
• Each kernel call uses a "grid" of blocks
• Grids can be 1, 2, or 3 dimensional (3-D grids available on recent GPUs)
• The programmer must specify the grid/block organization on each kernel call (it can be different each time), within limits set by the GPU

3. CUDA SIMT Thread Structure
Allows flexibility and efficiency in processing 1-D, 2-D, and 3-D data on the GPU, and is linked to the GPU's internal organization. Threads in one block execute together.
Grid: can be 1, 2, or 3 dimensions (3-D requires compute capability >= 2, see next slide).
Block: can be 1, 2, or 3 dimensions.
CUDA C Programming Guide, v 3.2, 2010, NVIDIA

4. Device characteristics -- some limitations
NVIDIA defines "compute capabilities" 1.0, 1.1, ... with limits and features supported.

Compute capability:                          1.0 (min)   2.x     3.0/3.5
Grid:
  Max dimensionality                         2           3       3
  Max size of each dimension (x, y, z)       65535       65535   2^31 - 1
  (no. of blocks in each dimension)                              (2,147,483,647)
Blocks:
  Max dimensionality                         3           3       3
  Max sizes of x- and y-dimensions           512         1024    1024
  Max size of z-dimension                    64          64      64
  Max number of threads per block overall    512         1024    1024

coit-grid06 and cci-grid07 have C2050s, compute capability 2.0. cci-grid08.uncc.edu has a K20, compute capability 3.5, the most recent in 2013.
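These limits can also be queried at run time. A minimal sketch using cudaGetDeviceProperties (device 0 is assumed here):

#include <stdio.h>
#include <cuda.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query properties of device 0
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Max block dimensions: %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("Max grid dimensions: %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}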

5. Defining Grid/Block Structure
Need to provide each kernel call with values for:
• Number of blocks in each dimension
• Threads per block in each dimension

myKernel<<< B, T >>>(arg1, ... );

B -- a structure that defines the number of blocks in the grid in each dimension (1-D, 2-D, or 3-D).
T -- a structure that defines the number of threads in a block in each dimension (1-D, 2-D, or 3-D).

6. 1-D grid and/or 1-D blocks
If you want a 1-D structure, you can use a simple integer for B and T in:

myKernel<<< B, T >>>(arg1, ... );

B -- an integer defines a 1-D grid of that size
T -- an integer defines a 1-D block of that size

Example:
myKernel<<< 1, 100 >>>(arg1, ... );

7. CUDA Built-in Variables for a 1-D grid and 1-D block
threadIdx.x -- "thread index" within block in "x" dimension
blockIdx.x -- "block index" within grid in "x" dimension
blockDim.x -- "block dimension" in "x" dimension (i.e. number of threads in a block in the x dimension)

The full global thread ID in the x dimension can be computed by:

x = blockIdx.x * blockDim.x + threadIdx.x;
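As a minimal illustration (the kernel and array names are invented for this sketch), each thread can compute its global ID and use it as an array index:

__global__ void writeIDs(int *out) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // global thread ID in x
    out[x] = x;                                      // each thread writes its own ID
}

// e.g. writeIDs<<<4, 8>>>(devOut);  launches 32 threads with IDs 0 to 31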

8. Example -- x direction
A 1-D grid of 4 blocks, each having 8 threads.
[Figure: four blocks (blockIdx.x = 0 to 3), each containing threads threadIdx.x = 0 to 7; gridDim = 4 x 1, blockDim = 8 x 1. The highlighted thread has threadIdx.x = 2 in block blockIdx.x = 3.]

Global thread ID = blockIdx.x * blockDim.x + threadIdx.x = 3 * 8 + 2 = thread 26 with linear global addressing.

Derived from Jason Sanders, "Introduction to CUDA C," GPU Technology Conference, Sept. 20, 2010.

9. Code example with a 1-D grid and blocks
Vector addition:

#define N 2048    // size of vectors
#define T 256     // number of threads per block

__global__ void vecAdd(int *a, int *b, int *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];    // one element of each vector per thread
}

int main(int argc, char **argv) {
    ...
    vecAdd<<<N/T, T>>>(devA, devB, devC);   // N/T blocks map the vectors across the grid;
                                            // assumes N/T is an integer
    ...
    return 0;
}

Note the __global__ CUDA function qualifier (__ is two underscores). A __global__ function must return void.

10. If N/T is not necessarily an integer:

#define N 2000    // size of vectors
#define T 256     // number of threads per block

__global__ void vecAdd(int *a, int *b, int *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)                 // allows for more threads than vector elements;
        c[i] = a[i] + b[i];    // the excess threads are simply unused
}

int main(int argc, char **argv) {
    int blocks = (N + T - 1) / T;   // efficient way of rounding up to the next integer
    ...
    vecAdd<<<blocks, T>>>(devA, devB, devC);
    ...
    return 0;
}
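For these values the rounding works out as follows: blocks = (2000 + 256 - 1) / 256 = 2255 / 256 = 8 in integer arithmetic, so 8 * 256 = 2048 threads are created and 2048 - 2000 = 48 of them fail the if test and do no work.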

11. Questions
How many threads are created?
How many threads are unused?
What is the maximum number of threads that can be created in a GPU on coit-grid06/7? On coit-grid08?

12. Higher dimensional grids/blocks
A 1-D grid and 1-D blocks are suitable for processing one-dimensional data. Higher dimensional grids and blocks are convenient for higher dimensional data: processing a 2-D array might use a two-dimensional grid and two-dimensional blocks. Higher dimensions might also be needed because of the limits on block size in each dimension. CUDA provides built-in variables and structures to define the number of blocks in the grid in each dimension and the number of threads in a block in each dimension.

13. Built-in CUDA data types and structures
CUDA Vector Types/Structures: uint3 and dim3 can be considered essentially as CUDA-defined structures of unsigned integers x, y, z:

struct uint3 { unsigned int x, y, z; };
struct dim3 { unsigned int x, y, z; };

Used to define the grid of blocks and the threads in a block, see next. Unassigned dim3 components are automatically set to 1. There are other CUDA vector types.

14. Built-in Variables for Grid/Block Sizes
dim3 gridDim -- grid dimensions x, y, z.
  Number of blocks in grid = gridDim.x * gridDim.y * gridDim.z
dim3 blockDim -- size of block dimensions x, y, and z.
  Number of threads in a block = blockDim.x * blockDim.y * blockDim.z

15. Example Initializing Values
To set values in each dimension, use for example:

dim3 grid(16, 16);     // Grid -- 16 x 16 blocks
dim3 block(32, 32);    // Block -- 32 x 32 threads
...
myKernel<<<grid, block>>>(...);

which sets, when the kernel is called:

gridDim.x = 16    gridDim.y = 16    gridDim.z = 1
blockDim.x = 32   blockDim.y = 32   blockDim.z = 1

16. CUDA Built-in Variables for Grid/Block Indices
uint3 blockIdx -- block index within grid: blockIdx.x, blockIdx.y, blockIdx.z
uint3 threadIdx -- thread index within block: threadIdx.x, threadIdx.y, threadIdx.z

2-D: the full global thread ID in the x and y dimensions can be computed by:

x = blockIdx.x * blockDim.x + threadIdx.x;
y = blockIdx.y * blockDim.y + threadIdx.y;

17. 2-D Grids and 2-D blocks
[Figure: a thread within a 2-D grid of 2-D blocks. Its global coordinates are x = blockIdx.x * blockDim.x + threadIdx.x (horizontal) and y = blockIdx.y * blockDim.y + threadIdx.y (vertical).]

18. Flattening arrays onto linear memory
Generally memory is allocated dynamically on the device (GPU), and we cannot use two-dimensional indices (e.g. a[row][column]) to access the array as we might otherwise. (Why? The kernel receives only a plain pointer and does not know the row length at compile time.) We will need to know how the array is laid out in memory and then compute the distance from the beginning of the array. C uses row-major order -- rows are stored one after the other in memory, i.e. row 0, then row 1, etc.

19. Flattening an array
[Figure: an array with N columns (column indices 0 to N-1); element a[row][column] lies at an offset of row * number of columns, plus column, from the start.]

Array element a[row][column] = a[offset], where

offset = column + row * N

and N is the number of columns in the array.

Note: another way to flatten the array is offset = row + column * N. We will come back to this later, as it has very significant consequences for performance.
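A small host-only sketch (the array contents are invented for illustration) confirming that C's row-major layout makes the two index forms equivalent:

#include <stdio.h>
#define N 4    // number of columns (and rows)

int main() {
    int a[N][N];
    int *flat = &a[0][0];          // the same memory viewed one-dimensionally
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++)
            a[row][col] = row * 10 + col;
    // Row-major order: a[row][col] lives at offset col + row * N
    printf("%d == %d\n", a[2][3], flat[3 + 2 * N]);   // prints 23 == 23
    return 0;
}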

20. Using CUDA variables

int col = blockIdx.x * blockDim.x + threadIdx.x;
int row = blockIdx.y * blockDim.y + threadIdx.y;
int index = col + row * N;
a[index] = ...

21. Example using a 2-D grid and 2-D blocks
Adding two arrays: corresponding elements of each array are added together to form the elements of a third array.

22. CUDA version using a 2-D grid and 2-D blocks
Adding two arrays:

#define N 2048    // size of arrays

__global__ void addMatrix(int *a, int *b, int *c) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int index = col + row * N;
    if (col < N && row < N)
        c[index] = a[index] + b[index];
}

int main() {
    ...
    dim3 block(16, 16);
    dim3 grid(N / block.x, N / block.y);
    addMatrix<<<grid, block>>>(devA, devB, devC);
    ...
}
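A sketch of the host-side setup this main() elides (devA, devB, devC and the host arrays a, b, c are assumptions carried over from the earlier vector-addition example):

int size = N * N * sizeof(int);
int *devA, *devB, *devC;
cudaMalloc((void**)&devA, size);         // device copies of the three arrays
cudaMalloc((void**)&devB, size);
cudaMalloc((void**)&devC, size);
cudaMemcpy(devA, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(devB, b, size, cudaMemcpyHostToDevice);

dim3 block(16, 16);                      // 256 threads per block
dim3 grid(N / block.x, N / block.y);     // assumes N divisible by 16
addMatrix<<<grid, block>>>(devA, devB, devC);

cudaMemcpy(c, devC, size, cudaMemcpyDeviceToHost);   // copy result back to host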

23. Matrix Multiplication
Matrix multiplication is an important operation in HPC and appears in many applications. C = A * B, where A, B, and C are matrices (two-dimensional arrays). A restricted case is when B has only one column -- matrix-vector multiplication, which appears in the representation of linear equations and partial differential equations.

  24. Matrix multiplication, C = A x B

25. Implementing Matrix Multiplication
Sequential code, assuming square matrices (N x N):

for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        c[i][j] = 0;
        for (k = 0; k < N; k++)
            c[i][j] = c[i][j] + a[i][k] * b[k][j];
    }

Requires n^3 multiplications and n^3 additions -- sequential time complexity of O(n^3). Very easy to parallelize.

26. CUDA Kernel for multiplying two arrays

__global__ void gpu_matrixmult(int *gpu_a, int *gpu_b, int *gpu_c, int N) {
    int k, sum = 0;
    int col = threadIdx.x + blockDim.x * blockIdx.x;
    int row = threadIdx.y + blockDim.y * blockIdx.y;
    if (col < N && row < N) {
        for (k = 0; k < N; k++)
            sum += gpu_a[row * N + k] * gpu_b[k * N + col];
        gpu_c[row * N + col] = sum;
    }
}

In this example one thread computes one element of C, so the number of threads must be equal to or greater than the number of elements.
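Because the kernel guards with if (col < N && row < N), the grid can safely be rounded up when N is not a multiple of the block dimensions. A sketch of such a launch:

dim3 block(16, 16);
dim3 grid((N + block.x - 1) / block.x,    // round up so the grid covers
          (N + block.y - 1) / block.y);   // all N x N elements
gpu_matrixmult<<<grid, block>>>(dev_a, dev_b, dev_c, N);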

27. Sequential version with flattened arrays, for comparison

void cpu_matrixmult(int *cpu_a, int *cpu_b, int *cpu_c, int N) {
    int row, col, k, sum;
    for (row = 0; row < N; row++)          // row of a
        for (col = 0; col < N; col++) {    // column of b
            sum = 0;
            for (k = 0; k < N; k++)
                sum += cpu_a[row * N + k] * cpu_b[k * N + col];
            cpu_c[row * N + col] = sum;
        }
}

28. Matrix mapped onto 2-D grids and 2-D blocks
[Figure: an array mapped onto the grid/block structure, one element per thread. The thread with global coordinates (blockIdx.x * blockDim.x + threadIdx.x, blockIdx.y * blockDim.y + threadIdx.y) handles element A[row][column].]
Basically the array is divided into "tiles" and one tile is mapped onto one block.

29. Complete Program (several slides)

// Matrix multiplication program MatrixMult.cu, Barry Wilkinson, Dec. 28, 2010.
#include <stdio.h>
#include <cuda.h>
#include <stdlib.h>

__global__ void gpu_matrixmult(int *gpu_a, int *gpu_b, int *gpu_c, int N) {
    ...
}

void cpu_matrixmult(int *cpu_a, int *cpu_b, int *cpu_c, int N) {
    ...
}

int main(int argc, char *argv[]) {
    int i, j;                               // loop counters
    int Grid_Dim_x = 1, Grid_Dim_y = 1;     // grid structure values
    int Block_Dim_x = 1, Block_Dim_y = 1;   // block structure values
    int noThreads_x, noThreads_y;   // number of threads available in device, each dimension
    int noThreads_block;            // number of threads in a block
    int N = 10;                     // size of array in each dimension
    int *a, *b, *c, *d;
    int *dev_a, *dev_b, *dev_c;
    int size;                       // number of bytes in arrays

    cudaEvent_t start, stop;        // using CUDA events to measure time,
    float elapsed_time_ms;          // which is applicable for asynchronous code also
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    /* -------------------- ENTER INPUT PARAMETERS AND ALLOCATE DATA ----------------------- */
    ...                                        // keyboard input
    dim3 Grid(Grid_Dim_x, Grid_Dim_y);         // grid structure
    dim3 Block(Block_Dim_x, Block_Dim_y);      // block structure, threads/block limited by device
    size = N * N * sizeof(int);                // number of bytes in total in arrays
    a = (int*) malloc(size);        // dynamically allocated memory for arrays on host
    b = (int*) malloc(size);
    c = (int*) malloc(size);        // results from GPU
    d = (int*) malloc(size);        // results from CPU
    ...                             // load arrays with some numbers

30.
/* ------------- COMPUTATION DONE ON GPU ---------------------------- */
    cudaMalloc((void**)&dev_a, size);       // allocate memory on device
    cudaMalloc((void**)&dev_b, size);
    cudaMalloc((void**)&dev_c, size);
    cudaMemcpy(dev_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, size, cudaMemcpyHostToDevice);

    cudaEventRecord(start, 0);              // start time here, after the memcpys
    gpu_matrixmult<<<Grid,Block>>>(dev_a, dev_b, dev_c, N);
    cudaMemcpy(c, dev_c, size, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop, 0);               // measure end time
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&elapsed_time_ms, start, stop);
    printf("Time to calculate results on GPU: %f ms.\n", elapsed_time_ms);

Where you measure time will make a big difference. See later about using CUDA events to measure time.
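Since where you measure matters, here is a variant (a sketch, reusing the same event variables) that times only the kernel, excluding the device-to-host copy:

    cudaEventRecord(start, 0);
    gpu_matrixmult<<<Grid,Block>>>(dev_a, dev_b, dev_c, N);
    cudaEventRecord(stop, 0);               // recorded after the kernel only
    cudaEventSynchronize(stop);             // wait until the kernel has finished
    cudaEventElapsedTime(&elapsed_time_ms, start, stop);
    printf("Kernel-only time: %f ms.\n", elapsed_time_ms);
    cudaMemcpy(c, dev_c, size, cudaMemcpyDeviceToHost);   // copy outside the timed region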

31.
/* ------------- COMPUTATION DONE ON HOST CPU ---------------------------- */
    cudaEventRecord(start, 0);              // use same timing*
    cpu_matrixmult(a, b, d, N);             // do calculation on host
    cudaEventRecord(stop, 0);               // measure end time
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&elapsed_time_ms, start, stop);
    printf("Time to calculate results on CPU: %f ms.\n", elapsed_time_ms);  // exec. time

/* ------------------- check device creates correct results ----------------- */
    ...
/* --------------------- repeat program ---------------------------------------- */
    ...                                     // while loop to repeat calc with different parameters
/* -------------- clean up --------------------------------------- */
    free(a); free(b); free(c); free(d);
    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}

32. Some Preliminaries -- Effects of First Launch
The program is written so that it can repeat with different parameters without stopping, to eliminate the effect of the first kernel launch. Repeating might also take advantage of caching, but this appears not to be as significant as the first-launch effect.
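An alternative sketch (not part of the original program) that factors out first-launch overhead with an untimed warm-up call:

    gpu_matrixmult<<<Grid,Block>>>(dev_a, dev_b, dev_c, N);  // untimed warm-up launch
    cudaDeviceSynchronize();                                 // wait for it to complete
    // ... then perform the timed measurements as on the previous slides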

33. Some results
32 x 32 array, 1 block of 32 x 32 threads. Speedup = 1.65 (first run). Arrays loaded with random numbers 0-9. Answer check: CPU and GPU produce the same answers.

34. Some results
32 x 32 array, 1 block of 32 x 32 threads. Speedup = 2.12 (second run).

35. Some results
32 x 32 array, 1 block of 32 x 32 threads. Speedup = 2.16 (third run). Subsequent runs vary from 2.12 to 2.18.

36. Some results
256 x 256 array, an 8 x 8 grid of blocks, each of 32 x 32 threads. Speedup = 151.86.

37. Some results
1024 x 1024 array, a 32 x 32 grid of blocks, each of 32 x 32 threads. Speedup = 860.9.

  38. Questions
