
CIS 6930: Chip Multiprocessor: Parallel Architecture and Programming

Fall 2010, Jih-Kwon Peir, Computer & Information Science & Engineering, University of Florida. Chapter 4: CUDA Threads.





Presentation Transcript


  1. Fall 2010 • Jih-Kwon Peir • Computer & Information Science & Engineering • University of Florida • CIS 6930: Chip Multiprocessor: Parallel Architecture and Programming

  2. Chapter 4: CUDA Threads

  3. Programming Model: Square Matrix Multiplication Example • P = M * N of size WIDTH x WIDTH • Without tiling: • One thread calculates one element of P • M and N are loaded WIDTH times from global memory • Next, exploit block-level parallelism to handle large problems! [Figure: matrices M, N, and P, each WIDTH x WIDTH]

  4. Block IDs and Thread IDs • Each thread uses Block and Thread IDs to decide what data to work on • Block ID: 1D or 2D • Thread ID: 1D, 2D, or 3D • Simplifies memory addressing when processing multidimensional data • Image processing • Solving PDEs on volumes • …
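The index arithmetic behind these IDs can be sketched in plain C. This is a hypothetical host-side model, not CUDA code: the parameter names mirror the CUDA built-ins blockIdx, blockDim, and threadIdx, but here they are ordinary integers.

```c
/* Plain-C sketch of how a CUDA thread derives the 2D element it owns.
   The parameter names mirror the CUDA built-ins (blockIdx.y, blockDim.y,
   threadIdx.y, ...); in real CUDA code these are built-in variables. */
int global_row(int blockIdx_y, int blockDim_y, int threadIdx_y)
{
    /* Row covered by this thread: blocks tile the rows in blockDim_y chunks. */
    return blockIdx_y * blockDim_y + threadIdx_y;
}

int global_col(int blockIdx_x, int blockDim_x, int threadIdx_x)
{
    /* Column covered by this thread, tiled the same way along x. */
    return blockIdx_x * blockDim_x + threadIdx_x;
}
```

For example, with 2x2 blocks, the thread at threadIdx.y = 1 in the block at blockIdx.y = 1 covers row 1*2 + 1 = 3.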

  5. Building Block – Streaming Multiprocessor (SM) • Each TPC has two Streaming Multiprocessors (SMs) • Each CUDA thread block is assigned to one SM for execution • Each SM has eight Streaming Processors (SPs, or cores) • 8K registers • 16KB shared memory • Up to 8 thread blocks, 768 threads • Each CUDA thread block • Up to 512 threads, grouped into 32-thread Warps (SIMT mode) • Warps are scheduled to run on the SPs • No synchronization between thread blocks • A good overview paper: NVIDIA Tesla: A Unified Graphics and Computing Architecture, IEEE Micro

  6. Matrix Multiplication Using Multiple Blocks • Break up Pd into tiles (a 3x3 grid of tiles in the figure) • Each block calculates one tile • Each thread calculates one element • Block size equals tile size (TILE_WIDTH x TILE_WIDTH) [Figure: Md, Nd, and Pd, each WIDTH x WIDTH; block indices (bx, by) and thread indices (tx, ty) locate the Pdsub tile within Pd]

  7. A Small Example • TILE_WIDTH = 2 • The 4x4 P matrix is partitioned among four blocks:
  Block(0,0): P0,0 P1,0 / P0,1 P1,1    Block(1,0): P2,0 P3,0 / P2,1 P3,1
  Block(0,1): P0,2 P1,2 / P0,3 P1,3    Block(1,1): P2,2 P3,2 / P2,3 P3,3
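The partition above can be checked mechanically: for an element Pcol,row with TILE_WIDTH = 2, integer division gives the block index and the remainder gives the thread index within the block. A plain-C sketch (`owner_of` is a hypothetical helper name, not a CUDA API):

```c
#define TILE_WIDTH 2

typedef struct { int bx, by, tx, ty; } Owner;  /* block and thread indices */

/* Which block/thread owns element P(col,row)?
   Quotient -> block index, remainder -> thread index within the block. */
Owner owner_of(int col, int row)
{
    Owner o;
    o.bx = col / TILE_WIDTH;
    o.by = row / TILE_WIDTH;
    o.tx = col % TILE_WIDTH;
    o.ty = row % TILE_WIDTH;
    return o;
}
```

For instance, P3,1 falls in Block(1,0) as thread (1,1), matching the layout on the slide.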

  8. A Small Example: Multiplication • Total four blocks (2D), and four threads in one block [Figure: Md (4x4) x Nd (4x4) = Pd (4x4); block (0,0) computes Pd0,0, Pd1,0, Pd0,1, and Pd1,1]

  9. Revised Step 5: Kernel Invocation (Host-side Code)

  // Set up the execution configuration
  dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH);
  dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
  // Launch the device computation threads!
  MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
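This configuration assumes Width is a multiple of TILE_WIDTH; the launched thread count then matches the number of Pd elements exactly. A plain-C sketch of the arithmetic (hypothetical helper names):

```c
/* Grid sizing used by the launch configuration above.
   Assumes width % tile == 0, exactly as dimGrid(Width/TILE_WIDTH, ...) does;
   a non-divisible Width would leave elements uncovered. */
int blocks_per_grid(int width, int tile)
{
    return (width / tile) * (width / tile);   /* 2D grid of tiles */
}

int threads_total(int width, int tile)
{
    /* one thread per Pd element: blocks x (tile x tile threads each) */
    return blocks_per_grid(width, tile) * tile * tile;
}
```

With Width = 4 and TILE_WIDTH = 2 this gives 4 blocks of 4 threads, 16 threads for the 16 elements of Pd.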

  10. Revised Matrix Multiplication Kernel Using Multiple Blocks

  __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
  {
    // Calculate the row index of the Pd element and M
    int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
    // Calculate the column index of Pd and N
    int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;
    float Pvalue = 0;
    // Each thread computes one element of the block sub-matrix
    for (int k = 0; k < Width; ++k)
      Pvalue += Md[Row*Width+k] * Nd[k*Width+Col];
    Pd[Row*Width+Col] = Pvalue;
  }
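A plain-C reference of the same computation is useful for validating the kernel's index arithmetic on the host: the two outer loops stand in for the (Row, Col) thread grid, and the loop body matches the kernel body.

```c
/* Host-side reference for MatrixMulKernel: row-major Width x Width matrices,
   P[Row][Col] = sum_k M[Row][k] * N[k][Col], same indexing as the kernel. */
void matrixMulRef(const float *Md, const float *Nd, float *Pd, int Width)
{
    for (int Row = 0; Row < Width; ++Row)
        for (int Col = 0; Col < Width; ++Col) {
            float Pvalue = 0;
            for (int k = 0; k < Width; ++k)
                Pvalue += Md[Row*Width + k] * Nd[k*Width + Col];
            Pd[Row*Width + Col] = Pvalue;
        }
}
```

Comparing the kernel's output against this reference on a small matrix is a quick correctness check.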

  11. CUDA Thread Block • All threads in a block execute the same kernel program (SPMD) • Programmer declares the block: • Block size 1 to 512 concurrent threads • Block shape 1D, 2D, or 3D • Block dimensions in threads • Threads have thread id numbers within the block • Thread program uses thread id to select work and address shared data • Threads in the same block share data and synchronize while doing their share of the work • Threads in different blocks cannot cooperate • Each block can execute in any order relative to other blocks! Courtesy: John Nickolls, NVIDIA

  12. Transparent Scalability • Hardware is free to assign blocks to any processor at any time (no synchronization between blocks!) • A kernel scales across any number of parallel processors (multiprocessors) • Each block can execute in any order relative to other blocks. BUT, the scheduler always schedules blocks based on block id: the smaller, the earlier (coarse-grain global order). [Figure: the same kernel grid of Blocks 0–7 runs two blocks at a time on a 2-SM device and four at a time on a 4-SM device]

  13. G80 Example: Executing Thread Blocks • Threads are assigned to Streaming Multiprocessors at block granularity • Up to 8 blocks per SM, as other resources allow • An SM in G80 can take up to 768 threads • Could be 256 (threads/block) * 3 blocks • Or 128 (threads/block) * 6 blocks, etc. • Thread blocks run concurrently; once scheduled, they run to the end • SM maintains thread/block id #s • SM manages/schedules thread execution [Figure: SM 0 and SM 1, each with MT IU, SPs, and Shared Memory, holding blocks of threads t0 t1 t2 … tm]

  14. G80 Example: Thread Scheduling • Each block is executed as 32-thread Warps • An implementation decision, not part of the CUDA programming model • Warps are the scheduling units in an SM • If 3 blocks are assigned to an SM and each block has 256 threads, how many Warps are there in the SM? • Each block is divided into 256/32 = 8 Warps • There are 8 * 3 = 24 Warps • Max. threads: 768/32 = 24 Warps • Each SP has 1 MAD unit; each SM has two SFUs, each of which also contains 4 floating-point multipliers [Figure: Streaming Multiprocessor with Instruction L1, Instruction Fetch/Dispatch, Shared Memory, 8 SPs, and 2 SFUs; Block 1 and Block 2 warps of threads t0 t1 t2 … t31]
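The warp count works out mechanically; a one-line C sketch of the arithmetic (hypothetical helper name, slide's 32-thread warp size assumed):

```c
#define WARP_SIZE 32

/* Warps resident on an SM: each block of t threads contributes t/32 warps.
   Assumes threads_per_block is a multiple of the warp size, as on the slide. */
int warps_per_sm(int blocks, int threads_per_block)
{
    return blocks * (threads_per_block / WARP_SIZE);
}
```

Both 3 blocks of 256 threads and 6 blocks of 128 threads fill the SM to the same 24-warp limit (768/32).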

  15. G80 Block Granularity Considerations • For Matrix Multiplication using multiple blocks, should I use 8X8, 16X16 or 32X32 blocks? • For 8X8, we have 64 threads per block. Since each SM can take up to 768 threads, that would be 12 blocks. However, each SM can only take up to 8 blocks, so only 512 threads will go into each SM! • For 16X16, we have 256 threads per block. Since each SM can take up to 768 threads, it takes up to 3 blocks and achieves full capacity, unless other resource considerations overrule. • For 32X32, we have 1024 threads per block. Not even one block can fit into an SM! • Blocks do not have to be square!
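The slide's reasoning reduces to taking the minimum over the per-SM limits. A plain-C sketch with the G80 limits from the slide as constants (hypothetical helper name):

```c
/* Threads that actually become resident on a G80 SM for a given block size,
   per the slide's limits: 768 threads/SM, 8 blocks/SM, 512 threads/block.
   (Register and shared-memory limits, which can also overrule, are ignored.) */
int resident_threads(int threads_per_block)
{
    const int MAX_THREADS_SM = 768, MAX_BLOCKS_SM = 8, MAX_THREADS_BLOCK = 512;

    if (threads_per_block > MAX_THREADS_BLOCK)
        return 0;                                  /* block cannot launch at all */

    int blocks = MAX_THREADS_SM / threads_per_block;
    if (blocks > MAX_BLOCKS_SM)
        blocks = MAX_BLOCKS_SM;                    /* block-count limit kicks in */

    return blocks * threads_per_block;
}
```

This reproduces the slide's three cases: 8x8 blocks give 512 resident threads, 16x16 give the full 768, and 32x32 cannot launch.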

  16. G80 Example: Thread Scheduling (cont.) • SM implements zero-overhead warp scheduling • At any time, only one of the warps is executed by SM using all hardware resources • Warps whose next instruction has its operands ready for consumption are eligible for execution • Eligible Warps are selected for execution on a prioritized scheduling policy • All threads in a warp execute the same instruction when selected

  17. Thread Scheduling (cont.) • Each thread block is assigned to one SM; each SM can take up to 8 blocks • Each block has up to 512 threads, divided into 32-thread warps; each warp is scheduled on the 8 SPs, 4 threads per SP, and executed in SIMT mode • An SP is pipelined ~30 stages deep; fetch, decode, gather, and write-back act on whole warps, and each thread is initiated in each fast clock (i.e., ALU cycle) • Execute acts on groups of 8 threads, or quarter-warps (there are only 8 SPs/SM), so the throughput is 1 warp / 4 fast clocks, or 1 warp / 2 slow clocks (i.e., GPU cycles) • The fetch/decode/... stages have a higher throughput to feed both the SP/MAD and the SFU/MUL units alternately. Hence the peak rate of 8 MAD + 8 MUL per (fast) clock cycle • Need 6 warps (or 192 threads) per SM to hide the read-after-write latencies (later!) • Note: the ALU (fast) clock runs at twice the GPU (slow) clock
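The throughput figures above follow from simple division; a C sketch of the arithmetic (assumes the slide's 32-thread warps, 8 SPs per SM, and the 2:1 fast-to-slow clock ratio):

```c
/* A 32-thread warp issued over 8 SPs takes 4 fast (ALU) clocks;
   the fast clock runs at twice the slow (GPU) clock, so 2 slow clocks. */
int fast_clocks_per_warp(int warp_size, int sps_per_sm)
{
    return warp_size / sps_per_sm;          /* quarter-warps of 8 threads */
}

int slow_clocks_per_warp(int warp_size, int sps_per_sm)
{
    return fast_clocks_per_warp(warp_size, sps_per_sm) / 2;
}
```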

  18. Thread Scheduling (cont.) • In addition to transcendental operations and interpolation of the values of each vertex, the SFU is capable of executing a floating-point multiplication. By alternating execution of MAD and MUL instructions, the durations of the instructions overlap. In this way each GPU cycle produces the result of a MAD or a MUL on a warp – that is, 32 scalar values. • From Nvidia's description you might instead expect to get the result of a MAD and a MUL every two GPU cycles (called Dual-Issue). In practice the result is the same, but from a hardware point of view this greatly simplifies the front end, which handles launching execution of the instructions, with one instruction starting at each GPU cycle.

  19. Reworked Streaming Multiprocessors • Active threads per SM increased from 768 to 1024 (24 to 32 32-thread warps) • Registers per SM increased from 8,192 to 16,384. With the concomitant increase in the number of threads, the number of registers usable simultaneously by a thread has increased from 10 to 16 • Dual-issue mode: the SM executes two instructions every two cycles: one MAD and one floating-point MUL • One instruction takes two GPU cycles (four ALU cycles) executed on a warp (32 threads executed by 8-way SIMD units), but the front end of the SM can launch one instruction at each cycle, provided that the instructions are of different types: MAD in one case, SFU (MUL) in the other • Provides double precision

  20. Multi-issued / Multithreaded Categories • Superscalar • Fine-Grained • Coarse-Grained • Simultaneous Multithreading • Multiprocessing • Note, the Nvidia GPU execution model is Single-Instruction-Multiple-Threads (SIMT) [Figure: issue slots over time (processor cycles) for Threads 1–5, with idle slots, under each category]

  21. Some Additional API Features(See Appendix in Nvidia CUDA Programming Guide, Version 2.3)

  22. Application Programming Interface • The API is an extension to the C programming language • It consists of: • Language extensions • To target portions of the code for execution on the device • A runtime library split into: • A common component providing built-in vector types and a subset of the C runtime library in both host and device codes • A host component to control and access one or more devices from the host • A device component providing device-specific functions

  23. Language Extensions:Built-in Variables • dim3 gridDim; • Dimensions of the grid in blocks (gridDim.z unused) • dim3 blockDim; • Dimensions of the block in threads • dim3 blockIdx; • Block index within the grid • dim3 threadIdx; • Thread index within the block

  24. Common Runtime Component:Mathematical Functions • pow, sqrt, cbrt, hypot • exp, exp2, expm1 • log, log2, log10, log1p • sin, cos, tan, asin, acos, atan, atan2 • sinh, cosh, tanh, asinh, acosh, atanh • ceil, floor, trunc, round • Etc. • When executed on the host, a given function uses the C runtime implementation if available • These functions are only supported for scalar types, not vector types

  25. Device Runtime Component:Mathematical Functions • Some mathematical functions (e.g. sinf(x)) have a less accurate, but faster device-only version (e.g. __sinf(x)) • __powf • __logf, __log2f, __log10f • __expf • __sinf, __cosf, __tanf

  26. Host Runtime Component • Provides functions to deal with: • Device management (including multi-device systems) • Memory management • Error handling • Initializes the first time a runtime function is called • A host thread can invoke device code on only one device • Multiple host threads required to run on multiple devices

  27. Device Runtime Component:Synchronization Function • void __syncthreads(); • Synchronizes all threads in a block • Once all threads have reached this point, execution resumes normally • Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory • Allowed in conditional constructs only if the conditional is uniform across the entire thread block • NO synchronization among blocks • Because it would cause deadlock if not ALL blocks are scheduled simultaneously

  28. Time Function • clock_t clock(); • When executed in device code, returns the value of a per-multiprocessor counter that is incremented every clock cycle. Sampling this counter at the beginning and at the end of a kernel, taking the difference of the two samples, and recording the result per thread provides, for each thread, a measure of the number of clock cycles the device took to completely execute the thread, but not of the number of clock cycles the device actually spent executing thread instructions. The former number is greater than the latter, since threads are time-sliced.

  29. Atomic Function • atomicAdd(int* address, int val) • An atomic function performs a read-modify-write atomic operation on one 32-bit or 64-bit word (signed or unsigned integer) residing in global or shared memory. • Read M[address]; compute M[address] + val; and store the result back to the same memory location, as one atomic operation. (Note, the function returns the old M[address].) • Atomic functions can only be used in device functions, and require compute capability 1.1 or above • Compute capability 1.2 or above adds atomic shared-memory operations and 64-bit words • Applies to many other arithmetic and logical operations.
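The return-the-old-value semantics can be illustrated with a sequential C sketch. This only models the arithmetic; on the GPU the three steps are indivisible, which plain sequential C cannot express.

```c
/* Sequential model of atomicAdd's semantics: a read-modify-write that
   returns the OLD value. (Atomicity itself is not modeled here; on the
   device the three steps below happen as one indivisible operation.) */
int atomicAdd_sketch(int *address, int val)
{
    int old = *address;       /* read   */
    *address = old + val;     /* modify + write */
    return old;               /* atomicAdd returns the pre-update value */
}
```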

  30. Get System Information

  C:\Documents and Settings\All Users\Application Data\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\bin\win32\Release>deviceQuery (after make)

  CUDA Device Query (Runtime API) version (CUDART static linking)
  There is 1 device supporting CUDA
  Device 0: "GeForce 9200"
    CUDA Driver Version: 2.30
    CUDA Runtime Version: 2.30
    CUDA Capability Major revision number: 1
    CUDA Capability Minor revision number: 1
    Total amount of global memory: 266010624 bytes
    Number of multiprocessors (SM): 1
    Number of cores: 8
    Total amount of constant memory: 65536 bytes
    Total amount of shared memory per block: 16384 bytes
    Total number of registers available per block: 8192
    Warp size: 32
    Maximum number of threads per block: 512
    Maximum sizes of each dimension of a block: 512 x 512 x 64
    Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
    Maximum memory pitch: 262144 bytes
    Texture alignment: 256 bytes
    Clock rate: 1.20 GHz
    Concurrent copy and execution: No
    Run time limit on kernels: Yes
    Integrated: Yes
    Support host page-locked memory mapping: Yes
    Compute mode: Default (multiple host threads can use this device simultaneously)
