
INF5063 – GPU & CUDA



Presentation Transcript


  1. INF5063 – GPU & CUDA Håkon Kvale Stensland Department of Informatics / Simula Research Laboratory

  2. GPU – Graphics Processing Units

  3. Basic 3D Graphics Pipeline • (diagram: Application and Scene Management run on the host; Geometry, Rasterization, Pixel Processing and ROP/FBI/Display run on the GPU, which is backed by Frame Buffer Memory)

  4. PC Graphics Timeline • Challenges: • Render infinitely complex scenes • And extremely high resolution • In 1/60th of one second (60 frames per second) • Graphics hardware has evolved from a simple hardwired pipeline to a highly programmable parallel processor • Timeline, 1998 to 2006: Riva 128 (DirectX 5); Riva TNT (DirectX 6, multitexturing); GeForce 256 (DirectX 7, T&L, TextureStageState); GeForce 3 (DirectX 8, SM 1.x); GeForceFX and Cg (DirectX 9, SM 2.0); GeForce 6 (DirectX 9.0c, SM 3.0); GeForce 7 (DirectX 9.0c, SM 3.0); GeForce 8 (DirectX 10, SM 4.0)

  5. Graphics in the PC Architecture • DMI (Direct Media Interface) between processor and chipset • Memory controller now integrated in the CPU • The old “Northbridge” integrated onto the CPU • Intel calls this part of the CPU the “System Agent” • PCI Express 3.0 x16 bandwidth at 32 GB/s (16 GB/s in each direction) • “Southbridge” (X99) handles all other peripherals • All mainstream CPUs now come with an integrated GPU • Same capabilities as discrete GPUs • Less performance (limited by die space and power) • (figure: Intel Haswell)

  6. GPUs not always for Graphics • GPUs are now common in HPC • The second largest supercomputer in November 2013 was the Titan at Oak Ridge National Laboratory • 18688 16-core Opteron processors • 18688 Nvidia Kepler GPUs • Theoretical: 27 petaflops • Before: dedicated compute card released after the graphics model • Now: Nvidia’s high-end Kepler GPU (GK110) released first as a compute product • (pictured: GeForce GTX Titan Black and Tesla K40)

  7. High-end Compute Hardware • nVIDIA Kepler Architecture • The latest generation GPU, codenamed GK110 • 7.1 billion transistors • 2688 Processing cores (SP) • IEEE 754-2008 capable • Shared coherent L2 cache • Full C++ support • Up to 32 concurrent kernels • 6 GB memory • Supports GPU virtualization • (pictured: Tesla K40)

  8. High-end Graphics Hardware • nVIDIA Maxwell Architecture • The latest generation GPU, codenamed GM204 • 5.2 billion transistors • 2048 Processing cores (SP) • IEEE 754-2008 capable • Shared coherent L2 cache • Full C++ support • Up to 32 concurrent kernels • 4 GB memory • Supports GPU virtualization • (pictured: GeForce GTX 980)

  9. GeForce GM204 Architecture

  10. Lab Hardware • nVIDIA GeForce GTX 750 • All machines have the same GPU • Maxwell Architecture • Based on the GM107 chip • 1870 million transistors • 512 Processing cores (SP) at 1022 MHz (4 SMM) • 1024 MB Memory with 80 GB/sec bandwidth • 1044 GFLOPS theoretical performance • Compute version 5.0
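  To see what the lab machines report, a minimal device-query sketch such as the following can be used. It is not part of the course material; it only assumes the standard CUDA runtime API, and the printed fields match the GTX 750 specs above (compute capability 5.0, 4 SMM, 1024 MB memory).

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);                    /* properties of device 0 */
    printf("Name: %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors (SMM): %d\n", prop.multiProcessorCount);
    printf("Global memory: %zu MB\n", prop.totalGlobalMem >> 20);
    return 0;
}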

  11. CPU and GPU Design Philosophy • CPU: Latency Oriented Cores • GPU: Throughput Oriented Cores • (diagram: the CPU chip dedicates each core to a large local cache, registers, complex control logic, and a SIMD unit; the GPU chip packs many compute units, each with a small cache/local memory, registers, simple threading control, and wide SIMD units)

  12. CPUs: Latency Oriented Design • Large caches • Convert long latency memory accesses to short latency cache accesses • Sophisticated control • Branch prediction for reduced branch latency • Data forwarding for reduced data latency • Powerful ALU • Reduced operation latency • (diagram: CPU chip with large control logic, a few powerful ALUs, a large cache, and DRAM)

  13. GPUs: Throughput Oriented Design • Small caches • To boost memory throughput • Simple control • No branch prediction • No data forwarding • Energy efficient ALUs • Many, long latency but heavily pipelined for high throughput • Require massive number of threads to tolerate latencies • (diagram: GPU chip with many simple ALUs, small caches, and DRAM)

  14. Think both about CPU and GPU… • CPUs for sequential parts where latency matters • CPUs can be 10+X faster than GPUs for sequential code • GPUs for parallel parts where throughput wins • GPUs can be 10+X faster than CPUs for parallel code

  15. The Core: The basic processing block • The nVIDIA approach: • Called Stream Processor or CUDA core. Works on a single operation. • The AMD approach: • VLIW5: the GPU works on up to five operations • VLIW4: the GPU works on up to four operations • GCN (Graphics Core Next): 16-wide SIMD vector unit

  16. The Core: The basic processing block • The (failed) Intel Approach: • 512-bit SIMD units in x86 cores • Failed because of complex x86 cores and software ROP pipeline • The (new) Intel Approach: • Used in Sandy Bridge, Ivy Bridge, Haswell & Broadwell • 128 SIMD-8 32-bit registers

  17. The nVIDIA GPU Architecture Evolving • Streaming Multiprocessor (SM) 1.x on the Tesla Architecture • 8 CUDA Cores (Core) • 2 Super Function Units (SFU) • Dual schedulers and dispatch units • 1 to 512 or 768 threads active • Local register (32k) • 16 KB shared memory • 2 operations per cycle • Streaming Multiprocessor (SM) 2.0 on the Fermi Architecture (GF1xx) • 32 CUDA Cores (Core) • 4 Super Function Units (SFU) • Dual schedulers and dispatch units • 1 to 1536 threads active • Local register (32k) • 64 KB shared memory / Level 1 cache • 2 operations per cycle

  18. The nVIDIA GPU Architecture Evolving • Streaming Multiprocessor (SMX) 3.0 on the Kepler Architecture (Graphics) • 192 CUDA Cores (Core) • 8 DP CUDA Cores (DP Core) • 32 Super Function Units (SFU) • Four (simple) schedulers and eight dispatch units • 1 to 2048 threads active • Local register (32k) • 64 KB shared memory / Level 1 cache • 1 operation per cycle • Streaming Multiprocessor (SMX) 3.5 on the Kepler Architecture (Compute) • 192 CUDA Cores (Core) • 64 DP CUDA Cores (DP Core) • 32 Super Function Units (SFU) • Four (simple) schedulers and eight dispatch units • 1 to 2048 threads active • Local register (64k) • 64 KB shared memory / Level 1 cache • 48 KB Texture Cache • 1 operation per cycle

  19. Streaming Multiprocessor (SMM) – 4.0 • Streaming Multiprocessor (SMM) on Maxwell (Graphics) • 128 CUDA Cores (Core) • 4 DP CUDA Cores (DP Core) • 32 Super Function Units (SFU) • Four schedulers and eight dispatch units • 1 to 2048 active threads • Software controlled scheduling • Local register (64k) • 64 KB shared memory • 24 KB Level 1 / Texture Cache • 1 operation per cycle • GM107 / GM204

  20. Execution Pipes in SMM • Scalar MAD pipe • Float multiply, add, etc. • Integer operations • Conversions • Only one instruction per clock • Scalar SFU pipe • Special functions like sin, cos, log, etc. • Only one operation per four clocks • Load/Store pipe • CUDA has both global and local memory access through Load/Store • Access to shared memory • Texture units • Access to texture memory with texture cache (per SMM) • (diagram: L1 instruction cache, multithreaded instruction buffer, register file, L1 constant cache, shared memory, operand select, MAD and SFU pipes)
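  To make the pipes concrete, here is an illustrative kernel (not from the slides; the kernel and variable names are invented) in which different statements map to different SMM pipes:

__global__ void pipes_demo(const float *in, float *out)
{
    __shared__ float buf[256];                 /* assumes at most 256 threads per block */
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    float x = in[i];                           /* load/store pipe: global memory read   */
    float y = fmaf(x, 2.0f, 1.0f);             /* scalar MAD pipe: fused multiply-add   */
    float z = __sinf(y);                       /* scalar SFU pipe: special function     */

    buf[threadIdx.x] = z;                      /* load/store pipe: shared memory write  */
    __syncthreads();
    out[i] = buf[threadIdx.x];                 /* load/store pipe: global memory write  */
}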

  21. GPGPU (slides adapted from nVIDIA)

  22. What is GPGPU, really? • Idea: • Potential for very high performance at low cost • Architecture well suited for certain kinds of parallel applications (data parallel) • Demonstrations of 30-100X speedup over CPU • Early challenges: • Architectures very customized to graphics problems (e.g., vertex and fragment processors) • Programmed using graphics-specific programming models or libraries

  23. Previous GPGPU use, and limitations • Working with a Graphics API • Special cases with an API like Microsoft Direct3D or OpenGL • Addressing modes • Limited by texture size • Shader capabilities • Limited outputs from the available shader programs • Instruction sets • No integer or bit operations • Communication is limited • Between pixels, per thread, per shader, per context • (diagram of the old fragment-shader model: input registers, fragment program, texture, constants, temp registers, output registers, frame-buffer memory)

  24. Heterogeneous computing is catching on… Engineering Simulation Medical Imaging Data Intensive Analytics Scientific Simulation Financial Analysis Biomedical Informatics Electronic Design Automation Computer Vision Digital Video Processing Digital Audio Processing Statistical Modeling Ray Tracing Rendering Interactive Physics Numerical Methods

  25. nVIDIA CUDA • “Compute Unified Device Architecture” • General purpose programming model • The user starts several batches of threads on a GPU • The GPU is in this case a dedicated super-threaded, massively data parallel co-processor • Software stack: graphics driver, language compilers (Toolkit), and tools (SDK) • The graphics driver loads programs into the GPU • All drivers from nVIDIA now support CUDA • The interface is designed for computing (no graphics) • “Guaranteed” maximum download & readback speeds • Explicit GPU memory management

  26. Outline • The CUDA Programming Model • Basic concepts and data types • An example application: • The good old Motion JPEG implementation! • Profiling with CUDA • Make an example program!

  27. The CUDA Programming Model • The GPU is viewed as a compute device that: • Is a coprocessor to the CPU, referred to as the host • Has its own DRAM called device memory • Runs many threads in parallel • Data-parallel parts of an application are executed on the device as kernels, which run in parallel on many threads • Differences between GPU and CPU threads • GPU threads are extremely lightweight • Very little creation overhead • GPU needs 1000s of threads for full efficiency • Multi-core CPU needs only a few

  28. CUDA C – Execution Model • Integrated host + device C program • Serial or modestly parallel parts in host C code • Highly parallel parts in device SPMD kernel C code • Execution alternates: Serial Code (host) → Parallel Kernel (device) KernelA<<< nBlk, nTid >>>(args); → Serial Code (host) → Parallel Kernel (device) KernelB<<< nBlk, nTid >>>(args);
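  A minimal sketch of this pattern, using the KernelA launch syntax from the slide (the kernel body, sizes, and variable names are illustrative assumptions, not course code):

#include <cuda_runtime.h>

__global__ void KernelA(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 2.0f;              /* every thread handles one element        */
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));    /* serial host code: allocate device memory */

    int nTid = 256;                            /* threads per block                        */
    int nBlk = (n + nTid - 1) / nTid;          /* blocks in the grid                       */
    KernelA<<<nBlk, nTid>>>(d_data, n);        /* highly parallel kernel on the device     */
    cudaDeviceSynchronize();                   /* host waits for the kernel to finish      */

    cudaFree(d_data);                          /* serial host code again                   */
    return 0;
}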

  29. Thread Batching: Grids and Blocks • A kernel is executed as a grid of thread blocks • All threads share data memory space • A thread block is a batch of threads that can cooperate with each other by: • Synchronizing their execution • Unsynchronized execution is very bad for performance! • Efficiently sharing data through a low latency shared memory • Two threads from two different blocks cannot cooperate • (diagram: the host launches Kernel 1 on Grid 1, a 3 x 2 array of blocks, and Kernel 2 on Grid 2; each block, e.g. Block (1, 1), is a 5 x 3 array of threads)
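  The following sketch (names are illustrative) shows the cooperation rules in practice: threads inside one block share data through shared memory and synchronize with __syncthreads(), while different blocks stay independent and each produce their own result.

__global__ void block_sum(const float *in, float *block_results)
{
    __shared__ float partial[256];             /* low-latency, per-block shared memory     */
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    partial[tid] = in[i];
    __syncthreads();                           /* all threads in the block meet here       */

    /* Tree reduction within the block (assumes blockDim.x == 256, a power of two). */
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        block_results[blockIdx.x] = partial[0];  /* one result per independent block       */
}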

  30. CUDA Device Memory Space Overview • Each thread can: • R/W per-thread registers • R/W per-thread local memory • R/W per-block shared memory • R/W per-grid global memory • Read only per-grid constant memory • Read only per-grid texture memory • The host can R/W global, constant, and texture memories • (diagram: each thread has its own registers and local memory, each block has its own shared memory, and the whole grid shares global, constant, and texture memory, which the host can also access)
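  In CUDA C these spaces correspond to declaration qualifiers. The qualifiers below are standard CUDA keywords; the variable names and sizes are illustrative only.

__constant__ float coeffs[64];                 /* per-grid constant memory, read-only in kernels */
__device__ float global_scratch[1024];         /* per-grid global memory                          */

__global__ void memory_spaces_demo(const float *global_in)   /* pointer into global memory       */
{
    int i = threadIdx.x;                       /* plain scalars live in per-thread registers      */
    float local_array[8];                      /* may be spilled to per-thread local memory       */
    __shared__ float tile[256];                /* per-block shared memory                          */

    tile[i] = global_in[i] * coeffs[i % 64];
    __syncthreads();
    local_array[0] = tile[i];
    global_scratch[i] = local_array[0];
}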

  31. Global, Constant, and Texture Memories • Global memory: • Main means of communicating R/W data between host and device • Contents visible to all threads • Texture and constant memories: • Constants initialized by the host • Contents visible to all threads • (diagram: same memory hierarchy as on the previous slide)
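  On the host side this is done with standard runtime calls. A sketch (buffer and symbol names are made up for the example): cudaMemcpy moves R/W data into global memory, and cudaMemcpyToSymbol initializes a constant-memory symbol that every thread can then read.

#include <cuda_runtime.h>

__constant__ float quant_table[64];            /* visible to all threads, read-only on the device */

void setup(const float *h_input, float *d_input, const float *h_table, size_t n)
{
    /* Global memory: the main channel for R/W data between host and device. */
    cudaMemcpy(d_input, h_input, n * sizeof(float), cudaMemcpyHostToDevice);

    /* Constant memory: initialized by the host, then read by every thread.  */
    cudaMemcpyToSymbol(quant_table, h_table, 64 * sizeof(float));
}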

  32. Terminology Recap • Device = GPU = set of multiprocessors • Multiprocessor = set of processors & shared memory • Kernel = program running on the GPU • Grid = array of thread blocks that execute a kernel • Thread block = group of SIMD threads that execute a kernel and can communicate via shared memory

  33. Access Times • Register – Dedicated HW – Single cycle • Shared Memory – Dedicated HW – Single cycle • Local Memory – DRAM, no cache – “Slow” • Global Memory – DRAM, no cache – “Slow” • Constant Memory – DRAM, cached, 1…10s…100s of cycles, depending on cache locality • Texture Memory – DRAM, cached, 1…10s…100s of cycles, depending on cache locality

  34. Terminology Recap • Device = GPU = set of multiprocessors • Multiprocessor = set of processors & shared memory • Kernel = program running on the GPU • Grid = array of thread blocks that execute a kernel • Thread block = group of SIMD threads that execute a kernel and can communicate via shared memory

  35. Global Memory Bandwidth • (figure: ideal vs. real global memory bandwidth)

  36. Conflicting Data Accesses Cause Serialization and Delays • Massively parallel execution cannot afford serialization • Contention in accessing critical data causes serialization
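  A small sketch of the problem and a common workaround (kernel and variable names are invented for illustration): when every thread atomically updates the same word, the hardware serializes those updates; accumulating per block in shared memory first reduces the contention to one global update per block.

__global__ void contended_count(const unsigned char *pixels, unsigned int *count, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && pixels[i] > 128)
        atomicAdd(count, 1u);                  /* all matching threads fight over one word: serialized */
}

__global__ void per_block_count(const unsigned char *pixels, unsigned int *count, int n)
{
    __shared__ unsigned int block_count;
    if (threadIdx.x == 0) block_count = 0;
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && pixels[i] > 128)
        atomicAdd(&block_count, 1u);           /* contention limited to threads of one block           */
    __syncthreads();

    if (threadIdx.x == 0)
        atomicAdd(count, block_count);         /* only one global update per block                      */
}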

  37. Some Information on the Toolkit

  38. Compilation • Any source file containing CUDA language extensions must be compiled with nvcc • nvcc is a compiler driver • Works by invoking all the necessary tools and compilers like cudacc, g++, etc. • nvcc can output: • Either C code • That must then be compiled with the rest of the application using another tool • Or object code directly
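  As a typical example (file names are illustrative, not from the course framework): a single-file program can be built with "nvcc -arch=sm_50 -o demo demo.cu" and run with "./demo"; the -arch=sm_50 flag matches the compute 5.0 capability of the lab GPUs, and "nvcc -c" produces an object file that can be linked together with host code compiled by g++.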

  39. Linking & Profiling • Any executable with CUDA code requires two dynamic libraries: • The CUDA runtime library (cudart) • The CUDA core library (cuda) • Several tools are available to optimize your application • nVIDIA CUDA Visual Profiler • nVIDIA Occupancy Calculator • NVIDIA Parallel Nsight for Visual Studio and Eclipse

  40. Before you start… • Four lines have to be added to your group user's .bash_profile or .bashrc file:
PATH=$PATH:/usr/local/cuda-6.5/bin
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-6.5/lib64:/lib
export PATH
export LD_LIBRARY_PATH
  • Code samples are installed with CUDA • Copy and build them in your user's home directory

  41. Some useful resources • NVIDIA CUDA Programming Guide 6.5 http://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf • NVIDIA CUDA C Best Practices Guide 6.5 http://docs.nvidia.com/cuda/pdf/CUDA_C_Best_Practices_Guide.pdf • Tuning CUDA Applications for Maxwell http://docs.nvidia.com/cuda/pdf/Maxwell_Tuning_Guide.pdf • Parallel Thread Execution ISA – Version 4.1 http://docs.nvidia.com/cuda/pdf/ptx_isa_4.1.pdf • NVIDIA CUDA Getting Started Guide for Linux http://docs.nvidia.com/cuda/pdf/CUDA_Getting_Started_Linux.pdf

  42. Example: Motion JPEG Encoding

  43. 14 different MJPEG encoders on GPU • (figure: results on an Nvidia GeForce GPU) • Problems: • Only used global memory • Too much synchronization between threads • Host part of the code not optimized

  44. Profiling a Motion JPEG encoder on x86 • A small selection of DCT algorithms: • 2D-Plain: standard forward 2D DCT • 1D-Plain: two consecutive 1D transformations with a transpose in between and after • 1D-AAN: optimized version of 1D-Plain • 2D-Matrix: 2D-Plain implemented with matrix multiplication • Single-threaded application profiled on an Intel Core i5 750
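  For reference, here is a plain C sketch of the 1D-Plain idea for one 8x8 block: an unoptimized 8-point DCT-II applied to the rows, a transpose, the same pass again, and a final transpose. It is only illustrative of the scheme, not the course code.

#include <math.h>
#define N 8

/* Unoptimized 8-point DCT-II applied to every row of an 8x8 block. */
static void dct1d_rows(const float in[N][N], float out[N][N])
{
    const float PI = 3.14159265358979f;
    for (int r = 0; r < N; r++) {
        for (int k = 0; k < N; k++) {
            float c = (k == 0) ? sqrtf(1.0f / N) : sqrtf(2.0f / N);
            float sum = 0.0f;
            for (int n = 0; n < N; n++)
                sum += in[r][n] * cosf((2 * n + 1) * k * PI / (2.0f * N));
            out[r][k] = c * sum;
        }
    }
}

static void transpose(const float in[N][N], float out[N][N])
{
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            out[c][r] = in[r][c];
}

/* 2D DCT as two 1D passes with a transpose in between and after (the "1D-Plain" scheme). */
void dct2d_block(const float pixels[N][N], float coeffs[N][N])
{
    float tmp1[N][N], tmp2[N][N];
    dct1d_rows(pixels, tmp1);      /* 1D DCT along rows              */
    transpose(tmp1, tmp2);         /* transpose                      */
    dct1d_rows(tmp2, tmp1);        /* 1D DCT along (former) columns  */
    transpose(tmp1, coeffs);       /* transpose back                 */
}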

  45. Optimizing for GPU: use the memory correctly!! • Several different types of memory on GPU: • Global memory • Constant memory • Texture memory • Shared memory • First Commandment when using the GPUs: • Select the correct memory space, AND use it correctly!

  46. How about using a better algorithm?? • Used CUDA Visual Profiler to isolate DCT performance • 2D-Plain Optimized is optimized for GPU: • Shared memory • Coalesced memory access • Loop unrolling • Branch prevention • Asynchronous transfers • Second Commandment when using the GPUs: • Choose an algorithm suited for the architecture!
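  To illustrate some of the techniques listed above (this is not the course's actual 2D-Plain Optimized kernel; names, block size, and coefficients are placeholders): coalescing-friendly loads from global memory into shared memory, an unrolled inner loop, and no divergent branches in the hot path.

#define BLOCK_W 8

/* Launch with dim3 block(BLOCK_W, BLOCK_W), one thread block per 8x8 macroblock. */
__global__ void row_transform(const float *in, float *out, int width)
{
    __shared__ float tile[BLOCK_W][BLOCK_W];

    int x = threadIdx.x;
    int y = threadIdx.y;
    int base = (blockIdx.y * BLOCK_W + y) * width + blockIdx.x * BLOCK_W;

    tile[y][x] = in[base + x];                 /* consecutive threads read consecutive addresses */
    __syncthreads();

    float acc = 0.0f;
    #pragma unroll                             /* loop unrolling: trip count known at compile time */
    for (int k = 0; k < BLOCK_W; k++)
        acc += tile[y][k] * ((k == x) ? 1.0f : 0.5f);   /* placeholder coefficients; a real DCT
                                                           would read a constant-memory table     */
    __syncthreads();

    out[base + x] = acc;                       /* consecutive threads write consecutive addresses */
}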

  47. Effect of offloading VLC to the GPU • VLC (Variable Length Coding) can also be offloaded: • One thread per macro block • CPU does bitstream merge • Even though algorithm is not perfectly suited for the architecture, offloading effect is still important!
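  A rough sketch of that partitioning only (the data structures are hypothetical, not the course implementation): each thread encodes one macroblock into its own output slot, and the CPU later concatenates the slots into one bitstream, handling bit alignment across macroblock boundaries.

struct MacroblockCode {
    unsigned char bytes[256];      /* worst-case space for one macroblock's codes */
    int           length;          /* number of valid bytes produced              */
};

__global__ void vlc_encode(const short *quantized, MacroblockCode *out, int num_macroblocks)
{
    int mb = blockIdx.x * blockDim.x + threadIdx.x;   /* one thread per macroblock */
    if (mb >= num_macroblocks)
        return;

    /* ... run-length / Huffman coding of macroblock 'mb' into out[mb].bytes ... */
    out[mb].length = 0;            /* placeholder: a real encoder fills bytes and length */
}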
