
Parallel Programming with CUDA


Presentation Transcript


  1. Parallel Programming with CUDA
     Matthew Guidry, Charles McClendon

  2. Introduction to CUDA
     • CUDA is a platform for performing massively parallel computations on graphics accelerators
     • CUDA was developed by NVIDIA
     • It was first available with their G8X line of graphics cards
     • Approximately 1 million CUDA-capable GPUs are shipped every week
     • CUDA presents a unique opportunity to develop widely deployed parallel applications

  3. CUDA
     • Because of the Power Wall, the Latency Wall, etc. (the free lunch is over), we must find a way to keep our processor-intensive programs from slowing to a crawl
     • With CUDA, it is possible to do things like simulating networks of brain neurons
     • CUDA brings the possibility of ubiquitous supercomputing to the everyday computer

  4. CUDA
     • CUDA is supported on all of NVIDIA’s G8X and later graphics cards
     • The current CUDA GPU architecture is branded Tesla
     • 8-series GPUs offer 50-200 GFLOPS

  5. CUDA Compilation
     • As a programming model, CUDA is a set of extensions to ANSI C
     • CPU code is compiled by the host C compiler and the GPU code (kernel) is compiled by the CUDA compiler; separate binaries are produced
       $ nvcc -o executable source.cu
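
For illustration only, a minimal source.cu of the kind the command above would compile might look like the sketch below. The kernel name, sizes, and launch configuration are our own, not from the presentation; nvcc splits the __global__ function off for the GPU and hands the rest to the host compiler.

      #include <cstdio>

      // Device code (kernel): compiled by nvcc for the GPU
      __global__ void scaleKernel(float *data, float factor) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          data[i] *= factor;
      }

      // Host code: compiled by the host C/C++ compiler
      int main(void) {
          const int n = 256;
          float *d_data;
          cudaMalloc(&d_data, n * sizeof(float));
          cudaMemset(d_data, 0, n * sizeof(float));

          // Launch 4 blocks of 64 threads each
          scaleKernel<<<4, 64>>>(d_data, 2.0f);
          cudaDeviceSynchronize();

          cudaFree(d_data);
          printf("kernel finished\n");
          return 0;
      }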

  6. CUDA Stack

  7. Limitations of CUDA
     • Tesla does not fully support the IEEE spec for double-precision floating-point operations (although double precision will be resolved with Fermi)
     • Code is only supported on NVIDIA hardware
     • No use of recursive functions (workarounds exist)
     • Bus latency between the host CPU and the GPU

  8. Thread Hierarchy
     • Thread – distributed by the CUDA runtime (identified by threadIdx)
     • Warp – a scheduling unit of up to 32 threads
     • Block – a user-defined group of 1 to 512 threads (identified by blockIdx)
     • Grid – a group of one or more blocks; a grid is created for each CUDA kernel function
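
A short sketch of how these identifiers are typically combined in a kernel; the function and buffer names here are illustrative, not from the slides.

      // Each thread derives its own global index from its block and thread IDs
      __global__ void fillIndex(int *out) {
          int globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
          out[globalIdx] = globalIdx;
      }

      // Host side: a grid of 8 blocks, each block a user-defined group of 128 threads.
      // The hardware then schedules each block's threads in warps of up to 32.
      // fillIndex<<<8, 128>>>(d_out);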

  9. CUDA Memory Hierarchy
     • The CUDA platform has three primary memory types
     • Local memory – per-thread memory for automatic variables and register spilling
     • Shared memory – per-block, low-latency memory that allows intra-block data sharing and synchronization; threads can safely share data through this memory and can perform barrier synchronization through __syncthreads()
     • Global memory – device-level memory that may be shared between blocks or grids
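
A minimal sketch of the shared-memory and __syncthreads() pattern described above, assuming a block size of 256 threads; the kernel and array names are our own.

      // Reverse the elements within each block using per-block shared memory.
      __global__ void reverseWithinBlock(int *data) {
          __shared__ int tile[256];           // shared: visible to every thread in this block
          int local  = threadIdx.x;
          int global = blockIdx.x * blockDim.x + local;

          tile[local] = data[global];         // each thread loads one element
          __syncthreads();                    // barrier: wait until the whole tile is loaded

          data[global] = tile[blockDim.x - 1 - local];   // safely read another thread's element
      }

      // Launch with exactly 256 threads per block to match the tile size:
      // reverseWithinBlock<<<numBlocks, 256>>>(d_data);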

  10. Moving Data…
     • CUDA allows us to copy data from one memory type to another; this includes dereferencing pointers, even in the host’s memory (main system RAM)
     • To facilitate this data movement CUDA provides cudaMemcpy()
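
A minimal sketch of that host-to-device round trip with cudaMemcpy(); the buffer names and size are assumptions made for the example.

      const int n = 1024;
      size_t bytes = n * sizeof(float);

      float *h_buf = (float *)malloc(bytes);   // host memory (main system RAM)
      float *d_buf;
      cudaMalloc(&d_buf, bytes);               // device (GPU global) memory

      cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);   // host -> device
      // ... launch kernels that read and write d_buf ...
      cudaMemcpy(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost);   // device -> host

      cudaFree(d_buf);
      free(h_buf);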

  11. Optimizing Code for CUDA
     • Prevent thread starvation by breaking your problem down (128 execution units are available for use, and thousands of threads may be in flight); see the sketch after this list
     • Utilize shared memory to avoid latency problems (communicating with system memory is slow)
     • Keep in mind there is no built-in way to synchronize threads in different blocks
     • Avoid thread divergence in warps by blocking threads with similar control paths
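
One common way to break a large problem down over a fixed launch size is a grid-stride loop. This technique is not from the slides; the SAXPY kernel below is only a sketch, with names chosen for the example.

      // Each thread processes many elements, striding by the total thread count,
      // so the launch size does not have to match the problem size.
      __global__ void saxpy(int n, float a, const float *x, float *y) {
          int stride = blockDim.x * gridDim.x;
          for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride) {
              y[i] = a * x[i] + y[i];
          }
      }

      // e.g. saxpy<<<128, 256>>>(n, 2.0f, d_x, d_y);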

  12. Code Example
     • SAX: Symbolic Aggregate approXimation, a simple function for time series motif discovery
     • Will be explained more in depth later…

  13. Kernel Functions
     • A kernel function is the basic unit of work within a CUDA thread
     • Kernel functions are CUDA extensions to ANSI C that are compiled by the CUDA compiler and the object code generator
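
For example, the extensions in question are largely the function-type qualifiers and the launch syntax; a brief sketch with illustrative names:

      // __device__ functions run on the GPU and are callable only from GPU code
      __device__ float square(float x) { return x * x; }

      // __global__ marks a kernel: it runs on the GPU and is launched from host code
      __global__ void squareAll(float *data) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          data[i] = square(data[i]);
      }

      // Host code launches the kernel with the <<<grid, block>>> extension:
      // squareAll<<<32, 128>>>(d_data);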

  14. Kernel Limitations
     • There must be no recursion; there is no call stack
     • There must be no static variable declarations
     • Functions must have a fixed (non-variable) number of arguments

  15. CUDA Warp
     • CUDA utilizes SIMT (Single Instruction, Multiple Thread)
     • Warps are groups of 32 threads
     • Each warp receives a single instruction and “broadcasts” it to all of its threads
     • CUDA provides “zero-overhead” warp and thread scheduling; the overhead of thread creation is on the order of 1 clock
     • Because a warp receives a single instruction, it will diverge and converge as each thread branches independently
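
To make the divergence point concrete, the two kernels below sketch a divergent branch versus a warp-uniform branch; the kernel names and conditions are our own, chosen only to illustrate the idea.

      // Divergent: even and odd lanes of the same warp take different branches,
      // so the warp executes both paths one after the other.
      __global__ void divergent(float *out) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (threadIdx.x % 2 == 0) out[i] = 0.0f;
          else                      out[i] = 1.0f;
      }

      // Warp-uniform: all 32 threads of a warp take the same branch,
      // so no serialization occurs.
      __global__ void warpUniform(float *out) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if ((threadIdx.x / 32) % 2 == 0) out[i] = 0.0f;
          else                             out[i] = 1.0f;
      }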

  16. The GPU

  17. Myths About CUDA
     • Myth: GPUs are the only processors in a CUDA application
       – No; the CUDA platform is a co-processing model, using both the CPU and the GPU
     • Myth: GPUs have very wide (1000s) SIMD machines
       – No; a CUDA warp is only 32 threads
     • Myth: Branching is not possible on GPUs
       – Incorrect
     • Myth: GPUs are power-inefficient
       – No; performance per watt is quite good
     • Myth: CUDA is only for C or C++ programmers
       – Not true; there are third-party wrappers for Java, Python, and more

  18. Different Types of CUDA Applications
