
CUDA 5.0



Presentation Transcript


  1. CUDA 5.0
     By Peter Holvenstot, CS6260

  2. CUDA 5.0
     - Latest iteration of the CUDA toolkit
     - Requires Compute Capability 3.0
     - Compatible Kepler cards being installed @WMU

  3. Major New Features
     - GPUDirect: allows direct memory access
     - GPU object linking: libraries for GPU code
     - Dynamic parallelism: kernels inside kernels

  4. GPUDirect
     - Allows direct memory access over the PCIe bus
     - Third-party device access now supported
     - Requires use of pinned memory
     - DMAs can be chained across a network

  5. GPUDirect

  6. Pinned Memory
     - malloc() - unpinned, can be paged out
     - cudaHostAlloc() - pinned, cannot be paged out
     - Takes longer to allocate, but enables features requiring DMA and increases copy performance
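The difference can be sketched as follows (a minimal host-side sketch; error checking omitted and the buffer size is arbitrary):

```cuda
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;

    // Pageable allocation: the OS may swap it out, so host-to-device
    // copies are staged through an internal pinned buffer (slower).
    float *pageable = (float *)malloc(bytes);

    // Pinned (page-locked) allocation: slower to allocate, but the GPU
    // can DMA directly from it, enabling faster and asynchronous copies.
    float *pinned = nullptr;
    cudaHostAlloc(&pinned, bytes, cudaHostAllocDefault);

    float *dev = nullptr;
    cudaMalloc(&dev, bytes);

    cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice); // staged copy
    cudaMemcpy(dev, pinned,   bytes, cudaMemcpyHostToDevice); // direct DMA

    cudaFree(dev);
    cudaFreeHost(pinned);   // pinned memory is freed with cudaFreeHost()
    free(pageable);
    return 0;
}
```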

  7. Kernel Linking
     - Kernels now support compilation to object (.obj) files
     - Allows compiling into/against static libraries
     - Allows closed-source distribution of libraries
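With CUDA 5's separate compilation, the build steps look roughly like this (file names are placeholders; nvcc emits .o on Linux and .obj on Windows):

```shell
# Compile device code to an object file (-dc enables relocatable device code)
nvcc -arch=sm_30 -dc kernels.cu -o kernels.o

# Archive it into a static library that can be shipped without source
nvcc -lib kernels.o -o libkernels.a

# Link an application against the device library
nvcc -arch=sm_30 main.cu libkernels.a -o app
```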

  8. Dynamic Parallelism
     - CUDA 4.1: __device__ functions may make inline-able recursive calls
     - However, __global__ functions (kernels) cannot
     - CUDA 5: kernels running on the GPU may launch additional kernels
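A minimal sketch of a kernel launching a kernel (illustrative names; dynamic parallelism assumes a device of compute capability 3.5 or higher and compilation with -rdc=true and -lcudadevrt):

```cuda
#include <cstdio>

// Child kernel: launched from the GPU rather than the CPU.
__global__ void child(int parentId) {
    printf("child of parent thread %d, thread %d\n", parentId, threadIdx.x);
}

// Parent kernel: under dynamic parallelism, a __global__ function
// may itself launch kernels with the usual <<<...>>> syntax.
__global__ void parent() {
    child<<<1, 4>>>(threadIdx.x);   // each parent thread launches a child grid
}

int main() {
    parent<<<1, 2>>>();
    cudaDeviceSynchronize();        // host waits for the whole launch tree
    return 0;
}
```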

  9. Dynamic Parallelism
     - Most important feature of this release
     - Reduces the need for synchronization
     - Allows program flow to be controlled by the GPU
     - Allows recursion and subdivision of problems

  10. Dynamic Parallelism
     - CPU code can now become a kernel
     - Kernel calls can be used as tasks
     - The GPU controls kernel launch, flow, and scheduling
     - Increases practical thread count to thousands

  11. Dynamic Parallelism
     - Interesting data is not uniformly distributed
     - Dynamic parallelism can launch additional threads in interesting regions
     - Allows higher resolution in critical areas without slowing down the rest

  12. Source: NVIDIA

  13. Dynamic Parallelism Nested Dependencies Source: NVIDIA

  14. Dynamic Parallelism
     - Scheduling can be controlled with streams
     - No new concurrency guarantees
     - Kernels launched into the same stream execute in order
     - Named streams allow, but do not guarantee, concurrent execution
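A sketch of device-side named streams (the worker kernel is a placeholder; streams created on the device must use the cudaStreamNonBlocking flag):

```cuda
__global__ void worker(int task);   // placeholder child kernel

__global__ void parent() {
    // Child launches into the same stream serialize; launching into
    // separate named streams allows them to run concurrently.
    cudaStream_t s1, s2;
    cudaStreamCreateWithFlags(&s1, cudaStreamNonBlocking);
    cudaStreamCreateWithFlags(&s2, cudaStreamNonBlocking);

    worker<<<1, 32, 0, s1>>>(0);    // may run concurrently with...
    worker<<<1, 32, 0, s2>>>(1);    // ...this launch (not guaranteed)

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
}
```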

  15. Dynamic Parallelism
     - Nested dependencies: cudaDeviceSynchronize()
     - Can be called inside a kernel
     - Synchronizes on all launches by any thread in the block
     - Does NOT imply __syncthreads()!
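The distinction can be sketched as follows (illustrative names; device-side cudaDeviceSynchronize() waits on child kernels, while __syncthreads() is still needed to synchronize the parent's own threads):

```cuda
__global__ void child(int *out) {
    out[threadIdx.x] = threadIdx.x;
}

__global__ void parent(int *out) {
    if (threadIdx.x == 0) {
        child<<<1, 32>>>(out);
        // Waits for kernels launched by this block, making the
        // child's writes visible to the parent...
        cudaDeviceSynchronize();
    }
    // ...but it is not a barrier for the parent's own threads,
    // so an explicit __syncthreads() is still required before
    // the other threads read `out`.
    __syncthreads();
}
```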

  16. Dynamic Parallelism
     - A kernel launch implies a memory sync operation
     - The child sees parent state at the time of launch
     - The parent sees the child's writes after synchronizing
     - Local and shared memory are private and cannot be shared with children

  17. Questions?

  18. Sources
     - http://docs.nvidia.com/cuda/pdf/CUDA_Dynamic_Parallelism_Programming_Guide.pdf
     - http://docs.nvidia.com/cuda/gpudirect-rdma/index.html
     - http://developer.download.nvidia.com/GTC/PDF/GTC2012/PresentationPDF/S0338-GTC2012-CUDA-Programming-Model.pdf
     - https://developer.nvidia.com/content/trenches-gtc-cuda-5-and-beyond
