CUDA DLI Training Courses at GTC 2019



Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.

    Presentation Transcript
    EXPECTED TO BE THE BIGGEST YET, GTC FEATURES SESSIONS AND DLI TRAINING ON THE MOST IMPORTANT TOPICS IN COMPUTING TODAY

    WHY DLI HANDS-ON TRAINING?

    ● LEARN HOW TO BUILD APPS ACROSS INDUSTRY SEGMENTS

    ● GET HANDS-ON EXPERIENCE USING INDUSTRY-STANDARD SOFTWARE, TOOLS & FRAMEWORKS

    ● GAIN EXPERTISE THROUGH CONTENT DESIGNED WITH INDUSTRY LEADERS

    FUNDAMENTALS OF ACCELERATED COMPUTING WITH CUDA PYTHON

    This course explores how to use Numba—the just-in-time, type-specializing Python function compiler—to accelerate Python programs to run on massively parallel NVIDIA GPUs. You'll learn how to:

    ● Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)

    ● Use Numba to create and launch custom CUDA kernels

    ● Apply key GPU memory management techniques

    ADD TO MY SCHEDULE

    ACCELERATING APPLICATIONS WITH CUDA C/C++

    The CUDA computing platform enables acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. Learn how to accelerate C/C++ applications by:

    ● Exposing parallelization opportunities in CPU-only applications and refactoring them to run in parallel on GPUs

    ● Successfully managing memory

    ● Utilizing the CUDA parallel thread hierarchy to further increase performance

    ADD TO MY SCHEDULE
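The refactoring pattern these bullets describe can be sketched as a minimal vector addition: a serial CPU loop becomes a kernel where each thread handles one element. This is an illustrative sketch, not from the course; it assumes a CUDA toolkit and GPU, and the function names are made up.

```cuda
#include <cstdio>

// A CPU-only for-loop refactored into a CUDA kernel: each thread
// computes one element instead of the loop iterating serially.
__global__ void add(int n, const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified Memory: one pointer usable from both CPU and GPU
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Thread hierarchy: enough 256-thread blocks to cover n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(n, a, b, c);
    cudaDeviceSynchronize();   // wait for the kernel before reading c

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```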

    CUDA ON DRIVE AGX

    Explore how to write CUDA code and run it on DRIVE AGX. You'll learn about:

    ● The hardware architecture of DRIVE AGX

    ● Memory management on the iGPU and dGPU

    ● GPU acceleration for inferencing

    ADD TO MY SCHEDULE

    ACCELERATING DATA SCIENCE WORKFLOWS WITH RAPIDS

    The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. Learn how to GPU-accelerate your data science applications by:

    ● Utilizing key RAPIDS libraries like cuDF & cuML

    ● Learning techniques and approaches to end-to-end data science

    ● Understanding key differences between CPU-driven and GPU-driven data science

    ADD TO MY SCHEDULE

    DEBUGGING AND OPTIMIZING CUDA APPLICATIONS WITH NSIGHT PRODUCTS ON LINUX TRAINING

    Learn how NVIDIA tools can improve development productivity by narrowing down bugs and spotting areas of optimization in CUDA applications on a Linux x86_64 system.

    Through a set of exercises, you'll gain hands-on experience using NVIDIA's new Nsight Systems and Nsight Compute tools for debugging, narrowing down memory issues, and optimizing a CUDA application.

    ADD TO MY SCHEDULE

    ACCELERATED DATA SCIENCE PIPELINE WITH RAPIDS ON AZURE

    Learn how to deploy RAPIDS machine learning jobs on NVIDIA GPUs using Microsoft Azure and explore:

    ● Azure Portal Permits: a convenient way to perform functional experimentation with RAPIDS

    ● Azure Machine Learning (AML) SDK: enables a batch experimentation mode in which the user can set ranges on different parameters to be run on a RAPIDS program, saving the results for later analysis

    ADD TO MY SCHEDULE

    HIGH PERFORMANCE COMPUTING USING CONTAINERS

    Learn to build, deploy, and run containers in an HPC environment.

    During this session, you will learn: the basics of building container images with Docker and Singularity, how to use HPC Container Maker (HPCCM) to make it easier to build container images for HPC applications, and how to use containers from the NGC with Singularity.

    ADD TO MY SCHEDULE
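The HPCCM workflow mentioned above can be sketched with a short recipe. This is a hypothetical recipe, not one from the session; it assumes the `hpccm` Python package. An HPCCM recipe is a small Python script built from high-level building blocks, which the `hpccm` tool turns into either a Dockerfile or a Singularity definition file:

```python
# Hypothetical HPCCM recipe (recipe.py). Generate container specs with:
#   hpccm --recipe recipe.py --format docker      > Dockerfile
#   hpccm --recipe recipe.py --format singularity > Singularity.def
Stage0 += baseimage(image='nvcr.io/nvidia/cuda:10.1-devel-ubuntu18.04')
Stage0 += gnu()        # GNU compiler toolchain building block
Stage0 += openmpi()    # OpenMPI building block
```

The same recipe produces both container formats, which is the point of HPCCM: the HPC software stack is described once, independent of the container runtime.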

    INTRODUCTION TO CUDA PYTHON WITH NUMBA

    Explore an introduction to Numba, a just-in-time function compiler that allows developers to utilize the CUDA platform in Python applications. You'll learn how to:

    ● Decorate Python functions to be compiled by Numba

    ● Use Numba to GPU-accelerate NumPy ufuncs

    ADD TO MY SCHEDULE

    CUDA PROGRAMMING IN PYTHON WITH NUMBA AND CUPY

    Combining Numba, an open source compiler that can translate Python functions for execution on the GPU, with the CuPy GPU array library, a nearly complete implementation of the NumPy API for CUDA, creates a high-productivity GPU development environment.

    Learn the basics of using Numba with CuPy, techniques for automatically parallelizing custom Python functions on arrays, and how to create and launch CUDA kernels entirely from Python.

    ADD TO MY SCHEDULE

    REGISTER TODAY FOR GTC AND EXPLORE THE FULL LIST OF CUDA TRAINING, TALKS & EXPERT SESSIONS

    LEARN MORE