HPC Opportunities in Deep Learning - Greg Diamos



Read how deep learning has impacted HPC, see some published results, and learn about future opportunities in this presentation.

    Presentation Transcript

    HPC Opportunities in Deep Learning
    Greg Diamos, SC16

    AN OVERVIEW …

    1. Why is Deep Learning Important Now in HPC?
    2. Published Results with ImageNet, Google DeepMind, and Baidu's AI Lab.
    3. Getting Started with Deep Learning in HPC.
    4. Future Directions and Opportunities for Growth with HPC and Deep Learning.

    Source: Greg Diamos SC16 Talk

    Why is Deep Learning Important Now in HPC?

    Before, we had no idea how to train neural networks. The prevailing opinion at the time was that they were impossible to train. But now we have powerful tools that we can start applying to problem after problem, making progress on problems that are incredibly, inherently difficult.

    Content Source: Greg Diamos SC16 Talk

    Image Source: NVIDIA

    The ImageNet Challenge

    We first found success in the ImageNet challenge, in which a system is given images and has to produce a corresponding label. The challenge covers a very large dataset of images classified into a thousand different categories. We have approached human-level accuracy on this task with deep learning algorithms.

    Content Source: Greg Diamos SC16 Talk

    Image Source: NVIDIA, Greg Diamos SC16 Talk
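
To make the task concrete, here is a minimal sketch (my own illustration, not code from the talk) of ImageNet-style classification: an image goes in and the network produces scores over 1,000 categories. It assumes PyTorch with a recent torchvision; the pretrained ResNet-50 and the file name "example.jpg" are arbitrary choices.

```python
# A minimal sketch of ImageNet-style classification, assuming PyTorch and a
# recent torchvision. The pretrained ResNet-50 and "example.jpg" are
# illustrative choices, not the systems described in the talk.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing: resize, crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")   # any input photo
batch = preprocess(image).unsqueeze(0)             # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)                          # shape: (1, 1000)

# The prediction is an index into the 1,000 ImageNet categories.
print(logits.argmax(dim=1).item())
```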

    DeepMind at Google

    Just last year, a deep neural network defeated one of the best human players at the game of Go. This is a game with an absolutely enormous optimization space; there is no way to search over all possible combinations.

    Content Source: Greg Diamos SC16 Talk

    Image Source: Greg Diamos SC16 Talk


    Baidu's AI Lab

    At our lab, we can approach human-level accuracy on many test sets. For example, when you build a speech recognition system, you would traditionally hand-design all of its components. You would not have one neural network; you would have five or six components, all hand-designed by linguists, speech and signal-processing experts, and mathematicians.


    We cut all that out.

    Content Source: Greg Diamos SC16 Talk


    Baidu's AI Lab Cont.

    We can now take a team of five people who don't speak any Mandarin and produce a speech recognition system that beats all of the existing systems we have, and actually does better than a human grader.


    But these things are incredibly computationally intensive to train.

    Content Source: Greg Diamos SC16 Talk
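
The contrast between the hand-designed pipeline and "one neural network" can be sketched in code. The following is a minimal, hypothetical example (not Baidu's Deep Speech implementation): a single recurrent network maps spectrogram frames directly to character scores and is trained with the CTC loss, standing in for the separately engineered acoustic, pronunciation, and language-model components. The layer sizes and the 29-character alphabet are assumptions.

```python
# A minimal, hypothetical end-to-end speech model (not Baidu's Deep Speech code):
# one network maps audio spectrogram frames directly to character scores and is
# trained with the CTC loss, replacing the hand-designed pipeline.
import torch
import torch.nn as nn

class EndToEndSpeechModel(nn.Module):
    def __init__(self, n_features=161, n_chars=29, hidden=512):
        super().__init__()
        # Bidirectional recurrent layers over spectrogram frames.
        self.rnn = nn.GRU(n_features, hidden, num_layers=3,
                          batch_first=True, bidirectional=True)
        # Per-frame character scores, including a CTC "blank" symbol at index 0.
        self.classifier = nn.Linear(2 * hidden, n_chars)

    def forward(self, spectrograms):                # (batch, time, n_features)
        out, _ = self.rnn(spectrograms)
        return self.classifier(out)                 # (batch, time, n_chars)

model = EndToEndSpeechModel()
ctc_loss = nn.CTCLoss(blank=0)

# Dummy batch: 8 utterances, 200 spectrogram frames each, 30-character targets.
x = torch.randn(8, 200, 161)
log_probs = model(x).log_softmax(dim=-1).transpose(0, 1)   # (time, batch, chars)
targets = torch.randint(1, 29, (8, 30))
loss = ctc_loss(log_probs, targets,
                input_lengths=torch.full((8,), 200),
                target_lengths=torch.full((8,), 30))
loss.backward()   # gradients for the whole system in one training step
```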

    AND WE'RE NOW GETTING INTO THE RELATIONSHIP BETWEEN DEEP LEARNING AND HIGH PERFORMANCE COMPUTING SYSTEMS.

    GETTING STARTED WITH DEEP LEARNING IN HPC

    What do you need in order to get started solving a new problem that you want to apply deep learning to? There are three simple but high-level factors:

    1. Big Model
    2. Big Data
    3. Big Computer

    Source: Greg Diamos SC16 Talk

    1. Big Model

    First, you need a big model. Your model has to be able to approximate the function that you're trying to represent. For example, the function that maps images to text is complicated, and many parameters are needed to actually represent it. The model must be big in order to capture an extremely intricate function.

    Content Source: Greg Diamos SC16 Talk
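
As a rough illustration of what "big" means here (my own example, not numbers from the talk), parameter count grows quickly with layer width and depth; the sketch below assumes PyTorch.

```python
# A rough illustration (assumed layer sizes, not figures from the talk) of how
# quickly parameter count grows with width and depth, assuming PyTorch.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

small = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))
big = nn.Sequential(*[layer
                      for _ in range(8)
                      for layer in (nn.Linear(4096, 4096), nn.ReLU())])

print(count_parameters(small))   # roughly 0.27 million parameters
print(count_parameters(big))     # roughly 134 million parameters
```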


    2. Big Data

    Deep learning doesn't perform very well with small datasets. This was another reason why people might not have thought deep learning was important before: on smaller datasets, deep learning would easily be beaten by simpler, more explicit methods. But as the datasets get larger, deep learning models start to surpass those other methods.

    Content Source: Greg Diamos SC16 Talk
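
A hedged sketch of how you might observe this effect yourself, assuming scikit-learn; the synthetic dataset, the logistic-regression baseline, and the small MLP are arbitrary stand-ins rather than anything from the talk. The idea is simply to compare a simple model against a neural network as the training set grows.

```python
# Compare a simple linear model against a small neural network as the training
# set grows. The synthetic dataset and both models are arbitrary stand-ins
# (assuming scikit-learn), not results from the talk.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=50_000, n_features=40,
                           n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (200, 2_000, 20_000):
    linear = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=500,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n}: linear={linear.score(X_test, y_test):.3f}, "
          f"net={net.score(X_test, y_test):.3f}")
```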


    3. Big Computer

    And when you have a big network and big data, you need a powerful supercomputer to run it. If you don't have a fast enough computer, you can be stuck waiting years or decades for a result.

    So we come to this need for speed. And this is really the most important point in the talk.

    Content Source: Greg Diamos SC16 Talk
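
The "years or decades" point is just arithmetic: training time is roughly total training FLOPs divided by sustained FLOP/s. A back-of-envelope sketch with assumed numbers (not figures from the talk):

```python
# Back-of-envelope training time: total training FLOPs / sustained FLOP/s.
# All numbers below are assumptions for illustration, not figures from the talk.
flops_per_step = 2e13            # assumed FLOPs for one forward+backward pass
steps = 1e6                      # assumed number of optimization steps
total_flops = flops_per_step * steps

for name, sustained in [("1 CPU core  (~1e10 FLOP/s)", 1e10),
                        ("1 GPU       (~1e13 FLOP/s)", 1e13),
                        ("GPU cluster (~1e15 FLOP/s)", 1e15)]:
    days = total_flops / sustained / 86_400
    print(f"{name}: {days:,.1f} days")
```

With these assumed rates, the same job takes decades on a single CPU core, weeks on one GPU, and a few hours on a cluster, which is exactly the gap the slide is pointing at.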


    WHAT ARE THE OPPORTUNITIES FOR GROWTH FOR HPC IN DEEP LEARNING?

    Opportunities For Growth

    First, we need to figure out a way of scaling up models. Currently, the biggest model that runs at high efficiency uses about 100 processors, which is large from a machine learning perspective but small from an HPC perspective.

    Second, we are far away from the power limit in CMOS. Right now, we're around ten teraflops per processor. I think we can get to 20 petaflops before we hit the power limit. You can make progress on speech, vision, and language problems by making faster computers.


    Content Source: Greg Diamos SC16 Talk
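
Why scaling stalls at that size can be seen with a toy model of data-parallel training (my own sketch with assumed costs, not measurements from the talk): as processors are added, the per-step math shrinks but the gradient synchronization cost does not, so parallel efficiency falls off.

```python
# A toy model of data-parallel scaling with assumed costs (not measurements):
# the per-step math shrinks as processors are added, but the gradient
# all-reduce cost stays roughly flat, so parallel efficiency drops.
step_math_time = 1.0     # assumed seconds of compute per step on 1 processor
allreduce_time = 0.01    # assumed seconds per step to synchronize gradients

for p in (1, 16, 64, 128, 512):
    compute = step_math_time / p                  # math splits across processors
    comm = 0.0 if p == 1 else allreduce_time      # sync cost does not shrink
    efficiency = compute / (compute + comm)
    print(f"{p:4d} processors: {100 * efficiency:5.1f}% parallel efficiency")
```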

    Future Directions of HPC and Deep Learning

    The two big directions that we see are around speech-powered interfaces and self-driving cars. Speech-powered interfaces really involve three different components: recognition, human-level accuracy, and computer generation of speech. Self-driving cars are also highly valuable, as they leverage a lot of vision technology that has already been developed.

    Both areas are significant directions going forward, but there are definitely even more applications beyond these that are close to becoming possible using deep learning.


    Content Source: Greg Diamos SC16 Talk


    About the Speaker: Greg Diamos

    Greg Diamos is a senior researcher at Baidu's Silicon Valley AI Lab (SVAIL). Previously, he was on the research team at NVIDIA. Greg holds a PhD from the Georgia Institute of Technology, where he contributed to the development of the GPU-Ocelot dynamic compiler, which targeted CPUs and GPUs from the same program representation.

    FOR THE FULL RECORDING: WATCH HERE

    LEARN MORE ABOUT THE INTERSECTION OF AI AND HPC
    INSIDEBIGDATA GUIDE