
Accelerating Deep Learning with Storage for GPU Computing

Data-intensive applications from autonomous vehicles to marketplace search engines are generating petabytes of data every day. One of the most interesting fields for big data today is deep learning, a form of artificial intelligence that is changing how we interact with technology at home, in the office and on the move. Deep learning requires high-performing hardware and complex software frameworks that can run on GPUs, but until now there has been little discussion around how best to build an infrastructure that can support these complex storage needs.


Presentation Transcript


  1. Accelerating Deep Learning with Storage for GPU Computing

  2. The explosive growth in GPU computing for AI, machine learning and deep learning has rewritten storage requirements in today’s data center. Last month, Excelero was honored to host a webinar featuring experts from NVIDIA, Barclays and Excelero on accelerating deep learning with storage for GPU computing. A blog article revisiting the webinar and highlighting the essentials was long overdue.

  3. GPU storage is one of the most pressing challenges faced by AI/ML/DL deployments today. Traditional controller-based storage usually isn’t suitable, and each expert is a veteran of taking these systems from concept to production while navigating all manner of “gotchas” along the way. Judging by the highly informed questions from the large number of attendees, many of whom have already deployed multiple GPU servers, the challenge is a significant one.

  4. Matching GPU and storage

  5. NVIDIA’s senior technical marketing engineer Jacci Cenci looked holistically at three data center challenges affecting data scientists: 1) dramatically larger data sets that no longer fit into system memory; 2) slow CPU processing; and 3) complex installation and management.

  6. As processing moves off the CPU toward GPUs, teams still find they need to adjust their big data storage systems, since the time spent loading application data starts to strain the entire application’s performance (a minimal data-loading sketch follows the transcript below). She reviewed NVIDIA’s work on MAGNUM IO, whose accelerated compute, networking and storage approach can speed up AI and predictive analytics workloads so that users gain faster time to insight and can iterate more often on models, key steps to creating more accurate predictions. You can also read about Excelero’s big data storage solutions.

  7. The architectural advantage of NVMesh, he explained, derives in part from its use as an abstraction layer on top of individual NVMe drives, through which customers can create logical volumes, allowing the entire capacity of the NVMe drives to be accessed enterprise-wide if customers so choose (a conceptual striping sketch follows the transcript below). The results affirm what IT leaders in the trenches already know: software-defined, scale-out storage is the only way to accelerate these highly demanding workloads.

  8. Judging from the astute questions and the patience of attendees in staying well over the allotted webinar time to ask them, data center leaders know that effective storage for deep learning, machine learning and AI on GPU systems requires new thinking and approaches. Excelero is delighted to share its longstanding innovation in this domain – and is offering more webinars in the coming weeks. Stay tuned!
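
To make the data-loading strain described in item 6 more concrete, here is a minimal sketch of a training loop that keeps the GPU fed while batches are prepared and transferred. It assumes PyTorch (the webinar did not prescribe a framework), and the synthetic dataset, batch size, model and worker count are illustrative placeholders rather than recommendations.

```python
# Minimal sketch, assuming a PyTorch training loop; all sizes are illustrative.
# Two knobs address the data-loading strain directly: parallel loader workers
# and pinned-memory buffers that allow asynchronous host-to-GPU copies.
import torch
from torch.utils.data import DataLoader, TensorDataset


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Synthetic stand-in for a training set; in practice batches would stream
    # from fast shared NVMe storage rather than live entirely in host memory.
    features = torch.randn(2_000, 3, 64, 64)
    labels = torch.randint(0, 1000, (2_000,))
    loader = DataLoader(
        TensorDataset(features, labels),
        batch_size=256,
        shuffle=True,
        num_workers=4,    # parallel CPU workers prepare batches ahead of the GPU
        pin_memory=True,  # page-locked buffers enable asynchronous copies
    )

    model = torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(3 * 64 * 64, 1000),
    ).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for x, y in loader:
        # non_blocking=True overlaps the copy with GPU compute when memory is pinned
        x = x.to(device, non_blocking=True)
        y = y.to(device, non_blocking=True)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":  # guard needed when loader workers are spawned as processes
    main()
```

With pinned memory and multiple workers, batch preparation and the host-to-GPU copy proceed while the previous batch is still training, which is the kind of overlap that keeps faster storage from going to waste.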

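To illustrate the logical-volume idea from item 7, the sketch below stripes reads across several NVMe block devices in a round-robin pattern, the classic way a storage layer presents many drives as one address space. It is purely conceptual: it is not NVMesh code and does not use Excelero’s API, and the class name, device paths and 128 KiB stripe size are hypothetical.

```python
# Conceptual sketch of a striped logical volume, NOT NVMesh or any Excelero API.
# The class name, device paths and stripe size below are hypothetical.
import os


class StripedLogicalVolume:
    """Presents several block devices as one logical address space (RAID-0 style)."""

    def __init__(self, device_paths, stripe_bytes=128 * 1024):
        self.fds = [os.open(p, os.O_RDONLY) for p in device_paths]
        self.stripe = stripe_bytes

    def read(self, offset, length):
        """Read `length` bytes starting at logical `offset`, one stripe chunk at a time."""
        out = bytearray()
        while length > 0:
            stripe_index, within = divmod(offset, self.stripe)
            drive = stripe_index % len(self.fds)                 # round-robin across drives
            physical = (stripe_index // len(self.fds)) * self.stripe + within
            chunk = min(self.stripe - within, length)
            out += os.pread(self.fds[drive], chunk, physical)
            offset += chunk
            length -= chunk
        return bytes(out)

    def close(self):
        for fd in self.fds:
            os.close(fd)


# Hypothetical usage (requires real devices and root privileges):
# vol = StripedLogicalVolume(["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"])
# data = vol.read(offset=0, length=1 << 20)
# vol.close()
```

The real software operates across the network and below the file system; the sketch only captures the address-mapping idea behind presenting pooled NVMe capacity as logical volumes.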