Interconnect Your Future

Gilad Shainer, VP of Marketing

Dec 2013

Leading Supplier of End-to-End Interconnect Solutions

Comprehensive End-to-End Software Accelerators and Management

  • Messaging: MXM (Mellanox Messaging Acceleration), FCA (Fabric Collectives Acceleration)
  • Management: UFM (Unified Fabric Management)
  • Storage and Data: VSA (Storage Accelerator for iSCSI), UDA (Unstructured Data Accelerator)

Comprehensive End-to-End InfiniBand and Ethernet Portfolio

  • ICs
  • Adapter Cards
  • Switches/Gateways
  • Host/Fabric Software
  • Metro / WAN
  • Cables/Modules

Mellanox InfiniBand Paves the Road to Exascale Computing

Accelerating Half of the World’s Petascale Systems

Mellanox Connected Petascale System Examples

TOP500 InfiniBand Accelerated Petascale Capable Machines
  • Mellanox FDR InfiniBand systems tripled from Nov ’12 to Nov ’13
    • Accelerating 63% of Petaflop-capable systems (12 of 19)
FDR InfiniBand Delivers Highest Return on Investment

[Four application benchmark charts; higher is better in each.]

Source: HPC Advisory Council

Connect-IB

Architectural Foundation for Exascale Computing

Mellanox Connect-IB: The World’s Fastest Adapter
  • The 7th generation of Mellanox interconnect adapters
  • World’s first 100Gb/s interconnect adapter (dual-port FDR 56Gb/s InfiniBand)
  • Delivers 137 million messages per second, 4X higher than the competition
  • Supports the new InfiniBand scalable transport, Dynamically Connected
Connect-IB Provides Highest Interconnect Throughput

[Bar charts, higher is better: Connect-IB FDR (dual port) vs. ConnectX-3 FDR, ConnectX-2 QDR, and competing InfiniBand adapters.]

Source: Prof. DK Panda

Gain Your Performance Leadership With Connect-IB Adapters

Connect-IB Delivers Highest Application Performance

200% Higher Performance than the Competition, with Only 32 Nodes

Performance Gap Increases with Cluster Size

Mellanox ScalableHPC

Accelerations for Parallel Programs

Mellanox ScalableHPC Accelerates Parallel Applications

[Diagram: MPI, SHMEM, and PGAS programming models; processes P1–P3 communicate over per-process memory (MPI) or logical shared memory (SHMEM/PGAS).]

  • FCA
    • Topology-aware collective optimization
    • Hardware multicast
    • Separate virtual fabric for collectives
    • CORE-Direct hardware offload
  • MXM
    • Reliable messaging optimized for Mellanox HCAs
    • Hybrid transport mechanism
    • Efficient memory registration
    • Receive-side tag matching

InfiniBand Verbs API

Nonblocking Alltoall (Overlap-Wait) Benchmark

CORE-Direct offload lets the nonblocking Alltoall benchmark keep the CPU almost 100% available for computation.
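What CORE-Direct enables is the standard MPI-3 nonblocking collective idiom: post MPI_Ialltoall, compute while the HCA progresses the exchange, then MPI_Wait. As a library-free sketch of the same overlap principle, a background thread below stands in for the offloaded collective; all names and the "exchange" itself are illustrative, not Mellanox APIs:

```python
import threading

def offloaded_alltoall(data, done):
    # Stand-in for a collective offloaded to the HCA: it completes
    # without consuming the main thread's compute time.
    done["result"] = sorted(data)  # pretend data exchange
    done["event"].set()

def compute_with_overlap(data):
    done = {"event": threading.Event()}
    # 1. Post the nonblocking collective (analogous to MPI_Ialltoall)
    worker = threading.Thread(target=offloaded_alltoall, args=(data, done))
    worker.start()
    # 2. Keep computing while the exchange is in flight
    local = sum(x * x for x in data)
    # 3. Wait for completion (analogous to MPI_Wait)
    done["event"].wait()
    worker.join()
    return local, done["result"]

local, exchanged = compute_with_overlap([3, 1, 2])
print(local, exchanged)  # 14 [1, 2, 3]
```

The benchmark above measures how much of step 2 the CPU actually gets back; with the collective offloaded to the adapter, nearly all of it.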

GPUDirect 1.0

[Diagrams: transmit and receive data paths (CPU, chipset, system memory, GPU, GPU memory, InfiniBand) without GPUDirect vs. with GPUDirect 1.0. Without GPUDirect, data staged in system memory is copied between the GPU driver's buffer and the InfiniBand driver's buffer (steps 1 and 2); GPUDirect 1.0 removes that extra host-side copy.]

GPUDirect RDMA

[Diagrams: transmit and receive data paths with GPUDirect 1.0 (data still staged through system memory, step 1) vs. GPUDirect RDMA (the HCA reads and writes GPU memory directly, bypassing system memory).]
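The progression the two diagram slides show can be summarized by counting transmit-path steps per mode. A schematic model only (the step names are illustrative labels, not an API, and this is not measured data):

```python
# Schematic transmit-path steps for each mode (simplified).
TRANSMIT_PATH = {
    # Data staged in system memory, plus a copy between the GPU
    # driver's buffer and the InfiniBand driver's buffer.
    "no_gpudirect":   ["gpu_mem->sys_mem", "cuda_buf->ib_buf", "sys_mem->hca"],
    # GPUDirect 1.0: the two drivers share one pinned host buffer.
    "gpudirect_1.0":  ["gpu_mem->sys_mem", "sys_mem->hca"],
    # GPUDirect RDMA: the HCA reads GPU memory directly over PCIe.
    "gpudirect_rdma": ["gpu_mem->hca"],
}

def steps(mode):
    """Number of data-movement steps before the wire."""
    return len(TRANSMIT_PATH[mode])

for mode in ("no_gpudirect", "gpudirect_1.0", "gpudirect_rdma"):
    print(mode, steps(mode))  # 3, then 2, then 1
```

Each generation removes one stage from the path, which is where the latency and bandwidth gains on the next slide come from.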


Performance of MVAPICH2 with GPUDirect RDMA

[Charts: GPU-GPU internode MPI latency (lower is better) and bandwidth (higher is better).]
  • 67% lower latency (5.49 µs)
  • 5X increase in throughput

Source: Prof. DK Panda

Performance of MVAPICH2 with GPUDirect RDMA

[Chart: execution time of the HSG (Heisenberg Spin Glass) application vs. problem size, on 2 GPU nodes.]

Source: Prof. DK Panda

Technology Roadmap – One-Generation Lead over the Competition

[Roadmap chart, 2000–2020: Mellanox interconnect speeds of 20Gb/s, 40Gb/s, 56Gb/s, 100Gb/s, and 200Gb/s across the Terascale, Petascale, and Exascale eras. TOP500 milestones: Virginia Tech (Apple), 3rd in 2003; “Roadrunner”, 1st; Mellanox Connected mega supercomputers.]

Paving The Road for 100Gb/s and Beyond

Recent Acquisitions are Part of Mellanox’s Strategy to Make 100Gb/s Deployments as Easy as 10Gb/s

Copper (Passive, Active)

Optical Cables (VCSEL)

Silicon Photonics

The Only Provider of End-to-End 40/56Gb/s Solutions

Comprehensive End-to-End InfiniBand and Ethernet Portfolio

  • ICs
  • Adapter Cards
  • Switches/Gateways
  • Host/Fabric Software
  • Metro / WAN
  • Cables/Modules

From Data Center to Metro and WAN

x86-, ARM-, and Power-based Compute and Storage Platforms

The Interconnect Provider For 10Gb/s and Beyond